Gravitational wave (GW) emission from the inspiral of binary compact objects offers a one-step measurement of the luminosity distance to the event, which is essential for the measurement of the Hubble constant, H_0, that characterizes the expansion rate of the Universe. However, unlike binary neutron stars, the inspiral of binary black holes is not expected to be accompanied by electromagnetic radiation and a subsequent determination of its redshift. Consequently, independent redshift measurements of such GW events are necessary to measure H_0. In this study, we present a novel Bayesian approach to infer H_0 from the cross-correlation between galaxies with known redshifts and individual binary black hole merger events. We demonstrate the efficacy of our method with 250 simulated GW events distributed within 1 Gpc in colored Gaussian noise of Advanced LIGO and Advanced Virgo detectors operating at O4 sensitivity. We show that such measurements can constrain the Hubble constant with a precision of ≲ 15% (90% highest density interval). We highlight the potential improvements that need to be accounted for in further studies before the method can be applied to real data.

Bayesian framework to infer the Hubble constant from cross-correlation of individual gravitational wave events with galaxies

Tathagata Ghosh, Surhud More, Sayantani Bera, Sukanta Bose 0000-0002-4151-1347

January 14, 2024
============================================================================================================================

§ INTRODUCTION

The discovery of gravitational wave (GW) events opened an independent way to probe the expansion history of the Universe. The very first multi-messenger approach to infer the Hubble constant was possible due to the detection of GWs from the binary neutron star (BNS) merger GW170817 <cit.>, along with its electromagnetic (EM) counterpart <cit.>. The independent measurement of the luminosity distance of the BNS merger from the GW data and of its redshift from the EM data allows an inference of the Hubble constant, H_0=70^+12.0_-8.0 km s^-1 Mpc^-1 <cit.>. However, most of the GW observations by the detectors in the LIGO-Virgo-KAGRA collaboration <cit.> are primarily binary black hole (BBH) mergers <cit.>, which are not accompanied by EM counterparts. Such events are aptly called dark sirens. Though the unique identification of the host galaxy for such BBHs is not currently possible due to large sky localization errors, there are alternative approaches to infer the Hubble constant from such dark sirens. The idea initially proposed by Schutz <cit.> involves the use of galaxy catalogs for the identification of galaxies clustered within the sky localization of the GW events, which would provide noisy but useful information on the redshift of the events. His approach has been implemented as a statistical host identification method in a Bayesian framework (also sometimes termed the galaxy catalog method) by Refs. <cit.>. In another approach, called the spectral siren method, the source redshift can be inferred by invoking the universality of the source frame mass distribution motivated from astrophysics <cit.>. Both these methods, i.e., the galaxy catalog method and the spectral siren method, have been applied to 47 BBHs from GWTC-3 <cit.>, in addition to the bright siren GW170817 and its EM counterpart <cit.>, to constrain the Hubble constant to H_0=68^+8_-6 km s^-1 Mpc^-1 and H_0=68^+12_-8 km s^-1 Mpc^-1 <cit.>, respectively[The errors denote 68% credible intervals in this case.]. There are also ongoing efforts <cit.> to unify the spectral and galaxy catalog methods.
Furthermore, if at least one of the components of the merger is a neutron star, one can use the measurement of its tidal deformability and knowledge of the neutron star equation of state to infer the source redshift <cit.>, and hence infer H_0 by combining the redshift information with the measured luminosity distance of the merger event. The current state-of-the-art methods <cit.> adopted to estimate H_0 in the recent LIGO-Virgo-KAGRA (LVK) cosmology analysis <cit.> are dominated by the assumptions related to the nature of the GW source population <cit.>. However, it is important to note that the two methods mentioned above do not utilize the information on the spatial clustering of galaxies explicitly. Both the GW source population and the galaxy population are tracers of the same underlying large-scale structure in the Universe. So, GW sources share the same large-scale structure as galaxies at their redshifts and, hence, are correlated with each other. Therefore, one can expect that the cross-correlation function between galaxies with known redshift and GW sources would be non-zero at the true redshift of the GW sources and, thus, in conjunction with the luminosity distances of the GW events, can be used to constrain cosmological parameters such as the Hubble constant <cit.>. This method, which can be termed cross-correlation, does not directly rely on assumptions related to the intrinsic population parameters that describe the GW sources. In this work, we demonstrate a novel Bayesian framework to infer H_0 from individual GW events by utilizing the cross-correlation between GW events and the galaxy catalog. This method explores the volume uncertainty region of individual GW events while computing the cross-correlation with galaxies to estimate the Hubble constant.

The paper is structured as follows. In Sec. <ref>, we describe the methodology of using galaxy clustering information to estimate H_0 from individual GW events. After that, we elaborate on the simulations performed to demonstrate the methodology in Sec. <ref>. In Sec. <ref>, we show the efficacy of our method in recovering H_0 and examine how various parameters impact the precision of the resulting constraints. Finally, we conclude our paper and discuss future directions in Sec. <ref>.

§ METHODOLOGY

In the standard model of cosmology (also known as the ΛCDM model), the matter distribution in the Universe forms a cosmic web of large-scale structure, with clustering properties determined by the values of the cosmological parameters. Dark matter halos form at the peaks of the matter density distribution. The clustering properties of these halos on large scales reflect the clustering properties of the matter distribution, modified with a multiplicative factor b^2, where b is the linear bias that corresponds to the ratio of the halo overdensity (δ_h) to the matter overdensity (δ_m),

δ_h(x, M) ≡ n_h(x, M)/n̅_h - 1 = b(M) δ_m(x).

Here, n_h(x, M) denotes the number of halos at a given position x with halo mass M, and n̅_h is the average number of halos <cit.>. The bias of the dark matter halos increases with increasing halo mass <cit.>.
More luminous galaxies are expected to form within more massive dark matter halos <cit.>, and on large scales, they inherit the bias of the dark matter halos they inhabit, such that

b_g = ∫ dM P_h(M) b_h(M),

where P_h(M) specifies the distribution of halo masses to which the galaxies belong <cit.> (for a detailed review of halo and galaxy bias, one may refer to <cit.>). In this section, we describe the methodology to utilize the cross-correlation between individual GW events and galaxy catalogs to constrain the Hubble constant. The spatial clustering between galaxies and GW events, separated by the comoving distance r, can be characterized by the 3D cross-correlation function, ξ_gw,g(r), given by

ξ_gw,g(r) ≡ ⟨δ_gw(x) δ_g(x + r)⟩,

where x represents the location of GW sources; δ_gw and δ_g denote the overdensity of the GW source distribution and the galaxy distribution, respectively. As the Universe is assumed to be isotropic, the cross-correlation function depends only on the scalar distance r = |r|. Since GW events and galaxies are tracers of the same underlying matter distribution, both the GW overdensity and the galaxy overdensity are related to the matter overdensity δ_m on large scales as

δ_g = b_g δ_m,   δ_gw = b_gw δ_m.

Here, the proportionality constants b_g and b_gw are the linear biases of the population of galaxies and GW sources with respect to the matter distribution, respectively. A bias of unity would imply that the observed population follows the underlying matter distribution. In general, the bias parameters depend on different properties of the sources. The dependence of the galaxy bias on redshift can be estimated from the measurement of the auto-correlation functions of galaxies in a given redshift slice with themselves. The redshift-dependent GW bias needs to be assigned a parametric form and be marginalized over those parameters (see Ref. <cit.>). The cross-correlation function ξ_gw,g(r), defined in Eq. <ref>, can then be written as

ξ_gw,g(r) = b_gw b_g ∫ (dk k^2/2π^2) P_m(k) j_0(kr).

Here, j_0 is the zeroth-order spherical Bessel function and P_m(k) is the non-linear matter power spectrum <cit.>, which we compute under the assumption of the standard ΛCDM model of cosmology. In a flat Universe, the luminosity distance d_L is related to the redshift z as follows:

d_L = c(1+z)/H_0 ∫_0^z dz^'/E(z^') = d_c(1+z).

Here, d_c denotes the comoving distance, and H(z) is the Hubble parameter, expressed in terms of the Hubble constant H_0 and the matter density parameter Ω_m as:

H(z) = H_0 E(z) = H_0 √(Ω_m(1+z)^3 + (1-Ω_m)).

Since the correlation function ξ_gw,g(r) quantifies the excess probability over a random Poisson distribution of finding a galaxy in a volume element dV separated by a comoving distance r from a GW event located at position x, we have

n_g(x + r) dV = n̅_V [1 + ξ_gw,g(r)] dV,

where n̅_V is the average number density of galaxies in comoving volume. So, the galaxy overdensity can be calculated by integrating Eq. (<ref>) over some finite volume if we have the theoretical prediction of the non-linear matter power spectrum P_m(k). Consider a GW event located at its true position, denoted by redshift z^gw, right ascension α^gw and declination δ^gw. The modeled galaxy overdensity, δ_g^mod, in an angle θ_max around the maximum a posteriori probability (MAP) location (α^gw_m, δ^gw_m) and within a redshift bin centered around z, can be computed by carrying out the following volume integral (see Fig.
<ref> for the geometry),

δ_g^mod(z | z^gw(d_L^gw, H_0), α^gw, δ^gw) = [∫_0^θ_max ∫_{z-Δz/2}^{z+Δz/2} n̅_V(z^') ξ_gw,g(|r|) 2π sinθ d_c^'2 J(d_c^'/z^') dθ dz^'] / [∫_0^θ_max ∫_{z-Δz/2}^{z+Δz/2} n̅_V(z^') 2π sinθ d_c^'2 J(d_c^'/z^') dθ dz^'].

The true GW event redshift can be inferred from the true luminosity distance by inverting Eq. (<ref>). The quantity d_c^' in the above equation denotes the comoving distance corresponding to a redshift z^' within the redshift bin enclosed by the vertical planes, and J(d_c^'/z^') corresponds to the Jacobian of the coordinate transformation from comoving distance to redshift, which can be written as (see Eq. (<ref>))

J(d_c/z) = ∂d_c/∂z = c/H(z).

The modeled galaxy overdensity thus depends on the choice of H_0. The observed galaxy overdensity within the redshift bin centered around z can be measured within the angular distance θ_max around the line-of-sight towards the MAP position (α^gw_m, δ^gw_m) as

δ_g^obs(z) = n(z | δ^gw_m, α^gw_m)/n̅(z) - 1,

where n(z | δ^gw_m, α^gw_m) denotes the number of galaxies within the redshift bin. For a galaxy survey with uniform depth, the average number of galaxies n̅(z) expected within the redshift bin can be calculated by averaging the observed number of galaxies within the same angular radius θ_max around random lines-of-sight. The observed overdensity δ_g^obs can be calculated in several different redshift bins of galaxies. We call this the galaxy data vector d_g^obs. This galaxy data vector can be compared with the model vector of δ_g^mod for the same redshift bins of galaxies, denoted by d_g^mod, to construct a galaxy overdensity likelihood. Note that the model vector is conditioned on the true location (both on the sky and in redshift) of a particular GW event. The comparison of the two should provide a constraint on the Hubble constant, H_0. The true position on which the modeled galaxy overdensity is conditioned has its own uncertainty, as the sky localization of the GW event is not precisely known, but only available as a posterior distribution over d_gw = {d_L^gw, α^gw, δ^gw} inferred from the gravitational wave strain data d_strain. The posterior for the Hubble constant, in that case, is given by

p(H_0 | d_strain, d_g^obs) = ∫ p(H_0, d_gw | d_strain, d_g^obs) dd_gw
∝ ∫ L(d_strain, d_g^obs | H_0, d_gw) P(H_0, d_gw) dd_gw
∝ ∫ L(d_g^obs | H_0, d_gw) L(d_strain | H_0, d_gw) P(H_0, d_gw) dd_gw,

where we have applied Bayes' theorem and assumed independence of the galaxy overdensity and the strain data from gravitational wave events. The integration in the last equation is performed over the localization uncertainty of the GW event. The first likelihood in Eq. (<ref>) corresponds to the galaxy overdensity likelihood, and we assume it to be a Gaussian such that

ℒ(d_g^obs | H_0, d_gw) ∝ exp[-1/2 (d_g^obs - d_g^mod)^T C^-1 (d_g^obs - d_g^mod)].

Here, C denotes the covariance of the overdensity field, and we assume it to be diagonal, with individual elements σ^2_δ corresponding to the variance in the observed galaxy overdensity in a given redshift bin within the angle θ_max. The procedure for calculating σ_δ using random lines-of-sight is described in Sec. <ref>. The second likelihood term in Eq. (<ref>) corresponds to the strain data. Often, the posterior distributions over d_gw are computed assuming certain default priors. Therefore, the likelihood can be obtained by taking the ratio of the posteriors for the source parameters in d_gw and the priors that are used to infer these parameters.
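To make these ingredients concrete, the following minimal Python sketch (our illustration, not the authors' code; all function names and numerical choices are ours) implements the flat ΛCDM distance-redshift relation, its numerical inversion for a trial value of H_0, and the Gaussian overdensity log-likelihood with a diagonal covariance:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_KM_S = 299792.458   # speed of light in km/s
OMEGA_M = 0.307       # matter density parameter, fixed as in the text

def E(z, om=OMEGA_M):
    # Dimensionless Hubble rate E(z) = H(z)/H_0 for a flat Universe
    return np.sqrt(om * (1.0 + z)**3 + (1.0 - om))

def luminosity_distance(z, H0):
    # d_L in Mpc: d_L = (1+z) * (c/H_0) * int_0^z dz'/E(z')
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1.0 + z) * (C_KM_S / H0) * integral

def redshift_from_dL(dL, H0):
    # Numerically invert the distance-redshift relation for a trial H_0
    return brentq(lambda z: luminosity_distance(z, H0) - dL, 1e-6, 5.0)

def log_likelihood_overdensity(d_obs, d_mod, sigma):
    # Gaussian overdensity likelihood with diagonal covariance C = diag(sigma^2)
    r = (np.asarray(d_obs) - np.asarray(d_mod)) / np.asarray(sigma)
    return -0.5 * np.sum(r**2)

# A GW event at d_L = 1000 Mpc maps to different redshifts depending on the
# trial H_0 -- the dependence the method exploits:
for H0 in (60.0, 67.77, 120.0):
    print(H0, redshift_from_dL(1000.0, H0))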
Taking this ratio allows us to separately impose our own priors p(H_0, d_gw) in Eq. (<ref>). Since GW events are independent, the final joint posterior of the Hubble constant can be obtained by combining the independent likelihoods of all the detected sources,

P(H_0 | {d_strain}, {d_g^obs}) ∝ P(H_0) ∏_i L(d_strain_i, d_g_i^obs | H_0)
∝ P(H_0) ∏_i ∫ L(d_strain_i, d_g_i^obs | H_0, d_gw) P(d_gw) dd_gw,

where the index i denotes individual GW events and d_g_i^obs is the galaxy overdensity data vector corresponding to the i^th GW event. When constructing the posterior in Eq. (<ref>), we take flat ΛCDM to be the true cosmological model, with H_0 as a free parameter and the matter density Ω_m kept fixed at 0.307. In the flat Universe, the relation between the luminosity distance d_L and the redshift z in Eq. (<ref>) is used to convert the measured luminosity distance of each GW event to its redshift for a given H_0, whenever required, while performing the integral over d_gw in Eq. (<ref>).

§ SIMULATION

We test the framework discussed in Sec. <ref> with a mock galaxy catalog and a mock GW catalog in order to assess how well the expansion rate can be recovered given realistic localization errors in the positions of GW events. We consider dark matter halos from the Big MultiDark Planck (BigMDPL) cosmological N-body simulation <cit.> from the CosmoSim database [Publicly available at <https://www.cosmosim.org/>] to construct the galaxy catalog. The BigMDPL simulation follows the evolution of the matter distribution in a flat ΛCDM cosmology with the following parameters: dimensionless Hubble parameter h = H_0/(100 km s^-1 Mpc^-1) = 0.6777, matter density parameter Ω_m = 0.307, amplitude of density fluctuations characterized by σ_8 = 0.823, and power spectrum slope of initial density fluctuations n_s = 0.96. The simulation is set up as 3840^3 collisionless particles in a cubic box with comoving side length 2.5 h^-1 Gpc with periodic boundary conditions, and corresponds to a mass resolution of 2.359×10^10 h^-1 M_⊙. We only consider well-resolved dark matter halos with masses ≥ M_th = 10^12 h^-1 M_⊙. For simplicity, we assume that all galaxies are central galaxies and thus place each galaxy at the center of these dark matter halos. We place an observer at the center of the box and compute the sky position of each of these mock galaxies, which lie within the sphere of radius 1.25 h^-1 Gpc around this center. We utilize the cosmological parameters of the simulation to determine the redshift of each galaxy from its comoving distance from the central observer. We use this mock galaxy catalog for further analysis. For simplicity, we have ignored any redshift space distortion effects. Once we have the mock galaxy catalog, we randomly subselect galaxies to be the hosts of GW events without any regard to the properties of these halos or, equivalently, the properties of the galaxies in the catalog. The true redshifts and sky positions of the GW events are determined from their respective host galaxies in the cosmological box. We use the same cosmological parameters of the simulation box in order to compute the luminosity distances of the GW events. The galaxy catalog we use extends up to a comoving distance of 1.25 h^-1 Gpc, given the size of the box, before we run into incompleteness issues. This comoving distance corresponds to redshift z ∼ 0.46, assuming the true cosmology. In this work, however, we consider a uniform prior over H_0 between 20 and 120 km s^-1 Mpc^-1.
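As a quick consistency check of the numbers quoted above (our own sketch, using only the simulation cosmology stated in the text), one can numerically invert the comoving distance-redshift relation to verify that the box radius of 1.25 h^-1 Gpc indeed corresponds to z ≈ 0.46:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_KM_S, H0, OM = 299792.458, 67.77, 0.307  # simulation cosmology
E = lambda z: np.sqrt(OM * (1.0 + z)**3 + (1.0 - OM))

def comoving_distance(z):
    # d_c in Mpc for the flat simulation cosmology
    return (C_KM_S / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

d_box = 1.25 / 0.6777 * 1000.0  # box radius, converted from h^-1 Gpc to Mpc
z_box = brentq(lambda z: comoving_distance(z) - d_box, 1e-6, 2.0)
print(z_box)  # ~0.46-0.47, consistent with the value quoted in the text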
As a consequence of this wide prior, a higher value of H_0 can lead to an increase in the redshift of a GW event at a given luminosity distance. Thus, a GW event can have a redshift up to z ∼ 0.36 when H_0 = 120 km s^-1 Mpc^-1 and yet be detectable in the box. Therefore, we restrict the injection of GW events to within a luminosity distance of 1 Gpc to avoid any biases creeping in due to the above selection effects. The distribution of the source-frame masses of the BBHs follows the Power Law + Peak model <cit.> between a minimum mass m_min = 5 M_⊙ and a maximum mass m_max = 60 M_⊙. The injected values of the other mass hyperparameters (see Appendix B of Ref. <cit.> for the descriptions of the hyperparameters) are λ_peak = 0.06, α = 3.5, δ_m = 4, μ_m = 34, σ_m = 5, β = 1.1, which are consistent with previous observations <cit.>. However, note that the detected masses are redshifted and are expected to differ from the source-frame mass, m, as m_z = m(1+z), where z is the redshift of the event. The other BBH parameters (see Table 1 of Ref. <cit.> for the list of parameters) are randomly chosen from the default priors of the parameter-estimation software (see Table 1 of Ref. <cit.> for the default priors). The simulated BBH signal is added to stationary Gaussian noise, colored with the O4 design sensitivity <cit.> of the LIGO and Virgo detectors. We analyze the strain signal between 20 Hz and 2048 Hz to infer the source parameters. We consider the same non-precessing waveform model <cit.> for both the signal injection and recovery[Note that we consider this simplified waveform model, as we would like to demonstrate a proof-of-concept for the cross-correlation based framework that we present in this paper. Analysis of real events will involve upgraded waveform models.]. We have used a nested sampler <cit.>, as implemented in the inference software of Ref. <cit.>, to infer the posterior distribution of all the parameters given the waveform. We consider uniform priors over chirp mass and mass ratio. This choice of priors for the mass parameters is convenient for efficient sampling. We employ a cosmology-independent d_L^2 prior on the luminosity distance of the source. The priors over the rest of the extrinsic parameters are the default priors implemented in the same software (refer to Table 1 in Ref. <cit.> for the default priors). The parameters thus inferred are utilized to constrain H_0 by cross-correlating the localization of these individual events with galaxies in the mock catalog. We are primarily interested in the extrinsic source parameters, such as d_L^gw, α^gw and δ^gw, in this work. So, we compute the GW likelihood ℒ_gw by marginalizing over the rest of the intrinsic and extrinsic parameters. The galaxy overdensity likelihood ℒ_g is also calculated for different sky coordinates within the localization of the GW events, as a function of redshift, in order to perform the integration of Eq. (<ref>). The galaxy overdensity likelihood includes the computation of the observed and modeled galaxy overdensity fields as a function of redshift. The modeled galaxy overdensity is calculated from the cross-correlation function ξ (see Eq. (<ref>)), which depends on the separation (comoving distance in h^-1 units) between the GW event and the redshift where the observed galaxy overdensity is measured. We have used the publicly available code [<http://surhudm.github.io/aum/index.html>] to compute ξ <cit.> from the non-linear matter power spectrum <cit.>, assuming the flat ΛCDM cosmology. To study the effect of the cross-correlation length-scale on the H_0 estimation, we assume 4 different values of the angular radius θ_max (elaborated in Sec.
<ref>) along the line-of-sight to calculate the observed and modeled galaxy overdensity fields. In order to compute the likelihood of observing the galaxy overdensity given the modeled overdensity, we quantify the variance of the galaxy overdensity field (denoted as σ^2_δ in Eq. (<ref>)) around random lines-of-sight in each redshift bin. The variance of the overdensity depends upon the specific value of θ_max over which we average. We use 10^6 random lines-of-sight within the mock galaxy catalog used in this work. The probability distributions of δ/σ (omitting superscripts and subscripts) for 5 representative redshift bins of width Δz = 0.01, up to z = 0.36, are shown in Fig. <ref> for θ_max = 0.03 radian. The corresponding standard deviation of the overdensity field is shown as a function of redshift as solid lines in Fig. <ref>, where the colors signify different values of θ_max. As expected, σ decreases with redshift for any θ_max due to the increase in the cosmological volume probed within the angular region per redshift interval. Similarly, a higher value of θ_max also results in a smaller standard deviation of the overdensity at any given redshift. Notably, the redshift width of Δz = 0.01 for calculating the galaxy overdensity has been carefully selected to ensure a sufficient number of galaxies in each redshift bin. We use this variance in order to evaluate the likelihood of the galaxy overdensity given the modeled overdensity and a true position of the GW event, which, among other parameters, depends upon the Hubble constant. In order to infer the posterior distribution of the Hubble constant given the data, we assume a uniform prior H_0 ∈ U[20, 120] km s^-1 Mpc^-1. We finally infer the joint posterior for H_0 for multiple events, as given by Eq. (<ref>).

§ RESULTS

We apply our method, as described in Sec. <ref>, to 3 different mock GW catalogs for 3 different choices of θ_max = {0.01, 0.02, 0.03} radian to study the statistical robustness. The GW catalogs are random realizations of the same population, as discussed in Sec. <ref>. Each GW catalog consists of 250 simulated GW events that are detected with a network SNR ρ_net ≥ 12 in the three LIGO-Virgo detectors with O4 design sensitivity <cit.>. We also study H_0 posteriors for 2 subsets of 150 and 200 GW events from those 3 sets of 250 detected GW events. The threshold on network SNR in this work is chosen based on the earlier LVK cosmology papers <cit.>. Sources with low network SNR would not contribute significantly to the improvement in the inference of the Hubble constant, due to their poor sky localization and larger uncertainty in distance measurement. Fig. <ref> shows the H_0 posteriors for the 3 different sets of GW events with varying numbers of detected events. Each row of Fig. <ref> corresponds to a particular value of θ_max, indicated at the right of the figure, for different numbers of GW events. The columns of Fig. <ref> correspond to H_0 posteriors for a fixed number of GW events, indicated at the top of the panel, with varying θ_max. The black vertical dashed lines in Fig. <ref> correspond to the injected value of H_0 = 67.77 km s^-1 Mpc^-1. We also summarize the uncertainty, corresponding to the 90% highest density interval in constraining H_0, in Fig. <ref> for different numbers of detected GW events. The different panels correspond to varying numbers of detected events N_GW. In each panel, the error bars of the H_0 uncertainty for the 3 different sets are depicted with 3 different colors.
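The combination of the individual-event posteriors summarized in these figures follows the joint posterior expression above; a minimal sketch of this final step (ours, with Gaussian pseudo-likelihoods standing in for the real per-event likelihoods) is:

import numpy as np

H0_grid = np.linspace(20.0, 120.0, 1001)  # km/s/Mpc, matching the U[20,120] prior
dH0 = H0_grid[1] - H0_grid[0]
prior = np.ones_like(H0_grid)             # flat prior

def joint_posterior(event_likelihoods):
    # Multiply strictly positive per-event H0 likelihoods (rows) in log space
    log_post = np.log(prior) + np.sum(np.log(event_likelihoods), axis=0)
    post = np.exp(log_post - log_post.max())  # avoid numerical underflow
    return post / (post.sum() * dH0)          # normalize on the grid

def hdi_90(post):
    # 90% highest-density interval of a gridded, normalized posterior
    order = np.argsort(post)[::-1]            # grid points, most probable first
    mass = np.cumsum(post[order]) * dH0
    sel = order[: np.searchsorted(mass, 0.90) + 1]
    return H0_grid[sel].min(), H0_grid[sel].max()

# Toy usage: 250 broad Gaussian pseudo-likelihoods scattered around the
# injected value; the joint posterior narrows roughly as 1/sqrt(N).
rng = np.random.default_rng(1)
centers = rng.normal(67.77, 15.0, size=250)
likes = np.exp(-0.5 * ((H0_grid - centers[:, None]) / 15.0)**2)
print(hdi_90(joint_posterior(likes)))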
In these figures, Sets 1 and 3 are offset by -0.001 and +0.001 radian, respectively, relative to the true θ_max for clarity of presentation. In Fig. <ref>, the black horizontal dashed lines represent the injected value of H_0 = 67.77 km s^-1 Mpc^-1. As expected, the posterior of H_0 starts to converge to the true value of H_0 = 67.77 km s^-1 Mpc^-1 with an increasing number of GW events. However, for a smaller number of GW events, ∼ 150, the H_0 posteriors can show a wide variation. We speculate that the non-Gaussian nature of the galaxy overdensity fluctuation σ_δ (see Fig. <ref>) and the neglect of the covariance between different redshift bins are likely the reasons for this wide variation. Though this is not an issue for the unbiased estimation of H_0 with a larger number of GW events, it is an aspect that needs to be carefully explored in the future. We have studied H_0 posteriors for larger θ_max > 0.03 radian and found that the posteriors degrade even when the number of GW events is significant, i.e., ≳ 150. This is likely related to the fact that large values of θ_max decrease the amplitude of the correlation signal, reducing the total signal-to-noise ratio. On the other hand, we have observed that it becomes difficult to capture the correlation signal for smaller values of θ_max ≲ 0.005 radian, due to the significant fluctuation in measuring the clustering for a smaller number of galaxies, especially at low redshift. So, it is crucial to identify the optimal value of θ_max for an unbiased estimation of the Hubble constant with this method. From Fig. <ref>, we observe that θ_max = 0.02 radian may be the optimal choice in our study. However, the optimal value of θ_max may vary for different GW events. In this work, we keep the value of θ_max fixed for all GW events for simplicity. We leave the exercise of finding the optimal θ_max, given the localization area on the sky, to investigations in the future. We report the 90% highest density intervals of the inferred values of H_0 corresponding to the three different sets in Table <ref> for the 3 different choices of θ_max = {0.01, 0.02, 0.03} radian. The corresponding H_0 posteriors are also shown in Fig. <ref>. We observe that we can constrain H_0 with a precision of ≲ 15% (90% highest density interval) by combining the individual H_0 posteriors from 250 GW events. It is important to note that we consider GW events within 1 Gpc, as the mock galaxy catalog used in this work is volume limited, extending up to redshift z ∼ 0.46, as detailed in Sec. <ref>. The sky localizations of these simulated GW events are better than those of events at larger distances observed by the current generation of detectors. So, the broad sky localizations of GW events at significantly larger distances may affect the H_0 estimation and result in a broadening of the H_0 posterior.

§ DISCUSSION

Observations of GWs from coalescing binaries provide a direct measurement of the luminosity distance, unaffected by systematics in the various external astrophysical calibrations on which the cosmic distance ladder relies. However, GW estimates can be biased due to systematics inherent to the measurements themselves, such as GW detector calibration <cit.>. Assuming that the instrument can be calibrated sufficiently accurately <cit.>, GW sources can be used to estimate the Hubble constant and other cosmological parameters. However, redshift information is not available from GW observations.
Additionally, most of the detected GW events are BBH mergers and are not accompanied by EM counterparts. In this paper, we successfully demonstrated the inference of the Hubble constant from individual GW events by cross-correlating them with a galaxy catalog. This method is useful even if the mergers do not have any EM observations, and can supplement the inference of the Hubble constant from BNS mergers with detected EM counterparts. Our method consists of measuring the galaxy overdensity as a function of redshift around the maximum a posteriori probability location in the sky localization of individual gravitational wave events. We compare the data vector of the observed galaxy overdensity, d_g^obs, with that of the modeled galaxy overdensity, d_g^mod, which depends on the value of H_0, to define a likelihood and infer the posterior distribution of H_0 in a Bayesian analysis. As examples, we considered 3 different mock GW catalogs to infer H_0 using the method described in Sec. <ref> and were able to constrain the Hubble constant with a precision of ≲ 15% (90% highest density interval). The efficacy of the method is tested for the second-generation detector network comprising the Advanced LIGO <cit.> and Advanced Virgo <cit.> detectors. With the improved sensitivity of third-generation detectors like Cosmic Explorer <cit.> and Einstein Telescope <cit.>, GW sources will be detected at a much higher rate and with more precise estimates of the luminosity distance and sky localization. We expect this method to constrain H_0 with higher precision with such detectors. In our current analysis, we have only considered H_0 as a free parameter to be constrained from the GW and galaxy overdensity observations. However, it is straightforward to extend our analysis to constrain other cosmological parameters, such as the matter density at the current epoch and the dark energy equation of state. We do not consider them here to avoid the extra computational costs involved in estimating these cosmological parameters. With the second-generation GW detectors, cosmological parameters other than H_0 are expected to be only weakly constrained <cit.>. Our results show, as a proof-of-concept, that the cross-correlation method can be utilized to infer H_0 from individual GW events. We have made several simplifying assumptions in this work. The mock galaxy catalog used here is assumed to be volume-limited instead of flux-limited. We have also considered the idealized scenario where the GW events are unbiased tracers of the underlying matter distribution. In reality, the GW events and the galaxies trace the large-scale structure with a redshift-dependent bias <cit.>. This redshift-dependent bias could arise from the evolving way in which the gravitational wave events populate dark matter halos, or due to selection effects. These effects need to be parameterized and marginalized over to constrain the cosmological parameters from real data. We also have not accounted for any impact that weak gravitational lensing by the intervening large-scale structure will have on the luminosity distance estimates <cit.>. Weak lensing induces a non-trivial spatial correlation on the sky at redshifts lower than that of the GW events. It should be incorporated into our methodology while working with real data, especially for high redshift events. This is currently beyond the scope of this work and will be investigated further in future studies.

§ ACKNOWLEDGEMENTS

The authors acknowledge Divya Rana for helpful discussions throughout this work.
The authors would like to thank Simone Mastrogiovanni, Aditya Vijaykumar and Anuj Mishra for a careful reading of the manuscript and useful suggestions. TG acknowledges using the IUCAA LDG cluster Sarathi, accessed through the LVK collaboration, for the computation involved in this work. The work of S. Bera was supported by the Universitat de les Illes Balears (UIB); the Spanish Agencia Estatal de Investigación grants PID2022-138626NB-I00, PID2019-106416GB-I00, RED2022-134204-E, RED2022-134411-T, funded by MCIN/AEI/10.13039/501100011033; the MCIN with funding from the European Union NextGenerationEU/PRTR (PRTR-C17.I1); Comunitat Autonòma de les Illes Balears through the Direcció General de Recerca, Innovació I Transformació Digital with funds from the Tourist Stay Tax Law (PDR2020/11 - ITS2017-006), the Conselleria d'Economia, Hisenda i Innovació grant numbers SINCO2022/18146 and SINCO2022/6719, co-financed by the European Union and FEDER Operational Program 2021-2027 of the Balearic Islands. S. Bose acknowledges support from the NSF under Grant PHY-2309352. The MultiDark Database used in this paper and the web application providing online access to it were constructed as part of the activities of the German Astrophysical Virtual Observatory as a result of a collaboration between the Leibniz-Institute for Astrophysics Potsdam (AIP) and the Spanish MultiDark Consolider Project CSD2009-00064. The Big MultiDark Planck simulation has been performed on the Supermuc supercomputer at LRZ using time granted by PRACE.
http://arxiv.org/abs/2312.16305v1
{ "authors": [ "Tathagata Ghosh", "Surhud More", "Sayantani Bera", "Sukanta Bose" ], "categories": [ "astro-ph.CO", "gr-qc" ], "primary_category": "astro-ph.CO", "published": "20231226192447", "title": "Bayesian framework to infer the Hubble constant from cross-correlation of individual gravitational wave events with galaxies" }
Peter Grünberg Institute, Theoretical Nanoelectronics, Forschungszentrum Jülich, D-52425 Jülich, Germany

The Caldeira-Leggett model is the most commonly accepted approach to describe dissipation in superconducting circuits. But the existence of the resulting Schmid-Bulgadaev (SB) transition is still heavily debated. We study a microscopically motivated model for a charge qubit electrically and capacitively coupled to a normal metal, with seemingly minute changes to Caldeira-Leggett – but a significantly different phase diagram. While this model still hosts an SB transition in the transmon regime, the critical parameter is largely independent of the metal's resistivity. In the opposite Cooper pair box regime, the system now resembles an anisotropic Kondo model with the two degenerate charge states as the pseudo-spin. Here, however, a transition to a ferromagnetic phase is not possible for regular electrostatic interactions. We further argue that some of the guiding principles of our work are transferable to resistors made of bosonic transmission lines.

Dissipation and phase transitions in superconducting circuits coupled to normal metal resistor

O. Kashuba and R.-P. Riwar

January 14, 2024
==============================================================================================

Introduction – A realistic description of any quantum system requires accurate models to take into account dissipation. For superconducting quantum circuits coupled to a generic resistive element, the Caldeira-Leggett model is until today the most studied and used. Applied to a circuit with a single Josephson junction, it reads <cit.>

H = E_C N^2 - E_J cos(ϕ) + ∑_j [2e^2/C_j N_j^2 + (ϕ_j - λ_j ϕ)^2/(8e^2 L_j)],

where the charging energy of the superconducting island is E_C = 2e^2/C and [N, ϕ] = i are the charge and phase difference across the Josephson junction, and [N_j, ϕ_j'] = iδ_jj' describe a fictitious ensemble of LC resonators (believed to represent a generic resistive element). The inductive coupling to the junction via the parameter λ_j leads to a resistance R felt by the circuit <cit.>. Schmid <cit.> and Bulgadaev <cit.> found that this model hosts a superconductor-insulator transition (henceforth referred to as the SB transition) at R_Q/R = 1 (R_Q = π/(2e^2) is the resistance quantum with respect to the Cooper pair charge 2e, with ħ = 1). While this result was subsequently corroborated theoretically in many works <cit.>, experimental confirmation <cit.> as well as certain aspects of the theoretical treatment <cit.> are to this day highly controversial. In this letter, we take a different direction: instead of exploring or revisiting the phenomenology predicted by Eq. (<ref>), we critically reexamine its universal applicability. For concreteness, we choose to study a circuit coupled to a piece of normal metal, see Fig. <ref>a (the resistor used in the experiments of Refs. <cit.>). Part of our discussion, however, also applies to superconducting transmission lines made of Josephson junction arrays <cit.>. We start from a microscopically motivated fermionic model of the normal metal reservoir, and identify crucial differences with respect to Eq. (<ref>). In our model, the gate-induced offset charge N → N + N_g is not an irrelevant gauge term – to the contrary, it enters as a tuning parameter relevant for the phase diagram. Moreover, different parameter limits of the charge qubit, such as the transmon (E_J > E_C) <cit.> and the Cooper pair box regime (E_J < E_C) <cit.>, exhibit completely different phase diagrams.
While we still find an SB transition in the transmon regime, it is a different parameter that controls it: it is no longer the total resistance felt by the circuit, but only the capacitive contribution, that marks the transition between the insulating and superconducting phase. Importantly, this capacitive part is largely independent of the normal metal resistivity, which was the parameter that was modified in Ref. <cit.>. In the Cooper pair box regime, we find that the system is described by an anisotropic Kondo model <cit.>, where the two quasidegenerate charge states play the role of a pseudo-spin. This mapping enables us to predict the existence of a pseudo-ferromagnetic phase with suppressed Cooper-pair exchange. But this phase transition can only occur for negative capacitive coupling, and is therefore forbidden for conventional electrostatics. Note that a derivation from a microscopic model of superconductor-normal metal heterostructures to the Caldeira-Leggett model has already been worked out <cit.>, by tracing out the fermionic degrees of freedom <cit.>. By performing an analysis of a bosonized version <cit.> of our model, we are able to identify the approximation step where our derivation starts to diverge from Ref. <cit.>. While bosonization itself is not per se problematic, it invites a subsequent approximation step where local charge quantization (in both the superconductor and the normal metal) is broken. This seemingly small and innocuous procedure turns out to be at the origin of the vast differences in the predicted phase diagram. Our updated treatment of the system-bath interaction thus showcases the importance of charge quantization <cit.> for dissipative quantum phase transitions. Finally, we expect the fermionic treatment to be advantageous for a full numeric calculation of the phase diagram by means of numeric RG methods <cit.>, envisaged for a follow-up research effort.

The proposed model – The starting point of our endeavour is a fermionic description of the superconductor-normal metal (SN) interface, which accounts for two types of interactions: a direct electric contact via the proximity effect (leading to an exchange of Cooper pairs at the SN interface, see also <cit.> and references therein), and a capacitive coupling between the Cooper pair charge on the island and the charge density in the normal metal. The Hamiltonian is

H = E_C (N + ∑_kk'σ λ_kk'^z c_kσ^† c_k'σ)^2 - E_J cos(ϕ) + ∑_kk' (e^-iϕ λ_kk'^⊥ c^†_k↑ c^†_k'↓ + h.c.) + ∑_kσ ϵ_k c_kσ^† c_kσ + … ,

where λ^z and λ^⊥ parametrize the capacitive coupling and the electric contact, respectively. The notation is obviously borrowed from Kondo model physics, interpreting the charge as a large, generalized spin. The normal metal electrons with momentum k and spin σ are annihilated (created) with the fermionic operators c_kσ^(†). The dots (…) at the end indicate that the normal metal may by all means include a whole host of other physics, such as disorder, electron-electron [Note that the capacitive coupling term ∼λ^z can generally be written inside the charging energy ∼E_C upon quadratic completion, such that the electron-electron interactions in the normal metal may have an artificial addition due to a quadratic term ∼(λ^z)^2.], and electron-hole interactions. Let us emphasize that static disorder can (at least formally) be included in the single-particle picture, by simply reshuffling the single-particle energies ϵ_k and corresponding eigenvectors, leading to an updated k-dependence of the coefficients λ^z, λ^⊥.
While such an indirect treatment of disorder is of course not always practical for explicit calculations (diffusion is most conveniently captured by diagrammatic techniques involving an artificial averaging over impurities, which introduces a relaxation process, see <cit.> for an instructive review), the possibility of such an inclusion is nonetheless important to appreciate the generality of some of the statements that follow below. In particular, while the coupling ∼λ^⊥ seems to nominally represent only the contact resistance, we can take it to effectively include the intrinsic normal metal resistivity. Crucially, in order to arrive at Eq. (<ref>), we made (only) two assumptions: we took the speed of light to be infinite (nonrelativistic limit), such that the photon field could be integrated out (leaving us with direct electrostatic interactions captured by the different capacitive couplings), and we considered energies below the superconducting gap by eliminating quasiparticle excitations within the superconducting part of the system <cit.>. Naturally, a significant portion of the discussion has to be about the extent to which Eq. (<ref>) is a better approximation than Eq. (<ref>) to describe dissipation induced by a normal metal. But before doing so, we point out the main differences between the two models. As foreshadowed, Eq. (<ref>) allows for a compact representation of the phase ϕ even in the presence of the bath (that is, we can impose 2π-periodicity on the wave functions of the full Hamiltonian). Consequently, and contrary to Eq. (<ref>), the system is no longer insensitive to offset charges N → N + N_g, and the phase diagram depends on N_g. Since the N_g-sensitivity of the qubit depends very strongly on the ratio E_J/E_C <cit.>, we will discuss the different regimes separately. Phase compactness plays a second important role. We cast Eqs. (<ref>) and (<ref>) into a more comparable form by transforming the couplings to the phase ϕ into an effective capacitive coupling. Applying the respective unitary transformations U^(CL) = ∏_j e^iλ_j N_j ϕ and U^(SN) = ∏_qσ e^i c_qσ^† c_qσ ϕ/2, we get

H^(X) = E_C [N + N_env^(X)]^2 - E_J cos(ϕ) + H_env^(X),

with X = CL, SN for either the Caldeira-Leggett model or the fermionic model. In this representation, the interaction term universally appears inside the charging energy ∼E_C as a bath-induced shift of the charge, the respective charges being

N_env^(CL) = ∑_j λ_j N_j,
N_env^(SN) = ∑_kk'σ λ_kk'^z c_kσ^† c_k'σ + (1/2) ∑_kσ c_kσ^† c_kσ ≡ N_loc + N_tot,

where N_loc and N_tot denote the first and second term, respectively, and the (uncoupled) environment Hamiltonians read

H_env^(CL) = ∑_j [2e^2/C_j N_j^2 + ϕ_j^2/(8e^2 L_j)],
H_env^(SN) = ∑_kσ ϵ_k c_kσ^† c_kσ + … + ∑_kk' λ_kk'^⊥ (c^†_k↑ c^†_k'↓ + h.c.).

Inspired by the discussion in Ref. <cit.>, we note that in this basis choice, the Caldeira-Leggett Hamiltonian is likewise 2π-periodic in ϕ – which seems to render our previous concerns moot. In particular, for a closed system, a 2π-periodic Hamiltonian and a 2π-periodic ϕ-space are straightforwardly related, in that the former allows for the qubit basis (the system without the reservoir) to be expressed in terms of a Bloch wave vector k, whereas for the latter, the Bloch wave vector is fixed to the value k = N_g, with N_g being the equivalent of the vector potential along the compact ϕ <cit.>. But if we cast ϕ on the circle for the transformed Caldeira-Leggett model, there is no way to get back to the original representation in Eq. (<ref>), because the above introduced unitary transformation U^(CL) is not periodic in ϕ and thus ill-defined for compact ϕ.
That is, even though the Caldeira-Leggett Hamiltonian in the basis choice given in Eq. (<ref>) is 2π-periodic in ϕ, we need to include all (periodic as well as nonperiodic) Bloch states k in the description (and for the computation of observables, see Ref. <cit.> for a similar argument), in order to guarantee the equivalence of Eqs. (<ref>) and (<ref>). This means that the Caldeira-Leggett model does not distinguish between a coupling to the phase [Eq. (<ref>)], which physically corresponds to a direct electric contact to the bath, and a capacitive coupling to the charge [Eq. (<ref>)], and there is only one meaningful total resistance R. In contrast, the fermionic model allows for a clear distinction: while we can transform the electric contact into an effective capacitive coupling (it is up to us how we choose to keep track of the number of Cooper pairs the system exchanges across the SN interface), we cannot simply transform away the entire capacitive coupling into an effective electric contact. At this step, it is not only the quantized Cooper pair charge (expressed in terms of a compact phase) that is important. The two environment charges in Eq. (<ref>) are of very different nature: N_loc is a local portion of the charge within the normal metal, which partakes in the capacitive coupling to the Cooper pair charge. This coupling occurs in general with a finite spatial resolution (given by, e.g., fringe effects of the electric fields). Consequently <cit.>, the distribution of eigenvalues of N_loc can by all means be assumed to be (quasi)continuous. N_tot, on the other hand, corresponds to the total charge on the normal metal (in units of the Cooper pair charge, hence the prefactor 1/2), which changes due to dissipative Cooper pair exchange at the SN interface. In the absence of quasiparticle processes (more on that later), N_tot must be integer quantized. Therefore, N_tot, and only N_tot, can be subject to unitary transformations defined on a compact ϕ. That is, while both types of couplings (capacitive and electric contact) contribute to the total resistance, they are nonetheless physically distinguishable – particularly with regard to their relevance for poor man's scaling of the system parameters. The resistance is computed via the quantum Langevin equation <cit.>

C/(4e^2) ϕ̈ = -E_J sin(ϕ) - J(t) + ∫_{t_0}^t dt' Y(t-t') ϕ̇(t'),

where J(t) = i[H_env(t), N_env(t)]_- and dissipation is described by the kernel Y(t-t') = [N_env(t'), J(t)]_-. Taking the Markovian limit, the last term simplifies to R^-1 ϕ̇(t). The resistances for the Caldeira-Leggett (CL) and fermionic (SN) models take the form

R_Q/R^(CL) = π ∑_j λ_j^2 ω_j δ(ω_j),
R_Q/R^(SN) = π |λ^⊥|^2 ν^2 + π ν^2 lim_{ω→0} (λ^z_ω)^2,

respectively, where ν denotes the density of states in the normal metal, and the second term in R_Q/R^(SN) defines the capacitive contribution R_Q/R^z. Note that while U^(SN) eliminates the ϕ-dependence from the environment, it leads to pairing fluctuations in the metal, which in turn give rise to the first term in Eq. (<ref>).

Transmon regime – To see why the two resistance contributions in Eq. (<ref>) play diametrically different roles, we consider E_J > E_C [Note that, strictly speaking, the word transmon is commonly associated with very high ratios E_J/E_C (of order ∼50 or higher). We here use the term a little more flexibly, and mean simply that E_J shall be sufficiently large compared to E_C, such that the energy gap between the ground and first excited states, ∼√(E_J E_C), is larger than E_S.].
Here the low-energy behaviour of the charge qubit reduces to a simple quantum phase slip process, leading to the system-bath Hamiltonian

H^(X) ≈ -E_S cos(2π[N + N_env^(X)]) + H_env^(X),

where E_S = 16 [E_C E_J^3/(8π^2)]^{1/4} e^{-√(32 E_J/E_C)} ≪ √(E_C E_J) <cit.>. In accordance with the above discussion, we have two possible types of quantized charges: either the charge on the superconducting island (compact ϕ), or the possibility of integer-valued contributions in N_env^(X). The former has a rather benign effect in the transmon regime (after all, the N_g-dependence is weak here). If we allow for ϕ to be extended, we can replace the island charge operator N with the Bloch wave vector k, representing the likewise extended low-energy charge qubit basis. The resulting energy -E_S cos(2πk) represents the physics of the lowest Bloch band due to quantum tunneling between different minima of the cos(ϕ) potential. For compact ϕ, we replace k → N_g, providing here the offset charge sensitivity of the transmon ground state, see also the inset in Fig. <ref>b. We can understand the coupling to the environment in Eq. (<ref>) as a renormalization of the amplitude E_S (see also Ref. <cit.>). This is true independently of whether ϕ is extended or not. We directly attach the coupling to the parameter E_S, and then integrate out a given high-energy slice of the bath degrees of freedom (from frequency Λ to Λ_0 > Λ),

E_S → e^{i2π N_env^(X)} E_S → tr[e^{i2π N_env^(X)}]_{Λ,Λ_0} E_S.

This procedure is also referred to as adiabatic renormalization <cit.>. For the Caldeira-Leggett model, we get the standard renormalized quantum phase slip amplitude

E_S → e^{-R_Q ∫_Λ^{Λ_0} dω Y(ω)/ω} E_S,

where Y(ω) = (π/2) ∑_j (λ_j^2/L_j) δ(ω - ω_j) (with ω_j = 1/√(L_j C_j)) is the admittance of the dissipative bath. For an Ohmic bath, Y(ω) = 1/R^(CL), we arrive at the well-known result

E_S → (Λ/Λ_0)^{R_Q/R} E_S,

with the dissipative phase transition marked at the point where the critical exponent R_Q/R = 1, in accordance with Refs. <cit.>. For the fermionic model, we similarly integrate out a finite frequency slice from Λ to Λ_0 of the metal's electron degrees of freedom,

E_S tr[e^{2πi(N_loc + N_tot)}]_{Λ,Λ_0} = E_S tr[e^{2πi N_loc}]_{Λ,Λ_0}.

Note the term on the right hand side, where we included that [N_loc, N_tot] = 0 and that the eigenvalues of N_tot are integer, such that e^{i2π N_tot} = 1. Hence, the electric contact does not directly contribute to the renormalization of E_S. In general, the finite proximity effect term ∼λ^⊥ will certainly have an indirect influence on the expectation value of N_loc (and its higher moments), but this is a higher order effect. In leading order, we can take the trace with respect to the unperturbed normal metal state. This is a first central result of our work: while both the capacitive coupling and the electric contact to a normal metal contribute to the total resistance, Eq. (<ref>), only the capacitive contribution R^z is relevant for the RG flow and thus for the SB phase transition. As an important caveat, note that the above statement is a consequence of standard adiabatic renormalization <cit.>. A more precise treatment will likely provide a finite contribution of the electric coupling to the renormalization of E_S. In particular, even though N_tot is integer, it will in a higher order approximation couple the transmon ground and excited states. The first excited state has a charge dispersion of opposite sign, such that a mixing of the two states will effectively reduce E_S by some amount.
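Before continuing with these caveats, the scaling derived above can be illustrated with a minimal numerical sketch (ours, not from any published code; all parameter values are arbitrary). It evaluates the bare phase-slip amplitude E_S and the flow of the dimensionless fugacity E_S/Λ, whose growth (decay) under a lowering of the cutoff signals the insulating (superconducting) phase:

import numpy as np

def E_S(EJ, EC):
    # Bare phase-slip amplitude as quoted above (E_C = 2e^2/C convention)
    return 16.0 * (EC * EJ**3 / (8.0 * np.pi**2))**0.25 * np.exp(-np.sqrt(32.0 * EJ / EC))

def fugacity_growth(rq_over_r, cutoff_ratio):
    # Ratio [E_S(L)/L] / [E_S(L0)/L0] implied by E_S -> (L/L0)^(R_Q/R) E_S,
    # with L = Λ the running cutoff and L0 = Λ_0 the bare one
    return cutoff_ratio**(rq_over_r - 1.0)

print(E_S(EJ=20.0, EC=1.0))  # illustrative units with E_C = 1: E_S << sqrt(E_J E_C)
for rq_over_r in (0.5, 1.0, 2.0):
    print(rq_over_r, [fugacity_growth(rq_over_r, s) for s in (1.0, 0.1, 0.01)])
# R_Q/R = 0.5 (R > R_Q): the phase-slip fugacity grows as Λ is lowered -> insulator;
# R_Q/R = 2.0 (R < R_Q): it shrinks -> phase slips irrelevant -> superconductor.
# For the fermionic model, the same logic applies with the exponent set by the
# capacitive contribution, ~ 2(λ^z)^2, as discussed below.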
Returning to the caveat above: the point of our statement is not to claim that the electric contact has exactly zero impact on the RG – as this is only a lowest-order poor man's scaling result. We merely aim to point out that, on this simple level of the RG, the Caldeira-Leggett model massively overestimates the contribution of the electric contact. By the same token, we point out that we further discarded quasiparticle excitations in Eq. (<ref>), occurring at energies above Δ, see Fig. <ref>a. These excitations can be included in a simplified way by allowing N_tot to have half-integer (instead of integer) fluctuations. Thus, quasiparticles likewise contribute to a renormalization of E_S. However, these fluctuations are highly suppressed compared to the fluctuations of N_loc, by the fact that the quasiparticles need to traverse the tunnel barriers. Renormalization by means of the capacitive coupling therefore remains dominant.

A question of charge counting – We further observe that in the transmon regime, the task of identifying dissipative phase transitions has become equivalent to a full-counting statistics problem, as the formula in Eq. (<ref>) is structurally the same as the moment generating function m(χ) = tr[e^{iχ N_loc}] with the counting field χ (with the difference that we here only trace over a finite energy range). Consequently, the Ohmic nature of the bath directly relates to a fundamental full-counting statistics result. For illustration purposes, let us take the normal metal to be 1D, and the SN interface to be a parallel plate capacitor of length l. Then the charge right underneath the superconductor is the relevant one which needs to enter Eq. (<ref>). The computation thus reduces to the well-studied problem of electron counting in a finite interval l on a line <cit.>. These studies <cit.> showed that for a fuzzy (nonquantized) charge, the dependence on the auxiliary field χ can be sufficiently described by the first two cumulants. We stress that this is not an expansion in χ, but is due to the infrared cutoff at l^-1 (the inverse interval length), which is present only in these two cumulants. The first cumulant is proportional to the large ratio of the ultraviolet (Fermi momentum) and infrared cutoffs, but contributes only in the form of an oscillatory dependence on χ. The second cumulant has a weaker, logarithmic dependence on this ratio, ⟨N_loc^2⟩ - ⟨N_loc⟩^2 = π^-2 log⟨N_loc⟩ + const, though it results in an exponential suppression of the generating function, ⟨e^{iχ N_loc}⟩ ≈ e^{iχ⟨N_loc⟩ - χ^2(⟨N_loc^2⟩ - ⟨N_loc⟩^2)/2}. Thus, using the modified expression for N_loc from Eq. (<ref>) (which carries a factor λ^z) and setting χ to 2π, the variance term evaluates to e^{-(2πλ^z)^2 σ^2/2} with σ^2 = π^-2 ln(Λ_0/Λ), such that the renormalization of the absolute value of E_S in Eq. (<ref>) will be

(Λ/Λ_0)^{2(λ^z)^2} E_S,

and the constant power-law exponent here signals that the bath is Ohmic. Again, note that the terms with λ^⊥ from Eq. (<ref>) cannot contribute to the renormalization, as they merely form a proximity gap at low energies and thus may contribute to the cutoff Λ, but not to the exponent. To complete this discussion, we touch upon the matter of interactions and disorder. As Aristov demonstrated using a bosonization approach <cit.>, the electron-electron interaction in the 1D case indeed introduces a multiplicative factor K, the Luttinger parameter, into the renormalization exponent, 2K(λ^z)^2, though in higher dimensions, the interaction rather leads to the formation of a Fermi liquid and, thus, does not affect the phase transition.
Disorder is even less relevant in this case, as it merely reconstructs the energy levels while leaving the density of states, and thus the renormalization exponent, untouched. We thus conclude that the normal metal resistivity hardly plays a role for the SB transition; instead, the phase transition is dominantly controlled by the strength of the SN capacitance, which defines the magnitude of λ^z, see Fig. <ref>d.

A hidden Kondo model – Let us now consider the Cooper pair box regime of Eq. (<ref>), i.e., E_C > E_J and N_g ≈ 1/2, see Fig. <ref>c. Tuning to N_g ≈ 1/2 is only possible for a compact ϕ, and thus for our fermionic model. The Caldeira-Leggett model does not allow tuning into this regime (as discussed previously). The only available low-energy states are the two (quasi)degenerate charge states N = -1, 0, forming a pseudo-spin system, N → (σ_z - 1)/2 and e^{±iϕ} → σ_±. Consequently, in this limit, we arrive at a typical anisotropic Kondo model, where λ^z and λ^⊥ play the role of generalized parallel and perpendicular Kondo couplings <cit.>. A finite residual charging energy ∼δN_g E_C (due to small deviations from the charge degeneracy point, N_g = 1/2 + δN_g) and a finite Josephson energy term correspond to the Kondo model including a finite magnetic field. Via the perturbative poor man's scaling technique <cit.>, the standard RG flow equations follow for λ^z and λ^⊥ (schematically, dλ^z/dℓ ∝ (λ^⊥)^2 and dλ^⊥/dℓ ∝ λ^z λ^⊥, with ℓ the flow parameter increasing towards lower energies); see the phase diagram in the right panel of Fig. <ref>b. Note, however, that the ferromagnetic phase (where Cooper-pair exchange at the SN interface is suppressed) can only occur for λ^z < 0, equivalent to a negative capacitance between the superconductor and the normal metal piece. Therefore, no phase transition can occur in the Cooper-pair box regime for regular electrostatics. As a speculative outlook, we point out that unconventional capacitances (such as negative or nonlinear capacitances) are real, physical phenomena, which can likely be engineered, either by coupling to nearby polarizers <cit.> or superconducting circuit elements <cit.>, or through ferroelectric materials <cit.>.

Comparison to bosonic models – As already indicated in the introduction, contrary to our finding, Ref. <cit.> concluded the original Caldeira-Leggett model, Eq. (<ref>), to be suitable to describe dissipation through coupling with a normal metal. Since Ref. <cit.> applied a transformation to a bosonized representation of the normal metal degrees of freedom (via Hubbard-Stratonovich <cit.>), we will likewise represent the SN coupling in a bosonized way. In fact, to set the stage, let us first consider a related bosonic model of a charge qubit electrically coupled to a Josephson junction array (the device measured in Ref. <cit.>). Assuming dominant capacitances to ground, the Hamiltonian here reads <cit.>

H = E_C N^2 - E_J cos(ϕ) - e_J cos(ϕ_1 - ϕ) + ∑_j [e_C N_j^2 - e_J cos(ϕ_{j+1} - ϕ_j)],

where j indexes the islands of the array. Notice that this model respects local charge quantization on each island (ϕ as well as all ϕ_j are compact). Here, the electric contact manifests itself in the coupling ∼cos(ϕ_1 - ϕ), exchanging Cooper pairs between the charge qubit and the first island, j = 1. In analogy with the previous discussion, we can again transform this coupling into an effective capacitive interaction, Eq. (<ref>), here with the unitary U^(JJA) = e^{iϕ ∑_j N_j}, yielding an effective environment charge N_env^(JJA) = ∑_j N_j. This charge is quantized in the exact same manner as N_tot in Eq.
(<ref>), such that this coupling is likewise irrelevant for the SB transition in the regime E_J>E_C. In order for the model to host an insulating phase, we likewise need a true electrostatic coupling, e.g., of the form N_env^(JJA)=∑_j α_j N_j. Consider now the fact that neighbouring islands likely have a very small phase difference. One may thus be tempted to simply approximate all the cosines ∼ e_J quadratically. This leads us right to the Caldeira-Leggett model, Eq. (<ref>) (with an appropriate diagonalization of the array modes, that is). Crucially, we now get a finite renormalization (at least on the level of the rudimentary adiabatic renormalization scheme <cit.>) due to the electric contact, which was absent before the approximation. This represents a marked difference due to a seemingly innocuous approximation step.

We now return to the SN heterostructure. For a 1D system of total length L, we apply the standard bosonization technique <cit.>. Neglecting for simplicity the Josephson junction, the capacitive couplings, and the spin of the electrons in the normal metal (the spin-full case works analogously <cit.>), we find H_SN=∫_0^L dx ℋ_SN with (see also Ref. <cit.>) ℋ_SN=v_F/(2π)[(∂_xθ)^2+(∂_xφ)^2]+2Δ(x)/(πL) cos[ϕ-2φ(x)]. The pairing Δ(x) is nonzero in the superconducting part of the system, 0<x<l, with l<L. The bosonized fields (describing the plasmon physics inside the normal metal) satisfy [n(x),φ(y)]=-iδ(x-y), with the charge density n(x)=∂_x θ (counting the charge in units of e). Note that on this level, even the bosonized SN Hamiltonian remains 2π-periodic in ϕ. Local charge quantization of the plasmon field is, however, a more involved subject <cit.>, and may likewise lead to issues here. In Eq. (<ref>), we could in principle again transform the phase ϕ away by means of the unitary e^{iϕ∫_0^L dx n(x)/2}, leading yet again to a total charge shift ∼∫_0^L dx n(x)=θ(L)-θ(0), which would appear in the capacitive coupling. The quantization of this total charge could be guaranteed by applying standard boundary constraints on the boson field θ at x=0,L <cit.>. If we take the limit of large Δ, on the other hand, the boson field φ gets pinned to the value ϕ/2 for all x where the pairing is nonzero, leaving us with only the part of the boson field in the normal metal as a remaining dynamical degree of freedom. At the SN interface, x=l, the boson field must now satisfy the boundary constraint φ(l)=ϕ/2. Taken without further care, the large Δ limit thus returns once more the pure Caldeira-Leggett model, and breaks the compactness of ϕ as well as charge quantization within the remaining low-energy normal metal plasmon modes – and yet again does not accurately describe the impact of the electric coupling. Charge quantization could be effectively salvaged by updating a boundary constraint of the boson field θ at x=l,L, but this must be added by hand, and does not follow automatically from the theory.

Conclusions and outlook – We demonstrate that physically distinct dissipation mechanisms do not have the same impact on the predicted phase diagram of dissipative quantum circuits. By deploying a fermionic model describing capacitive coupling as well as the proximity effect, we find that phase transitions predicted by the standard Caldeira-Leggett model either fail to occur, or are controlled by different physical parameters.
In particular, capacitive coupling turns out to be the dominant factor for the transition to an insulating phase, rendering the normal metal resistivity (the parameter controlled, e.g., by Ref. <cit.>) irrelevant. We show that direct electric coupling between the superconducting system and a resistor only maps to a pure Caldeira-Leggett type of interaction when making unphysical approximations violating local charge quantization. Our fermionic model furthermore lends itself to future research endeavours by means of numerical RG methods <cit.>, where the currently presented predictions can be significantly refined.

Acknowledgements – We warmly thank T. Costi, D. P. DiVincenzo, G. Catelani, E. König, A. Petrescu, Ç. Girit, C. Altimiras and P. Joyez for fruitful discussions. This work has been funded by the German Federal Ministry of Education and Research within the funding program Photonic Research Germany under the contract number 13N14891.

Supplementary materials on “Dissipation and phase transitions in superconducting circuits coupled to normal metal resistor” O. Kashuba and R.-P. Riwar

§ DERIVATION OF PROXIMITY EFFECT AT SN INTERFACE

In the main text, we describe the impact of the SN interface at energies below the superconducting gap Δ. Here, we show how the proximity pairing term ∼λ^⊥ in Eq. (<ref>) (due to Andreev reflection) emerges from a Schrieffer-Wolff transformation by eliminating Bogoliubov quasiparticles. We describe the SN interface, including excitations at all energies, by means of the standard Hamiltonian H_SN=H_S+H_N+H_T, with H_S=∑_qσ ϵ_q c_qσ^† c_qσ + ∑_q (Δ e^{-iϕ} c_q↑^† c_{-q↓}^† + h.c.), H_N=∑_kσ ϵ_k c_kσ^† c_kσ, and H_T=∑_kqσ (t_kq c_qσ^† c_kσ + t_kq^* c_kσ^† c_qσ). We diagonalize the superconducting Hamiltonian, H_S=∑_qσ √(ϵ_q^2+Δ^2) γ_qσ^† γ_qσ, where γ_q↑=u_q e^{iϕ/2} c_q↑ + v_q e^{-iϕ/2} c_q↓^† and γ_q↓=v_q e^{-iϕ/2} c_q↑^† - u_q e^{iϕ/2} c_q↓, with u_q=(1/√2)√(1+ϵ_q/√(ϵ_q^2+Δ^2)) and v_q=(1/√2)√(1-ϵ_q/√(ϵ_q^2+Δ^2)). These identities are readily inverted as follows: c_q↑=u_q e^{-iϕ/2} γ_q↑ + v_q e^{-iϕ/2} γ_q↓^† and c_q↓=v_q e^{-iϕ/2} γ_q↑^† - u_q e^{-iϕ/2} γ_q↓. Note that in the diagonalized H_S, we have discarded a constant offset energy, such that the BCS ground state is defined as having energy 0.

We now eliminate Bogoliubov quasiparticle excitations by means of a Schrieffer-Wolff transformation, projecting onto the BCS ground state with the projector P. We can thus approximate the total Hamiltonian as H≈H_N+H_prox, with H_prox = -P H_T (1-P)(1/H_S) H_T P = ∑_kk'σ δϵ_kk' c_kσ^† c_k'σ + ∑_kk' (e^{-iϕ} λ_kk'^⊥ c_k↑^† c_k'↓^† + h.c.), where δϵ_kk' = -∑_q t_kq^* t_k'q (u_q^2-v_q^2)/√(ϵ_q^2+Δ^2) and λ_kk'^⊥ = -∑_q t_kq^* t_k'q^* 2u_q v_q/√(ϵ_q^2+Δ^2). The δϵ term is a local impurity potential, which can be included in the normal metal eigenstates by means of diagonalization, thus updating the single-electron eigenenergies ϵ_k and eigenstates. It therefore does not need to be included explicitly in Eq. (<ref>). The λ^⊥ pairing term, on the other hand, accounts for the exchange of Cooper pairs at the interface.

§ BOSONIZATION OF THE HAMILTONIAN DESCRIBING THE SN INTERFACE

In the main text, we discuss a spin-less version of the SN heterostructure within a bosonized language, see Eq. (<ref>).
Here, we show that the more accurate spin-full case does not qualitatively change the statements made in the main text. We first write the SN Hamiltonian in terms of continuous fermionic field operators, H_SN=∫_0^L dx [∑_σ Ψ_σ^†(x)(-∂_x^2/2m)Ψ_σ(x) + Δ(x)[e^{iϕ} Ψ_↓(x)Ψ_↑(x)+h.c.]], where L is the system size of the total heterostructure, and Ψ_σ(x) is the standard fermionic field operator annihilating an electron with spin σ at position x, satisfying the anti-commutation relation {Ψ_σ(x),Ψ_σ'^†(y)}=δ_σσ' δ(x-y). The above Hamiltonian is similar to Eq. (<ref>), except that here we are specifically in 1D, and the SN interface is assumed to be ideal (no impurities at the interface). Assuming the interface to be located at x=l, the pairing is Δ(x)=Δ for 0<x<l and Δ(x)=0 for l<x<L.

We next cast the fermionic field operators into standard bosonized fields <cit.>, Ψ_σ(x)=∑_± (1/√(2πL)) e^{i[φ_σ(x)±θ_σ(x)]} e^{±ik_F x}, where [n_σ(x),φ_σ'(y)]=-iδ_σσ' δ(x-y). This yields the Hamiltonian H_SN=∫_0^L dx [v_F/(2π) ∑_ξ=n,s ([∂_x θ_ξ]^2+[∂_x φ_ξ]^2) + 2Δ(x)/(πL) cos[ϕ+θ_n(x)] cos[φ_s(x)]]. Contrary to Eq. (<ref>) in the main text, we here have both a charge and a spin boson field, θ_n=θ_↑+θ_↓ and θ_s=θ_↑-θ_↓ (and in analogy for φ, but with a factor 1/2 to preserve the commutation relation). For finite Δ, we thus have in general spin dynamics on top of charge dynamics. However, what is qualitatively the same as in Eq. (<ref>) is the fact that the phase is compact on this level. Likewise, the total charge can still be quantized by means of standard constraints <cit.>. In the large Δ limit, the situation simplifies significantly. Here, the charge field φ_n gets pinned to ϕ/2 in the region with nonzero pairing (0<x<l), whereas the spin field gets pinned to the value 0. Hence the problem separates into a spin component which does not couple to the charge qubit, and a charge component which does. Thus, in this limit, the spin-less and spin-full models are effectively equivalent.

§ COMPUTING ADMITTANCE FROM QUANTUM LANGEVIN EQUATION

In the main text we provide the formulas for the respective resistances for the Caldeira-Leggett model, Eq. (<ref>), and for the fermionic model, Eq. (<ref>). In this appendix, we show how these equations are derived from the quantum Langevin equation (similar to Ref. <cit.>). We begin with the Hamilton equations for the Hamiltonian in Eq.
(<ref>): Ṅ = E_J sin(ϕ), ϕ̇ = -2E_C(N + N_env). Eliminating the equation for the number of Cooper pairs N, we get ϕ̈ = -2E_C E_J sin(ϕ) - 2E_C Ṅ_env, with Ṅ_env = i[H_env,N_env]_-, where the phase commutes with the environment operators: [ϕ̇,N_env]_-=0. All operators are given in the Heisenberg representation, namely A≡A(t) = e^{iHt}A(0)e^{-iHt}, H(t)=H(0). Separating the coupling, H=H_0+…, and switching to the interaction representation, A_i(t) = e^{iH_0t}A(0)e^{-iH_0t} (which makes ϕ̇ commute with all environment operators), we get, expanding the time-ordered propagator, ϕ̈ = -2E_C E_J sin(ϕ) - E_C J + E_C ∫_t_0^t dt' Y(t-t') ϕ̇(t'), with J = 2i[H_env(t),N_env(t)]_- and Y(t-t') = 2[N_env(t'), [H_env(t),N_env(t)]_-]_-, where all time dependence of the operators is given in terms of the decoupled system with Hamiltonian H_env' = H_env - E_C N_env^2. Averaging out over the external degrees of freedom, we get J = 2i Z^{-1} Tr{e^{-β H_env'} [H_env,N_env]_-} and Y(t) = 2Z^{-1} Tr{e^{-β H_env'} [N_env, e^{-iH_env't}[H_env,N_env]_- e^{iH_env't}]_-}, with the partition function Z = Tr e^{-β H_env'}. Assuming fast internal dynamics of the bath, we can simplify the dissipation term to Markovian form, ∫_t_0^t dt' Y(t-t') ϕ̇(t') ≈ -ϕ̇(t) ∫_0^{t-t_0} dt' Y(t').

§.§ Caldeira-Leggett model

For the Caldeira-Leggett model, Eq. (<ref>), we find N_env = ∑_k λ_k (a_k+a_k^†) and H_env = ∑_k ω_k a_k^† a_k, where ω_k=1/√(L_k C_k) and λ_k=λ√(C_k ω_k/2)=λ√(C_k/4L_k), so that the commutators in the interaction representation are [H_env,N_env](t) = ∑_k λ_k ω_k (-a_k e^{-iω_k t}+a_k^† e^{iω_k t}) and [N_env, [H_env,N_env](t)] = ∑_k λ_k^2 ω_k (e^{-iω_k t}+e^{iω_k t}) = Y(t). For t_0→-∞ we get R^{-1} = ∫_0^∞ Y(τ)dτ = π∑_k λ_k^2 ω_k δ(ω_k) = (π/2)λ^2 ∑_k C_k ω_k^2 δ(ω_k) = (π/2)λ^2 ∑_k (1/L_k) δ(ω_k), as in Eq. (<ref>) in the main text.

§.§ Fermionic model

Since both the Caldeira-Leggett and fermionic models can be cast into the same form, Eq. (<ref>), the Hamilton equations are initially the same as in Eq. (<ref>). To proceed, we use the expansion approach [N_env, H_env] = ∑_kk' (λ^⊥_kk' c_k'↓ c_k↑ - λ^⊥*_kk' c_k↑^† c_k'↓^† + ∑_σ (ϵ_k'-ϵ_k) λ^z_kk' c_kσ^† c_k'σ), such that ∫_0^∞ [N_env, H_env](t) e^{iωt} dt = ∑_kk' (λ^⊥_kk' c_k'↓ c_k↑ / [-i(ω+ϵ_k+ϵ_k')+δ] - λ^⊥*_kk' c_k↑^† c_k'↓^† / [i(-ω+ϵ_k+ϵ_k')+δ] + ∑_σ (ϵ_k'-ϵ_k) λ_kk'^z c_kσ^† c_k'σ / [i(-ω+ϵ_k-ϵ_k')+δ]). The commutator with the density matrix can be calculated as [N_env, ρ_env] = Z^{-1} ∑_kk' (λ^⊥_kk' c_k'↓ c_k↑ [e^{β(ϵ_k+ϵ_k')}-1]/(ϵ_k+ϵ_k') + λ^⊥*_kk' c_k↑^† c_k'↓^† [e^{-β(ϵ_k+ϵ_k')}-1]/(ϵ_k+ϵ_k') + ∑_σ (ϵ_k'-ϵ_k) λ_kk'^z c_kσ^† c_k'σ [e^{β(ϵ_k'-ϵ_k)}-1]/(ϵ_k'-ϵ_k)) e^{-β∑_k c_kσ^† c_kσ}.

Consequently, the total trace is Tr{∫_0^∞ [N_env, H_env](t) e^{iωt} dt [N_env, ρ_env]} = ∑_kk' |λ^⊥|^2 (n_k'↓+n_k↑-1) ∑_± [i(ω±(ϵ_k+ϵ_k'))+δ]/[(ω±(ϵ_k+ϵ_k'))^2+δ^2] · 1/(ϵ_k+ϵ_k') + (1/2)∑_kk'σ (ϵ_k'-ϵ_k) λ_kk'^z λ_k'k^z (n_k'σ-n_kσ) ∑_± [i(ω±(ϵ_k-ϵ_k'))+δ]/[(ω±(ϵ_k-ϵ_k'))^2+δ^2]. Splitting into the real and imaginary parts, we get (ignoring the spin dependence) Re Y(ω) = π∑_kk' (|λ^⊥|^2 (n_k'+n_k-1) ∑_± δ(ω±(ϵ_k+ϵ_k')) · 1/(ϵ_k+ϵ_k') + λ_kk'^z λ_k'k^z (n_k'-n_k) ∑_± (∓ω) δ(ω±(ϵ_k-ϵ_k'))) and Im Y(ω) = 2πω∑_kk' (|λ^⊥|^2 (n_k'+n_k-1) ∑_± [1/(ω^2-(ϵ_k+ϵ_k')^2)] · 1/(ϵ_k+ϵ_k') + λ_kk'^z λ_k'k^z (n_k'-n_k) ∑_± (ϵ_k'-ϵ_k)/(ω^2-(ϵ_k-ϵ_k')^2)). So in the limit ω→0 we get R^{-1} = Y(0) = π|λ^⊥|^2 ν^2 + πν^2 lim_{ω→0}(λ^z ω)^2, as in Eq. (<ref>) in the main text.
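The structure of the Caldeira-Leggett result above can be verified with a short numerical sketch of our own (illustrative parameters, not from the paper): choosing the mode couplings so that the bath is Ohmic, the regularized time integral of Y(t) = ∑_k 2λ_k^2 ω_k cos(ω_k t) indeed returns 1/R.

import numpy as np

R = 50.0                                       # target resistance (arbitrary units)
omega_c, n_modes, eta = 100.0, 200000, 0.05    # cutoff, mode count, regulator
w = np.linspace(omega_c/n_modes, omega_c, n_modes)
dw = w[1] - w[0]
# Ohmic discretization: pi * sum_k lam_k^2 w_k delta(w - w_k) -> 1/R per unit frequency,
# i.e. assign lam_k^2 * w_k = dw/(pi*R) to each mode
lam2w = dw / (np.pi * R)
# R^{-1} = int_0^inf e^{-eta t} Y(t) dt, using
# int_0^inf e^{-eta t} cos(w t) dt = eta/(eta^2 + w^2)
R_inv = np.sum(2.0 * lam2w * eta / (eta**2 + w**2))
print(R_inv, 1.0/R)                            # both ~ 0.02

The sum reduces to (2/πR) arctan(ω_c/η), so for η ≪ ω_c the regulator drops out and the Ohmic coupling density directly fixes the dissipative admittance, as in the first equality of the R^{-1} chain above.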
http://arxiv.org/abs/2312.15951v1
{ "authors": [ "O. Kashuba", "R. -P. Riwar" ], "categories": [ "cond-mat.mes-hall", "cond-mat.supr-con" ], "primary_category": "cond-mat.mes-hall", "published": "20231226083401", "title": "Dissipation and phase transitions in superconducting circuits coupled to normal metal resistor" }
Chris Fryer [email protected]]Christopher L. Fryer Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0001-5353-7483]Paul A. Keiter Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0002-4163-4830]Vidushi Sharma Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0003-2393-3973]Joshua Leveillee Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0002-0904-1250]D.D. Meyerhofer Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0002-4646-7517]D. H. Barnak Laboratory for Laser Energetics, University of Rochester, NY, 14623, USA0000-0003-3956-6506]Tom Byvank Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0002-7373-9303]A. T. Elshafiey Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0003-1087-2964]Christopher J. Fontes Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0001-7252-3343]Heather M. Johns Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0001-6849-3612]P. M. Kozlowski Los Alamos National Laboratory, Los Alamos, NM, 87545, USA0000-0002-7257-608X]Todd Urbatsch Los Alamos National Laboratory, Los Alamos, NM, 87545, USA Radiation flow through an inhomogeneous medium is critical in a wide range of physics and astronomy applications from transport across cloud layers on the earth to the propagation of supernova blast-waves producing UV and X-ray emission in supernovae.This paper reviews the current state of the art in the modeling of inhomogeneous radiation transport, subgrid models developed to capture this often-unresolved physics, and the experiments designed to improve our understanding of these models.We present a series of detailed simulations (both single-clump and multi-clump conditions) probing the dependence on the physical properties of the radiation front (e.g. radiation energy) and material characteristics (specific heat, opacity, clump densities).Unless the radiation pressure is high, the clumps will heat and then expand, effectively cutting off the radiation flow.The expanding winds can also produce shocks that generates high energy emission.We compare our detailed simulations with some of the current subgrid prescriptions, identifying some of the limitations of these current models.§ INTRODUCTION For many physics and astrophysics applications, energy transport is often carried by photons, neutrinos and electrons.In its generality, solving this energy transport problem requires modeling a seven dimensional problem where the spatial, angular and energy distribution of these carriers must be evolved (see reviewshttps://www.osti.gov/servlets/purl/917504).Coupling this energy transport to matter drives further hydrodynamic instabilities, pushing the limits of our current modeling capabilities.This has led to many approximate methods in modeling radiation-hydrodynamics that include simplifications in the radiation transport (e.g. integrating over angle or energy), coupling terms (e.g. 
modeling pure transport or pure hydrodynamics models), or spatial dimensions (homogenizing the radiation flow into a 1-dimensional flow). These simplifying assumptions are all valid in specific conditions. For example, for supernova shocks traveling within a star, radiation is trapped in the flow, but after the shock breaks out of the star well into the circumstellar medium, radiation flows freely from where it is emitted and the hydrodynamics can be approximated by a simple homologous outflow. In stars, although the radiation is not trapped, it is in the “diffusive” regime where the angular distribution is well-defined and integrating over angle can remove dimensions from the numerical problem.

Although many studies exist that investigate different approximations in radiation transport or radiation-hydrodynamics coupling, there are far fewer studies of the role of inhomogeneities in energy transport (also known as heterogeneous transport or transport through a clumpy medium). The effect of a clumpy medium can modify transport solutions across a broad range of approximation regimes. In transport-only solutions, inhomogeneous media can alter the angular distribution of the radiation. In optically thin applications, these more optically-thick clumps screen the photons, reducing the radiative flux. In more optically-thick regimes, they alter the radiation flow. In radiation-hydrodynamics calculations, a broad range of additional physical effects from radiation/clump interactions can alter the radiative flow. Radiation can cause ablation or, through heating, drive a wind from the clumps. In this paper, we refer to “ablation” when the disruption of the material is due to momentum deposition and “blow-off” or “wind” when the outflow is due to energy injection. This can inject higher density and, in some cases, higher opacity material into the surroundings, increasing the impact of the clumps on the radiation flow. Shocks from these winds can produce high-energy emission, altering the energy spectrum of the radiation. In some cases, radiative pressure can cause the clumps to implode. The importance of these different effects depends on the applications and their conditions.

This paper investigates the radiation flow through these clumpy media, studying the physics effects on the radiation transport and the coupling between radiation and hydrodynamics. Understanding the details of radiation flow in clumpy media will enable a better use of observations in nature and results from high-energy-density experiments to constrain and validate simulations (codes, numerical methods, physical data, models, and the approximations therein). Although such simulations are becoming less rare, modeling a clumpy medium at scale severely strains computational resources, even on massive supercomputers. In addition to the aforementioned numerical approximations, simplifications include simulating a small subset of the medium and homogenizing or under-resolving small geometric details. An under-resolved simulation tests the asymptotic behavior of the numerical methods and the accuracy of averaging methods for geometry, materials, equations of state, and opacities. Furthermore, a clumpy medium that we simulate or fabricate for an experiment is usually a single and particular realization of a stochastic medium, which is defined probabilistically. Transport-in-Stochastic-Media methods have been developed to produce ensemble-average linear transport solutions, as if an infinite number of simulations of media sampled from the stochastic definitions were run and
averaged. The applications of these methods to coupled radiation-hydrodynamics are much less mature, as is an accompanying estimation of the standard deviation of the ensemble average.

For the bulk of this paper, we focus on energy transport via photon radiation. Many of the conclusions also apply to other energy carriers, including neutrinos and electrons, scaled by the transport regime (mean free path) and emission/deposition properties. Section <ref> reviews a set of applications where radiative transport through an inhomogeneous medium is important, discussing which physical effects are most critical in each application. Section <ref> reviews the current state of both modeling and experiment to study inhomogeneous radiation flow. In this paper, we focus on detailed radiation-hydrodynamics calculations to better understand the physics behind radiation flow, and Section <ref> describes our initial conditions and the computational methods used in this paper. Section <ref> focuses on the basic physics behind radiation flow across a single clump. Section <ref> studies the more complex interactions in a multi-clump medium. We conclude with a discussion of how the physics studied in these simulations affects the different applications.

§ APPLICATIONS

Radiation transport is critical in a wide range of applications including oncological treatments, radiation shielding, nuclear reactors, astrophysics, and planetary atmospheres; for a review, see <cit.>. In some cases (particularly in astrophysics), the deposition of energy and momentum from this radiation plays a critical role in the evolution of the phenomena. For example, momentum deposition from radiation is key in understanding mass outflow from stars. Astronomers are now realizing that the smooth, line-driven winds <cit.> are too simple to explain all of the features in the stellar wind profile. Explosive shell burning, opacity-driven instabilities and pressure waves can all produce bursts of mass ejection in a star, producing inhomogeneities in the circumstellar medium including both shells and clumpy media <cit.>. Studies of line-driven winds indicate that even these relatively quiescent outflows can produce large inhomogeneities in the circumstellar medium <cit.>. In particular, clumping in the line-driven wind can alter the radiation and mass outflow from these winds <cit.>. Understanding the interaction between radiation and these clumps determines both this mass loss and the evolution of these clumps.

The clumpy structure produced in stellar winds provides the initial conditions through which supernova explosions propagate. Radiation is typically trapped in the material shock until the supernova breaks out of the star. It is believed that the edge of the star is marked by a sharp density break as the stellar profile transitions to a stellar wind profile. In simple spherically symmetric models, the radiation in the shock rapidly transitions from being trapped in the material flow to freely streaming out of the shock <cit.>. This effect depends sensitively on the structure of the edge of the star as well as asymmetries in the supernova blastwave (from the explosion and the stellar envelope) which complicate this picture <cit.>. Especially in Wolf-Rayet stars, where shock breakout actually occurs in the wind, not at the edge of the star, shock breakout can be drastically modified by the radiation flow through the clumpy wind material <cit.>. Interactions between the radiation and the clumps disrupt the clumps, and the ablation of these clumps produces shocks that
drive high-energy emission <cit.>. Beyond shock breakout, interactions of the radiation flow with the circumstellar medium can dramatically alter the observed supernova light-curves. Interactions of the radiation with the circumstellar medium drive many of the features seen in type II narrow-line supernovae. Most of the current studies focus on the interaction with a spherically symmetric shell <cit.>, but it is likely that the circumstellar medium is much more heterogeneous.

Transport of radiation through a dusty medium is another application where including the inhomogeneities of the clumps (dust particles) can alter the propagation of radiation and the albedo of a dust cloud <cit.>. This phenomenon plays a role in analyzing observational data of these dust clouds, the penetration of energy into molecular clouds, and the destruction of dust. These effects, in turn, can alter star formation in molecular clouds. The propagation of radiation around dense clumps in molecular clouds can cause the clumps to implode, driving star formation <cit.>. This inhomogeneous transport should also affect the interpretation of nova and supernova observations, but less work has been done in this field.

Climate and environmental studies have also studied inhomogeneous radiation flow. Understanding the transport through vegetation canopies is important both for agriculture and climate modeling <cit.>. Climate scientists have studied radiative transfer through inhomogeneous cloud cover <cit.>, and recipes to mimic that transfer are used in many climate codes <cit.>. With improving satellite data, tests of these models become increasingly stringent, driving advances in this transport.

Studies of transport through an inhomogeneous or stochastic medium are not limited to photons. A growing number of charged-particle transport studies, particularly of cosmic rays (high-energy electrons, protons and ions), have focused on the radiation flows across single clumps <cit.> and turbulent media <cit.>. These studies focused both on the effect of the clumpy media on the flow of radiation and the effect of the radiation on the clumps, including the energy and momentum deposition of the radiation into the clumpy media.

§ CURRENT UNDERSTANDING

§.§ Inhomogeneous Transport Solutions

The effects of radiation transport through inhomogeneous media depend on the properties of the inhomogeneities. Studies of inhomogeneities generally consider a 2-component medium consisting of a radiation flow region with embedded clumps. Throughout this paper, we will refer to the “flow” region as the lower-density region through which the radiation front propagates, and the clump region as the denser clumps around which the radiation flows. In experiments, the flow region is often composed of a silicate foam and the clumps are inclusions in that foam. In many astrophysical applications, the flow region is simply the lower-density region where the clumps are produced by dense turbulent eddies. The effect of the clumps depends on their densities, opacities, specific heats, geometry, sizes, and covering fractions. It also depends on the transport regime: the diffusion limit, the transport regime, or the free-streaming limit of the radiation flow. In the free-streaming limit, dense clumps can completely alter the nature of the flow. For instance, inhomogeneities in atmospheric clouds can lead to an increase in the albedo of the cloud layer, decreasing the amount of radiation that reaches a planet's surface. In the diffusive regime, radiation is more able to flow around clumps. Transport-regime
conditions lie in between these two extremes. Many of the solutions developed for radiation transport in inhomogeneous media are most appropriate in the diffusion limit. These methods often use Markov-process solutions <cit.> with Poisson distributions of the inhomogeneities. Some studies derived solutions assuming specific properties of the inhomogeneities <cit.>. Other studies focused on understanding the solution limits <cit.>. The simplifications required in these analytic or semi-analytic solutions drove scientists to develop increasingly sophisticated modeling tools to capture radiation flow in heterogeneous media <cit.>. Many of the studies focused on developing reduced-order models that rely on a subset of the relevant physics (e.g. mean free path with respect to clump size, number density of clumps, clump spacing, etc.).

A straightforward approach to introducing inhomogeneities in reduced-order techniques is to derive an effective opacity in the region with inhomogeneities. For example, the approach of <cit.> included an additional effective scattering opacity as well as a modification to the absorption opacity for an inhomogeneous, purely absorptive medium. The effective absorption opacity (σ_a,eff) is given by: σ_a,eff = 1/(p_1/σ_a1 + p_2/σ_a2), where p_1 and p_2 are the covering fractions of the clump and the surrounding medium, respectively, and σ_a1,a2 are the equivalent absorption opacities for these materials. They included an effective scattering opacity (σ_s,eff): σ_s,eff = ν^2/[σ̃(1 + λ_c σ̃)], where σ̃ = p_1 σ_a1 + p_2 σ_a2 is the average opacity, ν^2 = p_1 p_2 (σ_1-σ_2)^2, and λ_c = λ_1 λ_2/(λ_1+λ_2) is the reduced chord length, where the chord length is defined as the length of a chord covered by each material. Using this prescription, we can calculate the effective absorption and scattering opacities as a function of the covering fraction of the clumps. Figure <ref> shows the effective absorption and scattering opacities under this formalism as a function of the clump covering fraction. Although the effective absorption cross section does not increase until the clump covering fraction is large, the effective scattering term quickly approaches the value of the clump opacity. This scattering term can drastically lower the propagation speed of the radiation.

For stellar winds, <cit.> used a different approach, setting the opacity to σ = σ̃(1 + τ_cl f_ic)/(1 + τ_cl), where τ_cl is the clump optical depth and f_ic is the density of the inter-clump medium divided by the average density of the medium. Most of these past studies focused on transport-only solutions or solutions where radiation/hydrodynamic effects can be minimized. As such, the solutions typically only consider the relative opacities of the different clumps. More detailed studies included the effects of the size scale of the clumps (with respect to the photon mean free path). This effect becomes increasingly important as we move from the diffusive to the free-streaming transport regimes.

§.§ Experiments

A number of experiments studying radiation transport in clumpy media already exist.
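Before turning to those experiments, the two prescriptions above can be made concrete with a short script of our own; the opacity and chord-length values below are purely illustrative, not taken from any of the cited studies.

import numpy as np

# Macroscopic opacities (1/cm) of clump (1) and flow region (2), and chord lengths (cm)
sig1, sig2 = 10.0, 0.1
lam1, lam2 = 6.0e-4, 3.0e-3

for p1 in (0.1, 0.25, 0.5):                 # clump covering fraction
    p2 = 1.0 - p1
    sig_a_eff = 1.0 / (p1/sig1 + p2/sig2)   # effective absorption opacity
    sig_bar = p1*sig1 + p2*sig2             # average opacity
    nu2 = p1*p2*(sig1 - sig2)**2
    lam_c = lam1*lam2/(lam1 + lam2)         # reduced chord length
    sig_s_eff = nu2 / (sig_bar*(1.0 + lam_c*sig_bar))   # effective scattering
    print(f"p1={p1}: sig_a_eff={sig_a_eff:.3f}, sig_s_eff={sig_s_eff:.3f}")

# Stellar-wind variant for a clump optical depth tau_cl and inter-clump
# density ratio f_ic:
tau_cl, f_ic = 5.0, 0.2
print("wind opacity:", sig_bar*(1.0 + tau_cl*f_ic)/(1.0 + tau_cl))

Even at a 10% covering fraction the scattering term is already comparable to the average opacity, while the harmonic-mean absorption opacity stays close to the flow-region value, which is the behavior described for Figure <ref>.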
<cit.> performed experiments investigating supersonic radiation propagation in a low-density heterogeneous medium. These experiments studied radiation propagation in a low-density plastic (C6H12) foam target as well as in a Cu-doped foam (C6H12Cu0.394). The Cu was added by means of small (59 nm in diameter) Cu particles and achieved roughly a 2.14% atomic fraction. The foams were viewed face-on with a trichromatic streaked x-ray spectrometer (TCS). The TCS consisted of a three-imaging-pinhole array and a three-transmission-grating array coupled with an x-ray streak camera. The TCS observed the emission at two different photon energies, 210 eV and 840 eV. A soft x-ray spectrometer observed the soft x-ray spectrum of the hohlraum, from which a temperature could be inferred. Finally, a transmission grating spectrometer was used to measure the time-integrated spectrum from either the hohlraum or the foam sample. A delay in the radiation breakout was observed when comparing the Cu-doped foam with the pure foam case for emission at 210 eV; however, this trend was reversed when comparing emission at 840 eV. The opacity for both the pure foam and the Cu-doped foam was estimated at each of these energies. For the pure foam, the opacity was determined to be 273 cm^2/g and 407 cm^2/g for x-rays of energy 210 eV and 840 eV, respectively. For the Cu-doped foam, the opacity was determined to be 967 cm^2/g and 760 cm^2/g for 210 eV and 840 eV photons, respectively. This study concluded that differences in the opacities caused the different measured radiation flow speeds. But as we shall see here, the flow may also depend on the blow-off of the inclusions in the flow region.

More recent experiments <cit.> examined radiation flow with multiple particle sizes. These experiments used a gold hohlraum as a radiation source to drive a supersonic heat wave into a foam package. The inhomogeneous medium was a low-density (65 mg/cm^3) plastic foam, C15H20O6, with Au particles. The Au fraction from the particles ranged from 0 to 14% by atomic fraction. This type of characterization allows one to visualize the distribution of Au particles throughout the foam and to determine if the particles are clumping together, thereby resulting in a different distribution than originally thought. As we shall see in Section <ref>, characterization of these foams is absolutely crucial for understanding the radiation transport. A detailed description of the complete characterization of these particular foams is found in <cit.>.

Three different regimes were studied in these experiments, corresponding to no particles (the homogeneous situation) and particles that are sub-micron and 6 microns in diameter. The position of the radiation front was measured by observing the self-emission of the foam with a soft x-ray imager <cit.>. The soft x-ray imager observed a narrow bandwidth of radiation, peaking around 270 eV. The position of the radiation front was measured in each experimental case and compared with simulations. It should be noted that while foam packages were observed face-on in many previous radiation flow experiments, these measurements were made side-on in order to allow multiple measurements to be made and a time history to be compiled.

In order to demonstrate the ability to correctly simulate and understand the radiation propagation with this particular foam, measurements were first obtained with a homogeneous foam.
Next, measurements of the position of the radiation front were made in inhomogeneous foams and then compared with two types of simulations: simulations of the pure foam case and simulations using an atomic-mix model. Although the atomic-mix model matches the smallest particles well, neither model could explain the radiation front position for the 6-micron-diameter particles. The Pomraning model <cit.>, a precursor to the model described in Section <ref>, was then applied to the simulations. This model assumed a mean particle size of 6 microns in diameter and that the particles were in pressure equilibrium with the foam. The mean opacity was determined based on the material properties at each timestep in the calculation. The mean opacity was then divided by the opacity of the homogeneous foam to determine a correction factor to apply. This correction factor depended on the temperature of the Au-foam mixture and therefore changed in time as the temperature of the system evolved. Within the uncertainties of these experiments, the Pomraning model provided a good fit to the data.

§ PHYSICS OF INHOMOGENEOUS FLOWS

In this section, we present a wide set of calculations of radiation flow across an inhomogeneous medium, using simplified conditions to better understand the physics. Section <ref> discusses the basic models used, followed by two sections focusing on single- (Section <ref>) and multi-clump (Section <ref>) physics effects.

§.§ Simulation Tools and Initial Conditions

For our calculations, we use conditions close to those of the XFOL and XFLOWS experimental campaigns led by LANL <cit.> and the Cassio code developed under LANL's Advanced Simulation and Computing program. The Cassio code models hydrodynamics using a cell-based adaptive mesh refinement scheme with a two-shock approximate Riemann solver. The code has been verified against a variety of analytic test problems, the most relevant for this problem being the Sedov blast wave <cit.>. Radiation transport can be modeled using a variety of transport solutions including flux-limited diffusion <cit.>, discrete ordinates (S_N) <cit.>, and Implicit Monte Carlo transport <cit.>. The Cassio code has been used extensively in the laboratory experimental community for a wide variety of problems studying hydrodynamics or radiation-hydrodynamics <cit.>. In addition to many of these validation tests, Cassio has been verified against other experimental codes <cit.>. It has been applied to a number of radiation-hydrodynamics problems in astrophysics <cit.> and has been verified against other astrophysical codes as well <cit.>. For most of our calculations, we use Cassio's S_N transport scheme with an N=8 quadrature set and 71 energy groups. The coarse grid resolution is 0.6 μm and we use only 2 levels of refinement. We discuss comparisons to other transport schemes and grid resolutions below.

The initial conditions for our simulations are guided by the active XFOL and XFLOWS experimental campaigns <cit.> that build upon past Pleiades and COAX radiation flow studies <cit.>. In these experiments, a laser-driven hohlraum (either at the Omega laser facility in Rochester <cit.> or the National Ignition Facility at Lawrence Livermore National Laboratory <cit.>) produces a radiation front that then propagates through a target.
This experiment consists of a target placed on top of a hohlraum (Figure <ref>). Lasers fired into the hohlraum create a hot, radiation-dominated plasma that then propagates through the experimental target. The power in this drive peaks quickly and then decays slowly over the course of our few-ns simulation. In our multi-clump, high-drive models, we also consider a simplified flat power (or temperature) for this drive. The target is a silicon dioxide foam filled with vanadium-oxide inclusions; the opacities are obtained from the OPLIB database <cit.>, calculated with the Los Alamos suite of atomic physics codes <cit.>. In this paper, we focus solely on modeling the target, using a radiation source term at its base to represent the hohlraum-produced radiation flow. The target is modeled in cylindrical symmetry with a radius of 0.04 cm and a height of 0.08 cm.

§.§ Single Clump Studies: Probing the Fundamental Physics

The interaction of a radiation flow with a clumpy medium has much broader effects than the alteration of the flow of radiation, including outflows and shocks. The magnitude of these effects depends upon the properties of the clumps. To understand this physics, we have conducted a number of focused, single-clump radiation studies. For these studies, we use our standard experimentally-motivated target (Section <ref>) with a single embedded clump. These single-clump simulations allow us to study the physical effects and the dependencies of these effects on the properties of the clumps.

As a first study, we focus on the effects of clump density. Figure <ref> shows the density map of 4 separate simulations 1.6 ns after the launch of the radiation front, where the only difference among the 4 simulations is the density of the clump. For these clumps, the compositions of the clump and flow regions are identical. Only the clump density is varied, including models where the clump density is lower or higher than the ambient flow-region density. For the low-density clump, radiation rushes through the cavity, leading the flow in the ambient region. For high-density clumps, the radiation flows around the clumps. For modest density increases (e.g. 10 times the ambient medium), the radiation can both ablate and compress the clump. However, the timescale to do this increases with increasing clump density and, at the snapshot in time shown in Figure <ref>, the radiation flow has little effect on the clump. The radiation heats up the clump, striving to place it in temperature equilibrium. If the clump is denser than its surroundings, it begins to blow a wind, sending a material flow back into the flow region. As we shall discuss below, if the composition of the clumps is different from that of the ambient medium, these outflows mix into the flow region. If this material has higher opacity, it can choke off the radiation flow.
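The onset of this blow-off can be anticipated from a simple pressure balance, written out quantitatively in the next paragraph; as a preview, a minimal sketch of our own (illustrative density choices, with the constants quoted below) evaluates the clump temperature at which ideal-gas pressure alone matches the radiation pressure of the flow region.

import numpy as np

R_GAS = 8.31e7      # universal gas constant [erg K^-1 mol^-1]
A_RAD = 7.567e-15   # radiation constant [erg cm^-3 K^-4]
EV = 11604.5        # kelvin per eV

def t_crit(rho_over_A, T_flow):
    # clump temperature at which its ideal-gas pressure alone matches the
    # radiation pressure of the flow region (the T_crit estimate below)
    return (A_RAD/3.0) * T_flow**4 / (rho_over_A * R_GAS)

for T_flow_eV in (100.0, 1000.0):        # low and high drives
    for rho_over_A in (0.01, 0.3):       # flow-like and 30x-denser clump
        Tc = t_crit(rho_over_A, T_flow_eV*EV) / EV
        print(f"T_flow={T_flow_eV:.0f} eV, rho/<A>={rho_over_A}: T_crit={Tc:.3g} eV")
# For the ~100 eV drive, T_crit is far below the drive temperature, so even
# mild heating overpressures the clump and drives blow-off; only for the
# ~keV drive (and the lower clump densities) does radiation pressure begin
# to confine the clump, consistent with the 1-3 keV crossover quoted below.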
Another feature of these outflows is that they can cause shocks, raising the temperature of the outflow (Figure <ref>). These shocks are critical in understanding supernova outflows, producing extended ultraviolet and X-ray emission <cit.>. The alterations in the flow caused by a low-density region can also alter the temperature profile. Especially in ambient media where the radiation is in the transport or free-streaming regime, this heating from radiation-matter interactions can dramatically alter the emerging spectra. If the radiation flow is highly photon-energy dependent, this effect can also alter the flow of radiation.

Matter interactions are important in our simulations. They drive outflows that both heat the material, altering the energy distribution of the radiation, and inject clump material into the ambient medium. The physics behind these outflows can be understood through pressure and energetic constraints. Radiation heats the clumps, striving to achieve temperature equilibrium. But, as soon as the pressure in the clump exceeds that of the radiation-dominated gas, it will expand. The pressure in the clump/flow region can be approximated by a combination of ideal gas and radiation pressure: P_clump/flow = (ρ_clump/flow/⟨A⟩) R T_clump/flow + (a_rad/3) T^4_clump/flow, where P_clump/flow, ρ_clump/flow and T_clump/flow are the pressure, density and temperature of the clump or flow region, ⟨A⟩ is the average atomic weight of the material, R=8.31 × 10^7 erg K^-1 mol^-1 is the universal gas constant, and a_rad=7.567 × 10^-15 erg cm^-3 K^-4 is the radiation constant. Figure <ref> shows the clump temperature versus the flow-region temperature when the pressure in the clump equals that of the flow region. The lines in this plot denote pressure equilibrium at the two temperatures on the x and y axes. We have set up rough conditions for our experiment where ρ_clump/flow/⟨A⟩ = 0.01, and we vary the clump value up to 30 times higher. At these densities, ideal gas pressure dominates the radiation pressure at the temperatures in the experiment. As the clump is heated, its pressure will exceed that of the flow region and it will expand, blowing a wind into the flow region. If the radiation pressure dominates (at roughly 1–3 keV for these conditions), the blow-off is likely to be minimal. For many astrophysical applications, the density of the material is lower and the temperature at which radiation dominates will also be lower. The critical temperature (T_crit) can be estimated by assuming that radiation pressure dominates the flow region and determining the temperature in an ideal gas to match that pressure, i.e.
T_crit = (a_rad/3) T^4_flow ⟨A⟩/(ρ_clump R). Table <ref> provides rough values of the critical temperature above which radiation pressure dominates and this blow-off is minimized. For the different phenomena, both the clump density and the temperature of the flow region can vary by many orders of magnitude. For most stars, large stellar radii and low mass-loss rates lead to conditions where the stellar radiation dominates the pressure. Even though Wolf-Rayet stars are hotter, their mass loss rates are higher and the stars are more compact. Near the stellar surface, material pressure can be important.

The specific heat of the clump material dictates the amount of energy that must be injected into this material to heat it (Δ T ∝ Δ E/c_v: the change in temperature is proportional to the change in energy divided by the specific heat). This will determine both how quickly the clump begins to expand or blow off as well as the temperature of this blow-off. The higher the specific heat, the longer it takes for the radiation and material temperatures to reach an equilibrium. The specific heat dictates the timescale for the temperature in the clumps to rise. Because the opacity is extremely sensitive to the temperature, the specific heat can play a huge role in determining the evolution of the opacity in the flow. From our analytic models, we expect the outflows to depend upon both the specific heat and the opacity of the material. Figure <ref> shows the temperature map from two simulations at two snapshots in time where the specific heat is raised and lowered by an order of magnitude. For high specific heat models, the temperature in the clump need only rise modestly to achieve sufficient pressures to push out against the ambient medium, driving an outflow into the flow region. If the clump material has a lower specific heat, it must become hotter to achieve the requisite pressures to drive an outflow. The outflow will take longer to expand and will be hotter. Figure <ref> shows the density and volume fraction (f_vol) maps corresponding to the temperature maps in Figure <ref>. Because the outflow starts earlier for the higher specific heat model, it expands more quickly into the ambient medium. In addition to being lower temperature, the density of the outflow is lower for the higher C_v clump. If the opacity of this material is high, it can cut off the radiation flow. Although the high specific heat clump material expands more quickly, it is less dense, so the impact on the radiation flow may be greater for the clumps with lower specific heats.

The opacity of the clump material can modify the evolution of the clump. Figures <ref> and <ref> show the density, temperature, opacity and volume fraction for models where the opacity of the clump material is raised or lowered. If the opacity of the clump is high, the radiation is unable to penetrate deeply into the clump. This means that the wind from the clump taps only a small mass reservoir. The resulting outflow is lower density but higher temperature. The extent of the high-opacity outflow is slightly lower. Depending on how the opacity evolves as the material transitions from a solid to a plasma, the nature of the clump-material opacity can have a huge impact on the radiation flow.

One important aspect of many computational studies of transport in inhomogeneous media is that all opacity databases that we are aware of assume that the material is a plasma. Especially for laboratory experiments, the embedded clumps are in a solid phase, and the opacity of this material can be very different from opacities
obtained under plasma conditions. Electrons obey Fermi-Dirac statistics, which ensures that at solid densities, the partial occupation of higher-lying energy levels decays slowly, resulting in the long characteristic Fermi-Dirac tail. However, in dense plasmas, the eigenspectrum is continuous and the partial occupations die out rapidly. This behavior governs the intra- (inter-) band transitions, or equivalently, bound–bound, bound–free, and free–free electron transitions, and therefore has a direct impact on optical spectra <cit.>. A recent study of asymmetric carbon–hydrogen (CH) mixtures has shown that compressing the system to three times its density at k_BT=10 eV results in an almost threefold increase in its electrical and thermal conductivities <cit.>. A compressed “nonideal” plasma state emerges in numerous dynamic compression environments, such as in the laser irradiation of condensed matter, in pinched electric discharge experiments, or when critical density values are attained at high static pressures <cit.>. In the presence of spatial inhomogeneities, a subtle interplay exists between the temperature gradient arising from thermal conduction and the density profile that evolves to compensate for the thermal gradient <cit.>. At an inhomogeneous interface, different opacities create a temperature gradient at the junction, driving a density gradient. The response of the density to this induced temperature change can serve as a diagnostic tool for thermal conductivity, a critical parameter for planetary modeling.

In these studies, we have assumed that numerical issues (e.g. grid resolution, radiation transport methods) do not significantly alter our results. Our standard simulations are fairly high resolution (in a 0.04 cm = 400 μm target, our resolution is 0.6 μm). However, ideally, our calculations would resolve the mean free path of the radiation, and typical mean free paths in our clump are 0.15 μm. To test the resolution in our standard calculations, we ran a series of high-resolution calculations where the resolution was 0.15 μm. The results comparing the standard- and high-resolution calculations are shown in Figure <ref>. Although small structures seem to be appearing at t=2 ns, the outflow and shock positions are identical in these calculations.

Another numerical uncertainty lies in the implementation of the radiation transport. As discussed in Section <ref>, the Cassio code includes two higher-order transport methods: discrete ordinate S_N and IMC methods. These two approaches use very different mathematical representations of the angular distribution of the photons and the radiation transport. For most of our calculations, we use the discrete ordinate method. Figure <ref> shows the results of the IMC run. The IMC boundary condition is slightly different from that used in the S_N calculations, and the angular distribution of the radiation will be slightly different. Comparing to the results in Figure <ref>, we note that, although there are differences in the sourcing and in the radiation/material coupling algorithms that deposit energy in the zones, the interaction with the clump and its outflow is very similar between the IMC and S_N calculations.
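As a quick bookkeeping aid for such resolution checks, a few lines of our own (using the grid and mean-free-path numbers quoted above) tabulate the optical thickness of a single cell; a cell much thicker than one mean free path signals an under-resolved radiation front inside the clump.

def mean_free_paths_per_cell(dx_um, lam_mfp_um):
    # ~ optical depth accumulated across one grid cell
    return dx_um / lam_mfp_um

LAM_CLUMP = 0.15                  # clump photon mean free path [um], from the text
for dx in (0.6, 0.15):            # standard and high-resolution grids [um]
    print(f"dx={dx} um: {mean_free_paths_per_cell(dx, LAM_CLUMP):.1f} "
          "mean free paths per cell")
# 0.6 um cells are ~4 clump mean free paths thick; the 0.15 um grid is marginal.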
One numerical issue that might lead to differences, but that we do not test here, is the implementation of the hydrodynamics. For instance, the Cassio code uses adaptive mesh refinement (we only do 2 levels of refinement). We do not compare our results to a Lagrangian hydrodynamics scheme.

In some scenarios, the radiation in the flow can cause the clumps to collapse. For example, in star-forming regions, radiation pressure causes clumps that are on the boundary of gravitational stability to begin to collapse, seeding star formation <cit.>. Radiation pressure also drives the implosion of capsules in inertial confinement fusion <cit.>. The exact details of the collapse depend on the amount of energy injection versus the radiation pressure. As this subject matter diverges from the primary focus of the studies here, we will defer this discussion and detailed study to a later paper.

§.§ Multi-Clump Studies: Putting it all Together

Realistic inhomogeneous radiation-flow problems typically involve chaotic inhomogeneities with many dense clumps. To understand better how radiation flows through a multi-clump medium, we expand our study to include a series of calculations with 25 clumps in our flow region (<cit.> varied the size and number of clumps in a similar experimental setup). To better understand the additional complexity caused by this structure, we have run a series of randomly spaced multi-clump calculations. We continue to use the clump properties (density, composition) from the XFOL and XFLOWS laboratory experiments <cit.>. We then vary the number, size and distribution of the clumps to better understand the broader implications of inhomogeneous radiation flow. In particular, these studies allow us to investigate the effects of blow-off (and its dependence on the radiation pressure) under more realistic conditions.

For our first study, we focus on the impact of hydrodynamic feedback on the flow of radiation. Figure <ref> shows the temperature profile of the same initial clump profile, but with two different drives, each comparing a simulation that is pure transport (“no hydro”) with another that includes hydrodynamic feedback. The pure-transport calculations show an extreme limit where the radiation simply flows around the clumps. For the lower drives, the hydrodynamic feedback is faster than the flow timescale and the clumps expand to block the radiation flow. The radiation flow is dramatically altered by the blow-off from the clumps. With a stronger drive, the outflow is minimized due to both higher radiation pressures and the fact that the flow is faster, allowing less time for hydrodynamic feedback. In the stronger-drive simulations, the differences between the pure-transport and radiation-hydrodynamics simulations are less dramatic.

Because the radiation flow is strongly affected by the clumps in the material-pressure dominated regime, the flow of radiation strongly depends on the distribution of the clumps (in the low clump-number regime). We have run 100 different 25-clump calculations, studying the flow of radiation. Figure <ref> compares the time it takes for the radiation to emerge from the clump region for these 100 distinct simulations, assuming a random distribution of clumps but conserving coverage area. The propagation timescale of the radiation front varies by 20%. For many inhomogeneous flows, it will be difficult to exactly characterize the inhomogeneities, and any solution will be limited by this uncertainty.

The complex structures are very important in our standard models where the radiation
temperature is ∼ 100 eV and the material pressure of the clumps (after being heated by the radiation) exceeds the radiation pressure. We have run a series of simulations using a high-temperature radiation source where the radiation temperature is 1 keV. At these temperatures, radiation pressure dominates, simplifying the radiation flow. However, the physics is still much more complex than simple pressure-equilibrium physics suggests or than the subgrid models discussed in Section <ref> assume. Figure <ref> shows the material and radiation pressure of two 100-clump, high-temperature simulations. First note that the radiation and material temperatures are very different for these two models. At these high temperatures, the radiation front moves through our target much faster than the material can equilibrate. Even without this equilibration, blow-off from the clumps still affects the long-term flow.

For our clump and flow region compositions, as well as for a fixed density and temperature, we expect the clump opacity to be only a factor of 4 higher than the opacity in the flow region. As such, following the prescriptions in Section <ref>, we do not expect such a dramatic effect on the propagation timescale. A few factors contribute to this seeming discrepancy. First, the material temperatures of the clumps and the flow region are not the same (especially just as the radiation front propagates across the clumps), and this difference becomes larger for larger clumps. This can lead to a much more dramatic variation between the clump and flow-region opacities (Figure <ref>). The opacities of our materials vary dramatically with temperature, and this variation is at the heart of the differences between our simulation results and those of the past analytic prescriptions described in Section <ref>. Because the temperature in the clumps evolves differently than that of the flow region, even if the opacities of the materials are very similar at the same temperature, the opacities for the clump vs.
flow region can be very different if their temperatures are different. Especially near the radiation front, the opacities can differ by more than two orders of magnitude. In addition, even though the radiation pressure dominates, the clumps do expand and begin to constrain the entire flow region (Figure <ref>). If we were only interested in the instantaneous speed of the front, this expansion might not matter. But in steady-state cases, or cases where the front is driven by further radiation, this clump expansion will alter the radiation flow in this inhomogeneous medium.

To further understand the role of clumps, we include two more sets of calculations. In Figure <ref>, we show the results of three multi-clump calculations, varying the clump number from 25 to nearly 100 clumps. This set of calculations effectively shows the effect of raising the covering fraction from 11% to 44%. Although the basic trends follow what we expect from our subgrid prescriptions, the opacity evolution and expansion factors exacerbate the effects. Another way to understand the deviations from our simple subgrid models is to compare the results of two models with the same clump covering fractions, but different sizes and numbers of clumps (Figure <ref>). Although the covering fraction is initially identical between these two runs, the radiation fronts evolve very differently. This behavior is primarily due to the fact that even a small amount of outflow from the clumps can constrain the flow of radiation. Clearly, even in the radiation-dominated case, hydrodynamic effects like outflows/winds will be important. The subgrid formulae in Section <ref> will struggle to match these data. The <cit.> formulae discussed in Section <ref> do not reproduce the magnitude of this variation. Although the additional scattering term in the <cit.> approach can come closer to fitting the data, this requires incorporating the temperature dependence of the opacity. We defer a detailed application-specific comparison of these subgrid models to a later paper.

§ APPLICATIONS REVISITED

Radiation flow through inhomogeneous media is critical for a broad range of problems in astrophysics and beyond (see Table <ref>). In many cases, the inhomogeneities cannot be resolved in the large-scale calculations of these applications. In this paper, we reviewed some of the recipes developed to capture radiation transport through these media and the experiments developed to test them. This paper included a large set of simulations to better understand the physics behind radiation flow through an inhomogeneous medium. We found that for regimes where the radiation pressure is not dominant, radiative heating of the high-density clumps can cause the clumps to expand. This wind can increase the opacity throughout the medium, dramatically altering the flow of radiation. Most of the recipes in the literature focus on transport-only effects, which are more valid for the conditions where radiation pressure dominates. But, as we have seen in this paper, even in a radiation-pressure dominated system, radiation-hydrodynamics effects cannot be ignored.

In all of our models, energy deposition from the radiation front played a much bigger role than momentum deposition, and hence “outflows” or “winds” were more important for the subsequent evolution than ablation. For less dense clumps, as we might expect from turbulent instabilities, ablation can play a much more important role <cit.>. Ablation will have a similar effect to blow-off in the sense that it will disperse the clumpy
material, causing shocks and cutting off the flow. But the nature of ablation will be different from the models shown here, and much more work must be done to understand all aspects of inhomogeneous radiation flow for the broad set of applications discussed in this paper. We noted the complications caused by opacities that vary rapidly with temperature. In our setup, the opacities of both materials drop by over an order of magnitude as the radiation front propagates across the flow region. The opacity of the clump material will drop much more slowly because it takes time for the clumps to heat. The same is true for many astrophysical conditions. Figure <ref> shows the solar opacities from the Los Alamos OPLIB database for a range of temperatures and densities. The temperature of a giant star is roughly 10,000 K, but can rise up to 1 million K when the supernova blastwave breaks out of the star. In such scenarios, the opacity can drop by as much as 3 orders of magnitude. Any subgrid model estimating the opacities of a clumpy medium must include this temperature dependence. This figure assumes the radiation and matter are in equilibrium. As we showed in our simulations, it is unlikely that this will hold in many applications. Out-of-equilibrium effects must be included to fully follow this radiation flow. At this time, the physics is sufficiently complex that no model captures all of these effects. A much more comprehensive study is needed to develop a more generic solution, and it may be that the more appropriate approach would be to devise individual recipes for specific problems. We defer this study to a later paper. The work by CLF was supported by the US Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). A portion of the work by CLF was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.
http://arxiv.org/abs/2312.16677v1
{ "authors": [ "Christopher L. Fryer", "Paul A. Keiter", "Vidushi Sharma", "Joshua Leveillee", "D. D. Meyerhofer", "D. H. Barnak", "Tom Byvank", "A. T. Elshafiey", "Christopher J. Fontes", "Heather M. Johns", "P. M. Kozlowski", "Todd Urbatsch" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20231227184301", "title": "Understanding Radiation Flow in a Stochastic Medium" }
Improving Transferability for Cross-domain Trajectory Prediction via Neural Stochastic Differential Equation
Daehee Park, Jaewoo Jeong, Kuk-Jin Yoon
====================================================================================
Multi-agent trajectory prediction is crucial for various practical applications, spurring the construction of many large-scale trajectory datasets, including vehicles and pedestrians. However, discrepancies exist among datasets due to external factors and data acquisition strategies. External factors include geographical differences and driving styles, while data acquisition strategies include the data acquisition rate, history/prediction lengths, and detector/tracker errors. Consequently, the proficient performance of models trained on large-scale datasets has limited transferability to other small-scale datasets, restricting the utilization of existing large-scale datasets. To address this limitation, we propose a method based on the continuous and stochastic representations of Neural Stochastic Differential Equations (NSDE) for alleviating discrepancies due to data acquisition strategy. We utilize the benefits of continuous representation for handling arbitrary time steps and of stochastic representation for handling detector/tracker errors. Additionally, we propose a dataset-specific diffusion network and its training framework to handle dataset-specific detection/tracking errors. The effectiveness of our method is validated against state-of-the-art trajectory prediction models on the popular benchmark datasets: nuScenes, Argoverse, Lyft, INTERACTION, and the Waymo Open Motion Dataset (WOMD). Improvement in performance gain on various source and target dataset configurations shows the generalized competence of our approach in addressing cross-dataset discrepancies. The code is available at <https://github.com/daeheepark/TrajSDE>. § INTRODUCTION Trajectory prediction stands as one of the most crucial challenges in ensuring the safety of autonomous driving systems. Its objective of predicting future trajectories allows autonomous agents to respond optimally to actively changing environments. As a response, a number of large-scale trajectory datasets such as nuScenes, Argoverse, WOMD, Lyft, INTERACTION, and TrajNet++ have been established <cit.> to pursue a data-driven approach towards constructing a reliable motion forecasting system <cit.>. One well-known issue with data-driven models is their limited performance when discrepancies in data distributions manifest between training and test data. Therefore, to construct a trajectory prediction system for a specific environment, the optimal way is to collect data from that environment. However, recent models require abundant data for optimal performance, which entails a cumbersome acquisition process. In that sense, adequate utilization of existing large-scale datasets grants an advantage in circumventing this hurdle. Recent approaches have attempted to overcome such challenges by proposing domain adaptation <cit.> or by increasing model generalizability via multi-source dataset training <cit.>.
Compared to these efforts in handling domain gaps, the dataset-specific discrepancies caused by disparities between data acquisition strategies have been excluded from consideration as a domain gap and have been less visited. Our work shows that adequate handling of these dataset-specific discrepancies unlocks a collective potential from cross-dataset motion patterns. In doing so, we focus on two representative distinctions across datasets. The first is the difference in time step configuration, including observed/predicted time lengths and sampling frequencies, as shown in Tab. <ref>. This results in a discrepancy between the feature manifolds of input/output trajectories in the feature space. For instance, a model trained on the WOMD dataset, which predicts 8 seconds of future from 1 second of past at 10 Hz, learns a mapping function between the past 1-second motion feature and the future 8-second motion feature. However, when evaluating this model on the nuScenes dataset, which involves predicting 6 seconds into the future from 2 seconds of observed data at 2 Hz, the model struggles to map past trajectory features to future ones accurately. Secondly, trajectory datasets are obtained by detecting and tracking surrounding agents from the sensor data taken from the ego-agent. As a result, the tracked results (tracklets) are prone to both sensor noise and inaccurate detection and tracking results <cit.>, which adversely affect prediction performance <cit.>. Moreover, each dataset manifests unique tendencies of tracklet errors, because different types of sensors and detector/tracker configurations are used in the acquisition process. Their unique tendencies of tracklet errors are shown in Fig. <ref>. The tracklet noise is also influenced by the time step configuration, as different sampling rates exhibit unique noise patterns. Namely, tracklet noise tends to be more severe with smaller Δ t, as shown in Fig. <ref>, where Argoverse has more severe tracklet noise than nuScenes with the same past length. To address these disparities holistically, we adopt the continuous and stochastic representations of Neural Stochastic Differential Equations (NSDE). Rather than dealing with time series data discretely as conventional approaches do, we leverage NSDE to handle time series data in a continuous space. Additionally, we show the capability of the stochastic representation in handling tracklet errors. Specifically, we propose a dataset-specific diffusion network of NSDE and its training method to enhance robustness against dataset-specific tracklet errors. Our contributions are summarized as follows:
* We utilize a continuous representation of NSDE for trajectory prediction to diminish internal discrepancies across datasets collected with arbitrary temporal configurations.
* We propose a framework of dataset-specific diffusion networks and a training method to handle unique tracklet errors across datasets.
* The proposed methods are validated against state-of-the-art prediction methods, including a regression-based and a goal-conditioned method, the two mainstreams of trajectory prediction methodology.
* We validate our methods across five datasets: nuScenes, Argoverse, WOMD, Lyft, and INTERACTION, and show consistent improvement in prediction accuracy with state-of-the-art prediction models.
§ RELATED WORKS §.§ Trajectory prediction Trajectory prediction involves predicting the future paths of road agents based on observed past trajectories and environmental information, such as HD maps <cit.>.
With increasing interest in the task, a number of large-scale trajectory datasets have been established <cit.>. These datasets acquire trajectories by detecting and tracking surrounding agents using sensors installed on the ego-agent. HD map information can be obtained from pre-built HD maps or derived from sensor data <cit.>. The introduction of large-scale datasets has resulted in improved performance of data-driven trajectory prediction models. Various methods have been proposed to capture agent interactions or to better exploit the relationship with HD maps <cit.>. The methodologies for motion forecasting based on these datasets' patterns can be broadly classified into two categories: regression-based and goal prediction-based models. Regression-based models predict the entire trajectory at once, while goal prediction-based models first predict the endpoints, followed by conditional generation of a motion path for each endpoint. However, despite the rapid advancements in the past few years, prediction in cross-domain scenarios remains relatively underexplored, as all these methods have been individually trained and evaluated on each pre-existing large-scale dataset. §.§ Cross-domain trajectory prediction Recent research has highlighted the presence of domain discrepancies among various trajectory datasets <cit.>. Analyzing datasets such as nuScenes, Argoverse, INTERACTION, and Shift, it has been confirmed that transferability between datasets is limited. From a general domain adaptation perspective, approaches have been proposed to address such discrepancies <cit.>. Besides, trajectory datasets exhibit discrepancies due to various factors. For instance, geographical (external) factors can lead to variations in driving environments and agent density, resulting in different driving patterns. To tackle discrepancies related to road structure curvature, one method <cit.> proposed a domain normalization technique using Frenet coordinates. However, the methods mentioned above do not consider discrepancies arising from different data acquisition strategies, including varying time step configurations and tracklet errors. While these domain adaptation papers all set up cross-domain experiments, they restricted the time steps to an overlap of all dataset time steps. For example, in <cit.>, they used a common time step configuration of 1 second past and 3 seconds future, which is shared by the datasets. Additionally, although they acknowledged that detection/tracking errors during dataset collection could affect transferability <cit.>, the different tendencies of error in cross-domain environments are yet to be addressed. §.§ Neural Differential Equation (NDE) The proposal of Neural Ordinary Differential Equations (NODE) <cit.> has made significant strides in applying continuous representations to time series data, making them intuitive for various tasks like predicting continuous functions <cit.> or other applications that need continuous time series representations <cit.>. Neural ODEs have been employed in encoder-decoder structures and have been used to represent the latent space of entire time series data in continuous form <cit.>.
Therefore, motion forecasting has been deemed an epitome of a pattern recognition task solvable via NODEs, owing to its temporally coordinated time series structure. The first work to utilize NODEs for motion forecasting was Social ODE <cit.>, which applied Neural ODEs to pedestrian trajectory prediction to enable interaction modeling. Moreover, MTP-GO <cit.> constructed a graph-based NODE model for trajectory prediction. Nevertheless, these prior works have not fully leveraged the continuous characteristics of neural ODEs and lack the stochastic nature of NSDE modeling; thereby, our method substantially differs from previous frameworks based on NODEs. § METHOD §.§ Preliminaries §.§.§ Problem definition Given N agents, the position of a road agent n ∈{ 1, ..., N} at a specific time t is denoted as x_t^n for the past and y_t^n for the future. In general, trajectory prediction aims to predict the future trajectory Y = { y_Δ t^n, ... , y_T_f^n } from map information M (optional) and the observed history trajectory X = { x_-T_p^n, ... , x_-Δ t^n, x_0^n } with fixed time step Δ t, history length T_p, and prediction horizon T_f. In our problem definition, we assume that we have a large-scale source dataset {X^sr, Y^sr} and a small-scale target dataset {X^tg, Y^tg}. Because each dataset has its own time step configuration (T_p, T_f, and Δ t), we design a model that can handle arbitrarily time-step-sampled trajectories Y = { y_t^n }_t ∈ (0, T_F] and X = { x_t^n }_t ∈ [-T_P, 0], where T_F and T_P are the maximum values of the prediction horizon and history length across datasets. From now on, we omit the superscript n for simplicity. §.§.§ Neural Stochastic Differential Equation A Neural Ordinary Differential Equation (NODE) is an approach that models the derivative of a hidden state h_t, employing neural networks to model the transition of features over time. A Neural Stochastic Differential Equation (NSDE) introduces stochasticity by incorporating a term resembling Brownian motion into the transition of the hidden state <cit.>. This can be represented as follows: d h_t = f(h_t, t)dt + g(h_t, t)dW_t. Here, W represents the standard Brownian motion, while f and g respectively denote the drift and diffusion functions and are parametrized by neural networks. The stochastic noise term acts as a regularizer, mitigating perturbations present in the data. With the above derivative, we can get the hidden state at a specific time t with initial value problem (IVP) solvers. Thanks to its continuous nature across time, an NDE is known to be effective for handling irregularly sampled time series data <cit.>. Therefore, we use NSDEs to encode and decode temporal trajectories, which is originally performed with discrete networks like Transformers <cit.> or LSTMs <cit.> in previous methods.
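Since the training details below note that torchsde <cit.> is used for the SDE components, the drift f and diffusion g above can be sketched as small neural networks driving a torchsde solver. The following is a minimal, self-contained illustration of this idea only; the network sizes, names, and the diagonal-noise choice are our own assumptions, not the authors' released implementation.

```python
import torch
import torchsde

class LatentSDE(torch.nn.Module):
    # Minimal parameterization of d h_t = f(h_t, t) dt + g(h_t, t) dW_t.
    noise_type = "diagonal"  # one Brownian noise scale per latent dimension
    sde_type = "ito"

    def __init__(self, dim=64):
        super().__init__()
        self.f_net = torch.nn.Sequential(  # drift network
            torch.nn.Linear(dim + 1, dim), torch.nn.Tanh(),
            torch.nn.Linear(dim, dim))
        self.g_net = torch.nn.Sequential(  # diffusion network, output in (0, 1)
            torch.nn.Linear(dim + 1, dim), torch.nn.Sigmoid())

    def _cat_time(self, t, h):
        # Append the scalar time t as an extra feature to the latent state.
        t_col = torch.full((h.size(0), 1), float(t), device=h.device)
        return torch.cat([h, t_col], dim=-1)

    def f(self, t, h):
        return self.f_net(self._cat_time(t, h))

    def g(self, t, h):
        return self.g_net(self._cat_time(t, h))

sde = LatentSDE(dim=64)
h0 = torch.zeros(8, 64)                        # a batch of 8 agents
ts = torch.tensor([0.0, 0.1, 0.25, 0.5, 1.0])  # irregular query times are fine
hs = torchsde.sdeint(sde, h0, ts, method="euler")  # shape (len(ts), 8, 64)
```

Because the solver can be queried at arbitrary time stamps ts, the same latent dynamics serve both a 2 Hz and a 10 Hz dataset without re-discretizing the model.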
§.§ Proposed Framework §.§.§ Modeling time-wise continuous latent Following conventions in both NDE and trajectory prediction, our model follows an encoder-decoder (sequence-to-sequence) structure, as shown in Fig. <ref>. First, we encode the past trajectories of agents with an SDE-GRU. We adopt the ODE-RNN structure to handle incoming irregularly sampled data, where the ODE is replaced with an SDE. When the input position is not observed at a time stamp, the latent is continuously translated via the NSDE. If the agent position x_t at time t is observed, the latent vector h_t is updated using the encoded incoming data e_x_t via a GRU, following <cit.>. The next-step latent is obtained from the current input and latent as follows: d h_t = f(h_t, t)dt + g(h_t, t)dW_t, h'_t+Δ t = h_t + f(h_t, t) Δ t + g(h_t, t) √(Δ t) W_t, h_t+Δ t = GRU(h'_t+Δ t, e_x_t). Here, the GRU cell is represented as: r_t = σ ( W_r( h'_t+Δ t⊕ e_x_t ) + b_r ), z_t = σ ( W_z( h'_t+Δ t⊕ e_x_t ) + b_z ), g_t = tanh( W_g( (r_t ⊙ h'_t+Δ t) ⊕ e_x_t ) + b_g ), h_t+Δ t = z_t ⊙ h'_t+Δ t + (1-z_t) ⊙ g_t, where r_t, z_t, g_t correspond to the reset gate, update gate, and update vector, and ⊕, ⊙ correspond to concatenation and the element-wise product. Here, we omit the superscript past in Eqs. <ref>, <ref> for simplicity. Then, integrating the latent feature from -T_P to 0, we get a latent feature for each agent at the current time step (t=0): h_0^past = ∫_-T_P^0 SDE-GRU((h_t^past, t), e_x_t) dt, h_0^fut = 𝔼(h_0^past, ℳ). After encoding the past trajectory as a single feature per agent, the remaining part of the encoder 𝔼 is performed, such as encoding with the map information ℳ. The decoder has a different network design depending on whether the base model is a regression-based model or a goal-conditioned model. Unlike the past feature, which needs to be updated as data come in as t passes, there is no incoming data for future decoding. Therefore, in a regression-based model, the hidden state at a future time step can be obtained by a vanilla SDE solver without the GRU update. However, in the case of the goal-conditioned method, we propose a multi-scale goal updating method, as depicted in the right part of Fig. <ref>. The goal-conditioned decoder predicts the trajectory both from h_0 and a goal feature. A goal is predicted at the last time step of each dataset configuration: 𝒢_T_f^1, 𝒢_T_f^2. Here, T_f^1 is the smaller of { T_f^sr, T_f^tg }, and T_f^2 is the larger one, which is identical to T_F. To adopt this multi-scale goal conditioning, we additionally utilize an SDE-RNN, which can be represented as: h_t = SDE-RNN(( h_0, 𝒢_T_f^1 )) for 0 < t ≤ T_f^1, and h_t = SDE-RNN(( h_T_f^1, 𝒢_T_f^2 )) for T_f^1 < t ≤ T_F. Finally, an MLP decoder 𝔻 is utilized to decode the future positions Ŷ = {ŷ_t }_t ∈ (0, T_F] from the hidden states {h_t }_t ∈ (0, T_F]. §.§.§ Handling tracklet uncertainty As elaborated in previous sections, the tendencies of tracklet error are unique across datasets. Although the NSDE is known to be robust to data perturbations, it is troublesome to account for each and every uncertainty tendency at once. With this motivation, we adopt the concept of SDE-Net <cit.>. SDE-Net utilizes both in-distribution data (ind) and out-of-distribution data (ood) to train the diffusion network. It accomplishes this by training the diffusion net to assign 0 to ind and 1 to ood in the SDE derivative computation. With this approach, the model accomplishes two things: 1. uncertainty measurement of previously unseen ood data, and 2.
larger weighting of the Brownian motion term dW_t for uncertain samples when solving the SDE. Its objective is the following: min_θ_g E_h_t ∼ P_ind g(h_t, t) + max_θ_g E_h_t ∼ P_ood g(h_t, t). Compared to SDE-Net, our method differs in two aspects. Firstly, we focus on detecting and utilizing trajectory samples with tracklet error within a dataset, rather than addressing cross-dataset distributions. To this end, we define ind as a clean trajectory and ood as a trajectory containing tracklet error. During training, the ego-agent's trajectory is used as clean data, since it is obtained from GPS and localization information and is thus free from tracklet errors originating from occlusion or ID switches. For ood data, Gaussian noise is added to the ego-agent's trajectory. This is based on the fact that most multi-object trackers are based on recursive Bayesian filters, so they produce Gaussian state uncertainty <cit.>. This way, the encoder NSDE is trained to assign a larger weight to the Brownian motion term dW_t for noisy trajectory data, and a smaller weight for clean data. Since the Brownian motion term of an NSDE is known to act as a regularizer <cit.>, our method fosters robustness by intensifying the regularization effect on noisy trajectory samples through diffusion network-driven Brownian motion weighting. Secondly, we address the tracklet error variation across datasets by using separate diffusion networks per dataset. In the NSDE formulation, the drift net aims to achieve high prediction accuracy via system control, while the diffusion net captures aleatoric uncertainty. To leverage multi-source learning for prediction accuracy, we share the drift net across datasets and allocate distinct diffusion networks to each dataset. Ultimately, the derivative of the NSDE encoder and its training objective are formulated as follows: d h_t = f(h_t, t)dt + g(h_t, t)dW_t, where g = g_sr if X ∈X^sr and g = g_tg if X ∈X^tg, with min_θ_g_sr E_h_t ∼ P_ego^sr g(h_t, t) + max_θ_g_sr E_h̃_t ∼P̃_ego^sr g(h̃_t, t) + min_θ_g_tg E_h_t ∼ P_ego^tg g(h_t, t) + max_θ_g_tg E_h̃_t ∼P̃_ego^tg g(h̃_t, t). Here, g_sr and g_tg are the diffusion networks for the source and target datasets, respectively. h_t and h̃_t correspond to hidden states encoded from clean ego-agent trajectories and noise-injected ego-agent trajectories. Figure <ref> illustrates the proposed uncertainty training framework (a code sketch follows below). §.§.§ Training details In addition to the objective of Eq. <ref>, a prediction loss between Ŷ and the GT Y is also used in order to train the overall encoder-decoder structured model. In detail, the prediction loss trains the decoder SDE to learn a continuous transition function between the encoded feature h_0 and the future-timestep hidden states {h_t }_t ∈ (0, T_F], and the encoder SDE to produce a continuous representation of the observed trajectory X, thus enabling the model to handle arbitrary time lengths and steps. Additionally, if the baseline model is a goal-conditioned model, a goal prediction loss between the predicted goal position 𝒢̂ and the GT goal 𝒢 is used. For the implementation of the SDE methods, torchsde <cit.> is used. For the initial latent value h_-T_P^past, we assign a learnable parameter. Please refer to the supplementary materials for additional details on the training procedure and model implementation.
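As a concrete (hypothetical) rendering of the objective above, the min/max terms can be implemented as a binary cross-entropy pushing each dataset's diffusion network toward 0 on clean ego trajectories and toward 1 on their noise-injected copies, in the spirit of SDE-Net. The helper names and the noise scale sigma below are illustrative assumptions, not the paper's exact values.

```python
import torch
import torch.nn.functional as F

def diffusion_objective(encode, g_net, ego_clean, sigma=0.1):
    """One dataset's term of the uncertainty objective: g -> 0 on clean
    ego-agent trajectories (ind) and g -> 1 on Gaussian-noise-injected
    copies (ood).  `encode` maps a past trajectory to its hidden state and
    `g_net` is that dataset's diffusion network with outputs in (0, 1)."""
    ego_noisy = ego_clean + sigma * torch.randn_like(ego_clean)
    g_ind = g_net(encode(ego_clean))   # should shrink toward 0
    g_ood = g_net(encode(ego_noisy))   # should grow toward 1
    return (F.binary_cross_entropy(g_ind, torch.zeros_like(g_ind)) +
            F.binary_cross_entropy(g_ood, torch.ones_like(g_ood)))

# Shared drift/encoder, but one diffusion network per dataset:
# loss_unc = diffusion_objective(encode, g_source, ego_batch_source) + \
#            diffusion_objective(encode, g_target, ego_batch_target)
```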
§ EXPERIMENT §.§ Experiment setup Our goal is to enhance performance on a target dataset via additional training on a large-scale source dataset. To assess whether the prediction model alleviates the limited transferability across datasets, we jointly train with the training sets of both datasets and evaluate on the validation set of the target dataset. In detail, the regression-based prediction model is trained only with the nuScenes train set (N), jointly with the nuScenes + Argoverse train sets (N+A), or jointly with the nuScenes + WOMD train sets (N+W), and is then validated on the nuScenes val set with a widely used metric, mADE_10. To evaluate the model's generalizability on different target datasets, we additionally report training on WOMD. As for the goal-conditioned model, its improvement on the nuScenes and Lyft validation sets is evaluated. Specifically, for nuScenes validation, we compare training only on the nuScenes train set (N) and jointly with the nuScenes + INTERACTION train sets (N+I). We additionally report its improvement on the Lyft validation set by comparing training only on the Lyft train set (L) and jointly with the Lyft + INTERACTION train sets (L+I). To enable discrete baseline models to be applied to data with two different time-step configurations, we have re-arranged both trajectory datasets (see the sketch below). We create empty time series data that can contain both datasets' time steps, then scatter each dataset to its own time steps. For example, with nuScenes (2/6 s, 2 Hz) as target and Argoverse (2/3 s, 10 Hz) as source, we create 81 bins (2/6 s, 10 Hz) of empty data. Then, nuScenes data is scattered at 5-time-step intervals over the overall length, while only the first 50 time steps are filled for Argoverse data.
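The re-arrangement above amounts to scattering each dataset onto a common 10 Hz grid; a minimal sketch follows (the helper name and the NaN placeholder for empty bins are our own choices, not the authors' code):

```python
import numpy as np

def scatter_to_common_grid(traj, src_hz, grid_hz=10, n_bins=81):
    """Place a trajectory sampled at src_hz onto a common 2 s past + 6 s
    future grid at grid_hz (81 bins); unobserved bins stay NaN and can be
    bridged by the continuous-time latent instead of interpolation."""
    grid = np.full((n_bins, traj.shape[-1]), np.nan)
    stride = grid_hz // src_hz          # 5 for 2 Hz data, 1 for 10 Hz data
    idx = np.arange(len(traj)) * stride
    keep = idx < n_bins
    grid[idx[keep]] = traj[keep]
    return grid

# nuScenes (2/6 s @ 2 Hz): 17 samples land on every 5th of the 81 bins.
nusc_grid = scatter_to_common_grid(np.zeros((17, 2)), src_hz=2)
# Argoverse (2/3 s @ 10 Hz): only the first ~50 bins are filled.
argo_grid = scatter_to_common_grid(np.zeros((50, 2)), src_hz=10)
```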
§.§ Datasets and baseline model We use nuScenes (30k) and Lyft (160k) as small-scale target datasets due to their relatively smaller sizes. We utilize the INTERACTION, Argoverse (200k), and WOMD (500k) datasets as large-scale datasets for additional training. To utilize common information among datasets, we use past/future trajectories and lane centerline information only. Additionally, while these datasets contain both vehicle and pedestrian trajectory data, we only train on and evaluate vehicle trajectories for simplicity. To show the effectiveness of our framework, we select HiVT <cit.> and MUSE-VAE <cit.> as the latest regression-based and goal prediction-based trajectory prediction methods, and show that even state-of-the-art methods have room for improvement with the fusion of our proposed SDE framework. § RESULTS §.§ Effectiveness in multi-source training Table <ref> shows the improvements due to multi-source training on the regression-based model. We compare our method with the original discrete models, as well as with ODE-RNN <cit.> and LatentSDE <cit.> adaptations. Compared to training with N, the baseline model's mADE improved by 7.56% for N+A and 9.09% for N+W. This improvement signifies an underfitted result when the model is only trained with nuScenes, a relatively smaller dataset comprising only 30k training samples. However, further performance gains have been limited, since the discrete temporal encoding of vanilla HiVT is incapable of efficiently handling the cross-dataset discrepancy. By adopting ODE-RNN as the temporal encoder/decoder, the use of additional training data brought about much more performance improvement (11.62% for N+A, 13.71% for N+W), thanks to its continuous modeling of the latent transition across time. Although LatentSDE is known to be robust against data perturbations, its performance gain slightly decreases compared to ODE-RNN (9.67% for N+A and 12.64% for N+W). This is because the single diffusion network of LatentSDE failed to address the different types of tracklet error across datasets. Finally, with our proposed method, mADE_10 improves to 0.913 for N+A and 0.893 for N+W. These improvements correspond to 12.55%/14.46% compared to nuScenes-only training (N), and 5.49%/6% compared to vanilla HiVT (0.966 → 0.913 and 0.950 → 0.893), which empirically shows the effectiveness of the proposed methods. Table <ref> shows the effectiveness of our method on the goal-conditioned model. We conduct experiments on two different target validation datasets: N and L. For each case, we compare the performance gain when using the I set as additional training data. The use of our method resulted in a significant improvement in performance gain compared to the vanilla MUSE-VAE model. Specifically, its performance gains on both validation sets are threefold compared to the vanilla MUSE-VAE method, demonstrating the improved transferability of our method across different types of backbone prediction models. In addition, similar to the goal-conditioned model's generalized improvement on two different target sets, the regression-based method also shows improvement in performance on a different target validation set. Table <ref> reports the regression-based method's performance with the W set as the target set. The use of only 5% of the W set is compared to the additional use of the A set, to assume a situation where only a small amount of training data within the target set distribution is available. The use of our method over the baseline HiVT again shows a significant improvement in performance gain. §.§ Effect of target dataset size The previous experiments were conducted with the size of the target dataset as 30k, the size of the nuScenes train set. However, 30k is still a considerably large-scale dataset, and collecting a labeled dataset of an equivalent size could still be considered cumbersome work. Therefore, we have also conducted the same experiments with smaller target dataset sizes to show the effectiveness of our model even when the available target dataset is smaller. By randomly dropping a ratio of the nuScenes train set, we construct target datasets of size 20k, 15k, 10k, 3k, and 0 (no target dataset is used). The Argoverse dataset is used as the source dataset. In the case of the 0 target dataset setting, we share the diffusion net and maintain the uncertainty training objective in Eq. <ref>. In Fig. <ref>, the mADE_10 of the baseline and our method are plotted as a bar graph, and their difference is plotted as a line graph. The effectiveness of the proposed method gradually increases in the range of 30k to 15k, and increases rapidly for smaller target dataset sizes. Such larger improvements on smaller target datasets show that our proposed method effectively promotes transferability across datasets. Indeed, at the extreme with no target dataset for training, the mADE of the baseline diverges over 14, while ours remains reasonable at 1.26. These results show that our proposed NSDE temporal network's advantage of effective cross-dataset discrepancy handling is even more valuable for smaller target datasets. §.§ Effectiveness of continuous representation We compare our NSDE with the baseline HiVT model equipped with other methods for handling dataset-wise unique time step configurations, as reported in Tab. <ref>. The first method is random dropping (RD), where some portion of the time steps is randomly dropped during training. We expect RD to be equivalent to stochastic noise injection, thus improving generalizability.
However, RD shows a minimal improvement of only 0.006, since dropping time steps does not provide any extra time-step data to a discrete temporal network. In that sense, we experiment with converting the source data (1/8 s, 10 Hz) to the target data's time step configuration (2/6 s, 2 Hz) (S → T), or vice versa (T → S), through interpolation and extrapolation. Converting the target dataset to the source dataset configuration severely degrades the prediction performance due to the source data's inaccurate information obtained from extreme extrapolation to unseen future time steps. While converting the source dataset to the target dataset slightly increases accuracy, the improvement remains minimal due to inaccurate extrapolation of the past trajectory. Lastly, we apply a domain-adaptation method, namely a feature-align loss ℒ_align between the source and target datasets following <cit.>, where an MMD loss with an RBF kernel is used as the distance function. While the original paper tackled the unsupervised domain adaptation problem, we provide labels for the target dataset for a fair comparison with the other methods. However, applying ℒ_align hinders the prediction loss from target dataset supervision and has adverse effects on the prediction performance. §.§ Uncertainty handling ability Our NSDE intensifies the SDE's regularization effects by recognizing uncertain samples and assigning them large Brownian motion weighting. Our method relies on the recognition of uncertain samples; therefore, we quantify the recognized uncertainty to assess our method's adequate operation. The details of the uncertainty quantification process are explained in the supplementary materials. For a qualitative review, Fig. <ref> plots uncertain samples as red lines and others in yellow, thresholded at an average standard deviation value of 0.06. In the nuScenes samples (1st row), it is shown that our model can properly recognize uncertain samples due to tracking error, with sudden position changes (left) and meandering motion (right). Samples from other datasets also show competent classification of samples with their dataset-specific uncertainties. More samples can be found in the supplementary material. Here, we analyze whether the diffusion net-based Brownian motion weighting indeed improves the model's robustness against tracklet error. In doing so, we compare the prediction results between our method (green) and the baseline (blue) on ood samples, as in Fig. <ref>. Among the 10 predictions of both models, only the most accurate predictions with respect to the GT (magenta) are plotted. The predictions of our method are consistently more accurate compared to the baseline's predictions. Such improvement is also quantitatively compared in Tab. <ref>, where mADE_10 is compared between the predictions of the baseline and ours on normal samples (ind) and uncertain samples (ood). Notably, our method exhibits larger accuracy gains on ood, demonstrating superior robustness against uncertain samples due to tracklet noise. §.§ Ablation study An ablation on the model architecture appears in the upper part of Tab. <ref>, with HiVT as the baseline. First, we model the past feature at time 0 (h_0^past) as a Gaussian latent following general NDE methods. After obtaining the mean and variance from h_0^past as in a VAE, we sample the past feature F times (F is the number of prediction samples, here set to 10). This approach severely worsens the performance, since non-probabilistic sampling is much better for multi-modal trajectory prediction <cit.>. Second, we adjust the number of layers of the drift and diffusion networks of the encoder NSDE and decoder NSDE.
Their number of layers is originally set to 4, and we reduce it to two. Comparing the results between the encoder and decoder, the decline in model capacity resulted in a larger performance drop for the encoder. We believe such a discrepancy comes from the higher complexity of the encoder's task, as the encoder needs to translate past features while also considering incoming data at certain timesteps via the GRU. The lower part of Tab. <ref> shows an ablation on the uncertainty training. Our model is comprised of a shared drift network along with separate diffusion networks, and we ablate each component. Although we lose multi-source training for temporal encoding when separating the drift network, the performance drop is relatively small, since the other components of the model are still shared. In the case of sharing the diffusion network, however, the performance drop is larger, since a single diffusion network is insufficient to handle disparate types of noise. Indeed, the noise in Argoverse is more severe compared to nuScenes, as shown in the second row of Fig. <ref>, so the diffusion network is dominated by the Argoverse data distribution and results in incorrect Brownian noise injection. In addition, we remove the uncertainty training objective in Eq. <ref>, which also results in a considerable performance drop. The above results consistently reveal that uncertainty handling plays a crucial role in cross-domain trajectory prediction. § CONCLUSION In this paper, we introduce a novel approach to addressing the challenges posed by discrepancies among trajectory datasets. By leveraging continuous and stochastic representations within an NSDE, the proposed method tackles two key issues: varying time step configurations and different patterns of detection/tracking noise across datasets. The continuous representation effectively handles diverse time intervals, enabling seamless adaptation to different dataset structures, while the stochastic aspect accommodates the inherent uncertainties arising from tracklet errors. Through experimentation on nuScenes, Argoverse, Lyft, INTERACTION, and WOMD, our NSDE consistently improved upon both regression-based and goal prediction-based state-of-the-art methods. We not only highlight the importance of dataset-specific considerations in trajectory prediction but also introduce a practical solution that bridges the gap between diverse data sources. These contributions underscore the methodology's potential for advancing the reliability and safety of autonomous mobility systems, offering a promising avenue for further research and development in the field. § ACKNOWLEDGMENTS This work was partially supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2014-3-00123, Development of High Performance Visual BigData Discovery Platform for Large-Scale Realtime Data Analysis), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2022R1A2B5B03002636).
http://arxiv.org/abs/2312.15906v1
{ "authors": [ "Daehee Park", "Jaewoo Jeong", "Kuk-Jin Yoon" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226065029", "title": "Improving Transferability for Cross-domain Trajectory Prediction via Neural Stochastic Differential Equation" }
The Fast and Furious in JWST high-z galaxies
Maurice H.P.M. van Putten ([email protected], ORCID 0000-0002-0786-7307)
Physics and Astronomy, Sejong University, 209 Neungdong-ro, Seoul, South Korea
INAF-OAS Bologna, via P. Gobetti, 101, I-40129 Bologna, Italy
====================================================================================
Recent JWST surveys reveal a striking abundance of massive galaxies at cosmic dawn, earlier than predicted by ΛCDM. The implied speed-up in galaxy formation by gravitational collapse is reminiscent of short-period galaxy dynamics described by the baryonic Tully-Fisher relation. This may originate in weak gravitation tracking the de Sitter scale of acceleration a_dS=cH, where c is the velocity of light and H(z)∝(1+z)^3/2 is the Hubble parameter with redshift z. With no free parameters, this produces a speed-up in early galaxy formation by an order of magnitude with essentially no change in the initial galaxy mass function. It predicts a deceleration parameter q_0=1-( 2π/GAa_dS)^2 =-0.98± 0.5, where G is Newton's constant and A=(47±6)M_⊙ (km/s)^-4 is the baryonic Tully-Fisher coefficient (McGaugh 2012). At 3σ significance, it identifies dynamical dark energy alleviating H_0-tension when combined with independent q_0 estimates in the Local Distance Ladder. Conclusive determination of q_0=dlog(θ(z)H(z))/dz|_z=0 is expected from BAO angle θ(z) observations by the recently launched Euclid mission. § INTRODUCTION Recent JWST surveys <cit.> reveal an abundance of massive galaxies in the Early Universe, extending previous HST high-redshift galaxy surveys <cit.>. This early galaxy formation, faster than expected, poses a radically new challenge to ΛCDM <cit.>, in addition to the H_0-tension problem between the Local Distance Ladder and the Planck ΛCDM analysis of the Cosmic Microwave Background (CMB) seen in the recent history of cosmological expansion <cit.>. Galaxies form by gravitational collapse of relic density perturbations at the time of the surface of last scattering <cit.>, marking large-scale structure at the angular scale of Baryon Acoustic Oscillations (BAO) in the CMB <cit.>. Early galaxy formation by fast collapse is reminiscent of short-period orbital motion in spiral galaxies described by the Tully-Fisher luminosity-velocity relation <cit.>. Alternative interpretations of the latter by dark matter <cit.> and non-Newtonian physics <cit.> appear viable, but breaking the degeneracy between the two is challenging <cit.>. A common origin of early galaxy formation and the baryonic Tully-Fisher relation points to non-Newtonian dynamics in weak gravitation, even when applied to dark and baryonic matter combined. This appears inevitable in the face of the essentially matter-dominated evolution of the early Universe satisfying Ω_M≃ 1, defined by the ratio of total matter density to the closure density ρ_c=3H^2/8π G with Hubble parameter H and Newton's constant G. Generally, galaxy dynamics is mostly in weak gravitation below the de Sitter scale of acceleration a_dS=cH, defined by the Hubble parameter H and the velocity of light c. Weak gravitation beyond ΛCDM may be tracking a_dS, with observational consequences by redshift dependence <cit.>. First, it predicts a non-smooth transition to non-Newtonian acceleration. Circular orbits in spiral galaxies can be described by the ratio a_N/α of the expected Newtonian acceleration a_N to the observed centripetal acceleration α as a function of ζ = a_N/a_dS. These have a C^0-transition at ζ = 1, observed in a 6σ gap between data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) and ΛCDM galaxy models <cit.>.
This C^0-transition can be attributed to a change in the binding energy in the gravitational field of centripetal acceleration α by the equivalence principle. In the Newtonian limit, the inertia of this binding energy equals a particle's rest mass. This is cut short when accelerations drop below a_dS (ζ drops below unity), as the Rindler horizon at ξ = c^2/α extends beyond the Hubble radius R_H=c/H <cit.>. This cut gives a C^0-transition at ζ=1 with corresponding radius r_t=√(R_HR_G)≃ 4.7M_11^1/2∝ (1+z)^-3/4 in galaxies of mass M=10^11M_11M_⊙ with gravitational radius R_G=GM/c^2. Accordingly, high-z galaxies are increasingly non-Newtonian. The asymptotic regime ζ≪1 of dynamics in the disks of spiral galaxies satisfies the baryonic Tully-Fisher relation between the total mass M_b in gas and stars and the rotation velocity V_c, satisfying <cit.> M_b = A V^4_c with A≃(47± 6)M_⊙ (km/s)^-4. In the absence of dark matter, (<ref>) is described by centripetal accelerations α=V_c^2/r at radius r, which can be modeled in terms of a logarithmic potential. Specifically, (<ref>) is equivalent to Milgrom's law α=√(a_0a_N) for short-period orbital motion beyond the Newtonian orbits at a_N=GM_b/r^2, parameterized by a_0=1/GA ≃ (1.6± 0.2)× 10^-10 m s^-2 <cit.>. However, redshift dependence in A(z) is inconclusive. High-resolution observations are limited to galaxies at essentially zero redshift <cit.>, and recent observations out to moderate redshifts <cit.> measure rotation curves limited to moderate radii, intermediate between the asymptotic 1/r acceleration in Milgrom's law and the Newtonian 1/r^2 acceleration <cit.>. Second, weak gravitation at r≫ r_t tracking a_dS in (<ref>) predicts redshift dependence in the Milgrom parameter (Fig. <ref>) a_0(z) = (√(1-q(z))/2π) a_dS(z) ∝ (1+z)^3/2 (z≫1), where q(z)=-1+(1+z)H^-1H'(z) is the deceleration parameter in a three-flat Friedmann universe. Here, we show that (<ref>) produces fast galaxy formation at cosmic dawn, explaining the JWST observations with no free parameters. Matter-dominated cosmological evolution covers an extended epoch down to intermediate redshifts before the onset of the presently observed accelerated expansion <cit.>. This epoch includes primeval galaxy and structure formation since the surface of last scattering (z≃ 1100), when cosmological evolution is described by a Friedmann scale factor a(t)∝ t^2/3, a=1/(1+z), as a function of cosmic time t. In §2, we discuss fast gravitational collapse in weak gravitation defined by a_dS. The consequences for accelerated galaxy formation are quantified in §3. In §4, we summarize our findings with a new estimate of q_0 in tension with ΛCDM. § FAST GRAVITATIONAL COLLAPSE Critical to galaxy formation are the time scale of gravitational collapse and the time scale of formation of the first stars and dynamical relaxation. Expansion in the early universe might encompass more time as a function of redshift <cit.>. This can be modeled by a stretched time τ. For instance, a power law a∝τ^n gives additional time over ΛCDM by a factor τ(z)/t(z)=(1+z)^(3n-2)/(2n) = √(1+z) (n=1), where the right-hand side exemplifies linear expansion (n=1) <cit.>. For the ultra-high-redshift JWST galaxies, √(1+z)∼4 appears to be sufficient to satisfy observational constraints on the initial formation rate of the first stars and galaxies <cit.>. However, precision cosmology at low redshifts, including the ages of the oldest stars in globular clusters, poses stringent constraints on alternative cosmological models.
In particular, a cosmology a(t)∝ t has a deceleration parameter q_0=0, which is securely ruled out by data of the Local Distance Ladder and the Planck ΛCDM analysis of the CMB <cit.>. Nevertheless, (<ref>) for n=1 sets a scale for the required speed-up in galaxy formation soon after the Big Bang to accommodate the recent JWST observations. A gain similar to (<ref>) obtains equivalently from dynamical time scales that are sufficiently short, i.e., gravitational collapse times faster than expected from ΛCDM. Gravitational collapse takes place on a free-fall time scale. Representative analytic expressions are obtained in the two-body problem of radial motion of a test particle toward a central mass M at initial separation R_0 with zero initial momentum. For Newton's force law 1/r^2 and the 1/r law in the baryonic Tully-Fisher relation (<ref>), the respective free-fall time scales are t_ff^(2) = π2^-3/2( R_0 / a_N,0)^1/2∝ R_0^3/2 and τ_ff^(2) = √(π)2^-1/4( R_0/√(a_0a_N,0))^1/2∝ R_0. Distinct from Newton's theory, the second is not scale-free by its coupling to a_dS, leading to a scaling with initial separation different from Newton's theory, which points to relatively short free-fall time scales. While attributed to reduced inertia, this outcome is equivalently modeled by constant (Newtonian) inertia in a logarithmic potential. The ratio of the free-fall times hereby satisfies τ_ff^(2) = (2^5/4/√(π))(a_N,0/a_0)^1/4 t_ff^(2), explicitly showing faster gravitational collapse in the regime of weak gravitation, when a_N,0≪ a_dS. § ACCELERATED GALAXY FORMATION We reinterpret (<ref>) in terms of fast gravitational collapse by (<ref>). While, by Gauss' law, the Newtonian result in (<ref>) permits a direct generalization a_N,0=GNm/R_0^2 for an (initially spherically symmetric) system of N particles of mass m, the same does not apply to the logarithmic potential. Hence, the time of gravitational collapse (defined by the time to first bounce prior to virialization) is evaluated numerically. In the large-N limit, N-body simulations of initially cold clusters show (Fig. <ref>) the relations t_c = 1.64 t_ff and τ_c=1.30 τ_ff for the Newtonian and, respectively, the logarithmic potential in (<ref>), the latter used to model reduced inertia in weak gravity below a_dS. Here, t_ff= π< R^3/2_0>/(2√(2 N)) is the free-fall time scale of an N-body cluster with mean < R^3/2_0> of the initial particle radial positions R_0 (and τ_ff its counterpart for the logarithmic potential). Importantly, t_ff expresses scaling with N^-1/2, inferred from the exact two-body free-fall time t_ff^(2) = (π/2)( R_0/(2a_N,0))^1/2 with a_N,0=GM/R_0^2 the initial Newtonian acceleration of a test particle by gravitational attraction at initial separation R_0 to a mass M. This scaling with N^-1/2 for both potentials can be attributed to the tight correlation with virialization time, the latter dominated by diffusion. On a cosmological background, the N-body scaling relations (Fig. <ref>) and (<ref>) show a speed-up in gravitational collapse B=t_c/τ_c = 0.94 (a_0/a_N,0)^1/4=1.62ζ^-1/4, where ζ = a_N/a_dS <cit.> (a numerical check follows below). Here, q=1/2 holds in the matter-dominated era at cosmic dawn.
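A quick numerical check of the relations above (arbitrary units; the ratio a_0/a_N,0 is the only input) confirms that the prefactor 0.94 follows from (1.64/1.30) × √(π)/2^5/4 and shows the order-of-magnitude speed-up once a_N,0 ≪ a_0:

```python
import numpy as np

def t_ff_newton(R0, aN0):
    # Two-body free-fall time for Newton's 1/r^2 law: pi 2^{-3/2} (R0/aN0)^{1/2}
    return np.pi * 2**-1.5 * np.sqrt(R0 / aN0)

def tau_ff_log(R0, aN0, a0):
    # Weak-gravitation (logarithmic potential) counterpart, scaling as R0
    return np.sqrt(np.pi) * 2**-0.25 * np.sqrt(R0 / np.sqrt(a0 * aN0))

prefactor = (1.64 / 1.30) * np.sqrt(np.pi) / 2**1.25
print(f"N-body prefactor: {prefactor:.3f}")          # ~0.94, as printed above

R0 = 1.0
for ratio in [1e2, 1e4, 1e6]:                        # a0 / a_{N,0}
    aN0, a0 = 1.0, ratio
    B = prefactor * ratio**0.25                      # speed-up B = t_c / tau_c
    print(f"a0/aN0 = {ratio:.0e}:  tau_ff/t_ff = "
          f"{tau_ff_log(R0, aN0, a0) / t_ff_newton(R0, aN0):.4f},  B = {B:.1f}")
```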
Specifically, (<ref>) applies to the scale l of progenitor mass taken from a perturbation in the background closure density ρ_c feeding the formation of an M=10^9M_9M_⊙ galaxy, i.e.: l≃(2M/(ρ_c,0√(ηΩ_M,0)))^1/3 (1+z)^-1 ≃ (l_0/(1+z)) M_9^1/3, where l_0 ≃ 150. By a_N=GM/l^2≃ (4π/3)ρ_cl=(1/2)H^2l, ζ = a_N/a_dS=(1/2)β, with β = Hl/c. The speed-up in gravitational collapse (<ref>) hereby satisfies B ≃ 25 M_9^-1/12(1+z)^1/8, where β_0 = l_0H_0/c≃ 3.6× 10^-5 denotes the local Hubble flow across l_0. § CONCLUSIONS In weak gravitation coupled to a_dS, (<ref>) shows collapse times to be considerably shorter than the Newtonian time scales of ΛCDM. Here, this is parameterized with no free parameters by (<ref>). This predicts weak gravitation to govern most of the gravitational interactions in the early galaxy formation identified by JWST. Speed-up (<ref>) in gravitational collapse is over an order of magnitude, sufficient to account for the JWST observations. With no free parameters, we derive (<ref>) from a natural unification of the baryonic Tully-Fisher relation in late-time cosmology and fast galaxy formation at cosmic dawn. Crucially, (<ref>) is essentially achromatic, given the remarkably small power-law index 1/12 in the dependence on galaxy mass, which leaves the galaxy mass distributions of ΛCDM effectively unchanged. The origin of (<ref>) is found in accelerated dynamics beyond the C^0-transition (<ref>) upon identifying inertia with binding energy in the gravitational field of the accelerating particle (by the equivalence principle). On a cosmological background with a finite Hubble radius R_H, weak gravitation hereby has a finite sensitivity to a_dS. Expressed by (<ref>), it applies to the asymptotic regime α≪ a_dS, when binding energy (whence inertia) is cut short by the Hubble horizon <cit.>. For the first time, (<ref>) is confronted at high redshift by JWST observations. The resulting speed-up in galaxy formation (<ref>) takes us closer to cosmic dawn, beyond what can be explained by ΛCDM galaxy models, in a unification with the baryonic Tully-Fisher relation (<ref>). This confrontation gives clear evidence of galaxy dynamics tracking a_dS <cit.>. The large speed-up (<ref>) points to the existence of galaxies beyond those currently observed by JWST. These may be detected in upcoming JWST surveys or by ultra-high-redshift gamma-ray bursts with the planned Transient High-Energy Sky and Early Universe Surveyor (THESEUS) mission <cit.>. In the more recent epoch of cosmic expansion, (<ref>) expresses sensitivity to the deceleration parameter q(z) as it drops from the matter-dominated value (q≃ 1/2) to negative values, signifying accelerated expansion during the present dark energy-dominated epoch. Fundamental to the latter is the combination (H,q), parameterized by the present-day values (H_0,q_0) of the Hubble constant H_0 and the deceleration constant q_0. Inverting (<ref>) gives q_0 = 1- (2π/(GA a_dS))^2 = -0.98^+0.60_-0.42, given the baryonic Tully-Fisher coefficient in (<ref>) and H_0=73.3 km s^-1 Mpc^-1 (a numerical evaluation follows below). The estimate (<ref>) is consistent with q_0=-1.08± 0.29 derived from the Local Distance Ladder <cit.>. An even combination of these two independent measurements gives q_0=-1.03± 0.17, distinct from the Planck ΛCDM value q_0≃ -0.5275 <cit.>. At 3σ significance, this evidences a dynamical dark energy alleviating H_0-tension <cit.>.
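The quoted central value is readily verified; a minimal evaluation of q_0 = 1 - (2π/(GA a_dS))^2 in SI units (constants rounded, errors not propagated) gives:

```python
import numpy as np

G    = 6.674e-11           # m^3 kg^-1 s^-2
c    = 2.998e8             # m s^-1
Msun = 1.989e30            # kg
Mpc  = 3.086e22            # m

A    = 47 * Msun / (1.0e3)**4     # Tully-Fisher coefficient in kg (m/s)^-4
H0   = 73.3 * 1.0e3 / Mpc         # Hubble constant in s^-1
a_dS = c * H0                     # de Sitter acceleration scale, m s^-2

a0 = 1.0 / (G * A)                # Milgrom parameter a_0 = 1/(GA)
q0 = 1.0 - (2.0 * np.pi / (G * A * a_dS))**2

print(f"a_dS = {a_dS:.3e} m s^-2")
print(f"a0   = {a0:.2e} m s^-2")  # ~1.6e-10, matching the quoted value
print(f"q0   = {q0:+.2f}")        # ~ -1.0, consistent with -0.98 +0.60/-0.42
```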
A decisive result on q_0 - and hence the nature of dark energy - is expected from a survey of the recent expansion history of the Universe by the recently launched Euclid mission <cit.>. Specifically, Euclid's planned measurements of the BAO angle θ(z) and H(z) will provide the radically new measurement q(z) = (1+z)d/dzlog(θ(z)H(z)). The expected Euclid survey of O(10^9) galaxies out to a redshift of a few should rigorously distinguish dynamical dark energy (q_0≃ -1, H'(0)≃0) from ΛCDM (q_0≃ -0.5, H'(0)≃ H_0) and reveal the physical nature of low-energy quantum cosmology and the problem of stability of de Sitter space <cit.>. § ACKNOWLEDGMENTS We thank M.A. Abchouyeh for stimulating discussions.
[Abchouyeh & van Putten(2021)]abc21 Abchouyeh, M. A., & van Putten, M. H. P. M. 2021, PhRvD, 104, 083511
[Aghanim et al.(2020)]agh20 Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6
[Amati et al.(2018)]ama18 Amati, L., O'Brien, P., Götz, D., et al. 2018, AdSpR, 62, 191
[Amati et al.(2021)]ama21 Amati, L., O'Brien, P. T., Götz, D., et al. 2021, ExA, 52, 183
[Austin et al.(2023)]aus23 Austin, D., Adams, N., Conselice, C. J., et al. 2023, ApJL, 952, L7
[Boylan-Kolchin(2023)]boy23 Boylan-Kolchin, M. 2023, Nature Astronomy, 7, 731
[Camarena & Marra(2020)]cam20 Camarena, D., & Marra, V. 2020, Phys. Rev. Research, 2, 013028
[Coe et al.(2013)]coe13 Coe, D., Zitrin, A., Carrasco, M., et al. 2013, ApJ, 762, 32
[Oesch et al.(2016)]oes16 Oesch, P. A., Brammer, G., van Dokkum, P. G., et al. 2016, ApJ, 819, 129
[de Swart et al.(2017)]des17 de Swart, J. G., Bertone, G., & van Dongen, J. 2017, Nature Astronomy, 1, 0059
[Eggen et al.(1962)]egg62 Eggen, O. J., Lynden-Bell, D., & Sandage, A. R. 1962
[Eisenstein et al.(2005)]eis05 Eisenstein, D. J., Zehavi, I., Hogg, D. W., et al. 2005, ApJ, 633, 560
[Eisenstein et al.(2023)]eis23 Eisenstein, D. J., Willott, C., Alberts, S., et al. 2023, arXiv e-prints, arXiv:2306.02465
[Euclid(2023)]euc23 Euclid Mission (ESA), 2023, https://www.esa.int/Science_-Exploration/Space_-Science/Euclid
[Famaey & McGaugh(2012)]fam12 Famaey, B., & McGaugh, S. S. 2012, Living Reviews in Relativity, 15, 10
[Genzel et al.(2017)]gen17 Genzel, R., Förster Schreiber, N. M., Übler, H., et al. 2017, Nature, 543, 397
[Gott(1977)]got77 Gott, J. R., I. 1977, ARA&A, 15, 235
[Gott & Rees(1975)]got75 Gott, J. R., I., & Rees, M. J. 1975, A&A, 45, 365
[Gunn & Gott(1972)]gun72 Gunn, J. E., & Gott, J. Richard, I. 1972, ApJ, 176
[Gupta(2023)]gup23 Gupta, R. P. 2023, MNRAS, 524, 3385
[McGaugh(2012)]mcg12 McGaugh, S. S. 2012, AJ, 143, 40
[Melia(2023)]mel23 Melia, F. 2023, MNRAS, 521, L85
[Milgrom(1983)]mil83 Milgrom, M. 1983, ApJ, 270, 365
[Lelli et al.(2019)]lel19 Lelli, F., McGaugh, S. S., Schombert, J. M., Desmond, H., & Katz, H. 2019, MNRAS, 484, 3267
[Ó Colgáin et al.(2019)]oco19 Ó Colgáin, É., van Putten, M.H.P.M., & Yavartanoo, H., 2019, Phys. Lett. B,
[Padmanabhan & Loeb(2023)]pad23 Padmanabhan, H., & Loeb, A. 2023, arXiv:2306.04684
[Perlmutter et al.(1999)]per99 Perlmutter, S., Aldering, G., Goldhaber, G., et al. 1999, ApJ, 517, 565
[Tully & Fisher(1977)]tul77 Tully, R. B., & Fisher, J. R. 1977, A&A, 54, 661
[Wechsler & Tinker(2018)]wec18 Wechsler, R. H., & Tinker, J. L. 2018, ARA&A, 56, 435
[Riess et al.(1998)]rie98 Riess, A. G., Filippenko, A. V., Challis, P., et al., 1998, AJ, 116, 1009
[Riess et al.(2022)]rie22 Riess, A. G., Yuan, W., Macri, L. M., et al. 2022, ApJL, 934, L7
[van Putten(2017)]van17 van Putten, M. H. P. M. 2017, ApJ, 848, 28
[van Putten(2018)]van18 van Putten, M.H.P.M., 2018, MNRAS, 481, L26
[van Putten(2021)]van21 van Putten, M.H.P.M., 2021, Physics Letters B, 823, 136737
[van Putten(2018)]n1 van Putten, M.H.P.M., 2018, in ICGAC-XIII and IK-15 on Grav., Astroph. and Cosmology, EPJ Web Conf. 168, 08005; Deriving a_0 from galaxy rotation curves requires care in the extrapolation of a finite range of values x=a_N/a_0, typically down to x=a_N/a_0= O(10^-2). This may incur systematic errors depending on interpolation between ζ≪1 and ζ≥ 1. Conventional interpolation f(x)=x/(1+x) does not take into account the C^0-transition at ζ=1, producing an underestimate of 20% compared to a more detailed theoretical fit. Corrected for this transition, a_0≃ 1.6× 10^-8 cm s^-2, consistent with A in the baryonic Tully-Fisher relation (<ref>)
[van Putten(2020)]van20 van Putten M.H.P.M., 2020, MNRAS, 491, L6
http://arxiv.org/abs/2312.16692v1
{ "authors": [ "Maurice H. P. M. van Putten" ], "categories": [ "astro-ph.CO", "astro-ph.GA" ], "primary_category": "astro-ph.CO", "published": "20231227190859", "title": "The Fast and Furious in JWST high-$z$ galaxies" }
Morse index of steady-states to the SKT model with Dirichlet boundary conditions
Kousuke Kuto (Department of Applied Mathematics, Waseda University, 3-4-1 Ohkubo, Shinjuku-ku, Tokyo 169-8555, Japan)
Homare Sato (Department of Pure and Applied Mathematics, Graduate School of Fundamental Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku-ku, Tokyo 169-8555, Japan)
This research was partially supported by JSPS KAKENHI Grant Number 22K03379.
This paper deals with the stability analysis of steady-states perturbed from the full cross-diffusion limit of the SKT model with Dirichlet boundary conditions. Our previous result showed that the positive steady-states consist of a branch of small-coexistence type bifurcating from the trivial solution and branches of segregation type bifurcating from points on the branch of small-coexistence type. This paper shows the Morse index of the steady-states on these branches and constructs the local unstable manifold around each steady-state, the dimension of which is equal to the Morse index.
2020 Mathematics Subject Classification: 35B09, 35B32, 35B45, 35A16, 35J25, 92D25
January 14, 2024
====================
§ INTRODUCTION In this paper, we consider the following Lotka-Volterra competition model with equal cross-diffusion terms: u_t=Δ [ (1+α v)u ]+ u(λ -b_1u-c_1v) in Ω× (0,T), v_t =Δ [ (1+α u)v ]+ v(λ -b_2u-c_2v) in Ω× (0,T), u=v=0 on ∂Ω× (0,T), u( · ,0)=u_0≥ 0, v( · ,0)=v_0≥ 0 in Ω, where Ω (⊂ℝ^N) is a bounded domain with a smooth boundary ∂Ω if N≥ 2, and a bounded interval (-ℓ,ℓ) if N=1. In 1979, for the purpose of realizing segregation phenomena of two competing species by reaction-diffusion equations, Shigesada, Kawasaki and Teramoto <cit.> proposed a population model consisting of the Lotka-Volterra competition system with random-, self- and cross-diffusion terms. Since this pioneering work, the class of Lotka-Volterra systems with cross-diffusion like (<ref>) has been called the SKT model, celebrating the authors of <cit.>. In (<ref>), the unknown functions u(x,t) and v(x,t) represent the population densities of two competing species, respectively; λ, b_i, c_i (i=1,2) and α are positive constants, where λ can be interpreted as the amount of resources for both species; (b_1, c_2) and (c_1, b_2) are the coefficients of intra- and inter-specific competition, respectively. The cross-diffusion term αΔ (uv) represents an inter-species repulsive interaction of diffusion and describes a situation where each species promotes its own diffusion more where there are more of the other species. We refer to the book by Okubo and Levin <cit.> for the bio-mechanism of diffusion terms such as the cross-diffusion. Concerning the solvability of a class of quasilinear parabolic systems including (<ref>), in a series of works <cit.>, Amann established the following time-local well-posedness in the Sobolev space: Assume u_0, v_0∈ W^1,p_0(Ω ) with p∈ (N, ∞). Then (<ref>) has a unique solution (u,v) satisfying u, v∈ C([0,T_m); W^1,p_0(Ω )) ∩ C^2,1(Ω× (0,T_m)), where T_m is the maximal existence time. Moreover, if T_m<∞, then lim_t↗ T_m( ‖u( · ,t)‖_W^1,p_0(Ω) + ‖v( · ,t)‖_W^1,p_0(Ω) )=∞.
Concerning the time-global existence of all the solutions obtained in Theorem <ref>, Kim <cit.> showed T_m=∞ inthe one-dimensional case where N=1, and in the sequel, Lou and Winkler <cit.> assured T_m=∞ when N=2, 3 with the additional condition that Ω is convex.It should be noted that proofs in <cit.> and <cit.> assumed homogeneous Neumann boundary conditions, but some modifications in their proofs assure T_m=∞ also under homogeneous Dirichletboundary conditions as (<ref>). See also <cit.>.Next we introduce the bifurcation structure for steady-states of (<ref>) obtained by Inoue and the authors <cit.>.The associated stationary problem of (<ref>) is reduced to the following Dirichlet problem of nonlinear elliptic equationsΔ [ (1+α v)u ]+ u(λ -b_1u-c_1v)=0 Ω, Δ [ (1+α u)v ]+ v(λ -b_2u-c_2v)=0 Ω,u≥ 0,v≥ 0 Ω,u=v=0∂Ω.It is possible to verify that any weak solution (u,v) with u≢0 and v≢0belongs to C^2(Ω)^2 and satisfies u>0 and v>0 in Ω by virtue of the elliptic regularity theory and the maximum principle (e.g., <cit.>). Throughout the paper, such a solution (u,v) of (<ref>) will be called a positive solution.In order to express the bifurcation structure, we introduce the following eigenvalue problem:-Δ=λ Ω, =0 ∂Ω.Hereafter, all eigenvalues of (<ref>) will be denoted by(0<)λ_1<λ_2≤λ_3≤⋯≤λ_j≤⋯lim_j→∞λ_j=∞,counting multiplicity. It is known from <cit.> that, if α>0 is sufficiently large, then λ_1 yields a threshold for the nonexistence/existence of positive solutions in the sense that (<ref>) has no positive solution when0<λ≤λ_1; at least one positive solution when λ>λ_1. The usual function space W^2,p(Ω )∩ W^1,p_0(Ω ) will be often used in the later argument. Then we denoteX_p:=W^2,p(Ω )∩ W^1,p_0(Ω),X_p:=X_p× X_p.In <cit.>,the asymptotic behavior of positive solutions of (<ref>) at the full cross-diffusion as α→∞ was studied. (As a similar perspectiveon the Neumann problem for the stationary SKT model,we refer to <cit.> and references therein for the unilateral cross-diffusion limit, and to <cit.> forthe full cross-diffusion limit.) It was shown in <cit.> that, if λ>λ_1 andλ≠λ_j for any j≥ 2, then any sequence {(u_n, v_n)} ofpositive solutions of (<ref>) with α=α_n→∞ satisfies either of the following two scenarios:(I) (small coexistence) there exists a positive function U∈ C^2(Ω) such thatlim_n→∞ (α_nu_n, α_nv_n)= (U,U)X_pfor any p∈ (N,∞),passing to a subsequence if necessary, and moreover, U satisfies the following limiting equation:Δ [ (1+U)U ]+λ U=0Ω,U=0∂Ω;(II) (complete segregation) there exists a sign-changing function w∈ C^2(Ω)such thatlim_n→∞ (u_n, v_n)= (w_+, w_-)Ω,passing to a subsequence if necessary, where w_+(x):=max{w(x),0} and w_-(x):=max{0,-w(x)}, and moreover, w satisfies the following limiting equationΔ w+λ w-b_1(w_+)^2+c_2(w_-)^2=0 Ω,w=0∂Ω. It was also shown that the set of all the positive solutions of (<ref>) form a curve𝒞_∞= {(λ, U( · ,λ))∈ (λ_1,∞) × X_p}with lim_λ↘λ_1U( · , λ )=0 in X_p, see Figures 1 and 2. In the one-dimensional case where Ω=(-ℓ, ℓ ), it is known that the set 𝒮^+_j,∞ (resp. 𝒮^-_j,∞) of solutions of (<ref>) with exact j-1 zeros in Ω and w'(-ℓ )>0 (resp. w'(-ℓ )<0) forms a curve𝒮^+_j,∞={ (λ, w( · , λ ))∈ (λ_j, ∞)× X_p}( 𝒮^-_j,∞={ (λ, w( · , λ ))∈ (λ_j, ∞)× X_p}).Here𝒮^+_j,∞∪{(λ_j,0)}∪𝒮^-_j,∞ forms a pitchfork bifurcation curve bifurcating from the trivial solution at λ=λ_j (e.g., <cit.>).In <cit.>, a subset of positive solutions of (<ref>) was constructed by the perturbation of 𝒞_∞, 𝒮^±_j, ∞ when α>0 is sufficiently large. 
More precisely,for any given large >0, there exists a large α=α()>0 such that, if α>α, then there exists a bifurcation curve𝒞_α, ={ (λ, u( · ,λ, α ), v( · ,λ, α )) ∈ (λ_1, ]×X_p}with lim_λ↘λ_1 (u( · ,λ, α ), v( · ,λ, α ))= (0,0) in X_p andlim_α→∞α (u( · ,λ, α ), v( · ,λ, α )) =(U( · ,λ), U( · ,λ))X_pfor each λ∈ (λ_1,]. Then it can be said that 𝒞_α, is the perturbation (with scaling) of 𝒞_∞over the range λ∈ (λ_1, ]. It was also proved that 𝒞_α, can be extended in the direction λ→∞ as a connectedsubset of positive solutions of (<ref>).Furthermore, it was shown in <cit.> that the subsets of positive solutions of (<ref>) with Ω=(-ℓ, ℓ) can be constructed by perturbation of 𝒮^±_j, ∞ if α>0 is sufficiently large. To be precise, for each λ_j∈ (λ_1,), there exists α_j>0 and β_j,α∈ (λ_1,)such that, if α>α_j, then bifurcation curves of positive solutions of (<ref>) can be parameterized by one variable s as follows𝒮^+_j, α, = { (λ, u,v)=( ξ_j(s, α), u_j( · ,s,α ),v_j( · ,s, α )) : 0<s≤ T^α, +_j}and𝒮^-_j, α, = { (λ, u,v)=( ξ_j(s, α), u_j( · ,s,α ),v_j( · ,s, α )) : -T^α, -_j≤ s<0}with lim_s→ 0 (ξ_j(s, α), u_j( · ,s,α ), v_j( · ,s, α ))= (β_j,α, u( · , β_j,α, α ), v( · , β_j,α, α )) ∈𝒞_α, in ℝ×X_p andlim_α→∞ (u_j( · ,s,α ), v_j( · ,s, α )) =(w_+, w_-) Ω,where w∈𝒮^+_j, α, λ=ξ_j(s, α) s>0, 𝒮^-_j, α, λ=ξ_j(s, α) s<0.Then it follows that 𝒮^-_j, α, ∪{(β_j,α, u( · , β_j,α, α ), v( · , β_j,α, α )) }∪𝒮^+_j, α, forms a pitchfork bifurcation curve bifurcating from the solution(β_j,α, u( · , β_j,α, α ), v( · , β_j,α, α )) on the branch 𝒞_α,, see Figures 1 and 3.This paper focuses on the stability analysis for steady-states of (<ref>) in the case where α is sufficiently large. It will be shown that each positive solution on 𝒞_α, is linearly unstable, whereas semi-trivial solutions, such that one of u and v is positive and the other is zero in Ω, are linearly stable. In the one-dimensional case, we show that if λ∈ (λ_j, λ_j+1) and α>0 is sufficiently large, then the Morse index of(λ, u( · , λ, α ),v( · , λ, α ))∈𝒞_α, is equal to j, whereas the Morse index of (λ, u_j( · ,s,α ), v_j( · ,s,α )) ∈𝒮^±_j, α, is equal to j-1. These results invoking the dynamical theory for a class ofquasilinear parabolic equations including (<ref>) (e.g. <cit.>) enable us to construct the local unstable manifold of each positive steady-state whose dimension is equal to the Morse index.To summarize our results from the ecological view-point, three bifurcation branches(consisting of semi-trivial solutions whose v component vanishes; semi-trivial solutions whose u component vanishes; positive solutions of small coexistence type) bifurcatefrom the trivial solution at λ=λ_1. Among three branches, two branches of semi-trivial solutions are linearly stable andthe branch 𝒞_α, of positive solutions of small coexistence type is linearly unstable. Following the branch 𝒞_α, in the direction of increasing λ,the dimension of the local unstable manifold is equal to 1 when λ-λ_1>0 is small, and then,the dimension of the local unstable manifold increases by 1 passing each bifurcation point from which the branch of positive solutions of segregation type bifurcate. Each positive solution on the branch𝒮^±_j,α, possesses the unstable dimension j-1 (i.e. the dimension of the local unstable manifold), which is equal to the number of locations where the segregation occurs since it can be interpreted thatthe competitive species are generally segregating nearzeros of w(x) by (<ref>). 
In other words, the number of segregation points increases,the more unstable the steady-states become. Indeed, in the final section of this paper,it will be shown that these purely mathematical resultsfor the calculation of the Morse index are supportedby the numerical simulations (Figure 4).The contents of this paper is as follows: In Section 2, the linearly stable/unstable ofthe trivial and the semi-trivial steady-states will be shown as the preliminary. In Section 3, we drive the Morse index ofpositive steady-states of small coexistence type onthe branch 𝒞_α,. In Section 4, in the one-dimensional case, we show that the Morse index ofeach positive steady-state of segregation type on the branches 𝒮^±_j,α, is equal to j-1. In Section 5, we exhibit a numerical bifurcation diagram with information on the Morse index by usingthe continuation software .§ PRELIMINARYIn what follows, we mainly discuss the linear stability/instability of steady-states to (<ref>).Corresponding to (<ref>), we define the nonlinear operator F : ℝ_+×X_p→Y_p ( := L^p(Ω )× L^p(Ω )) byF(λ, u, v):= -Δ[ [ (1+α v)u; (1+α u)v ]] -[ [ u(λ -b_1u-c_1v); v(λ -b_2u-c_2v) ]].For each solution (u^*, v^*) of (<ref>), we denote the linearized operator of F around (u^*, v^*) by L(λ, u^*, v^* ), that is, L(λ, u^*, v^* ):=D_(u,v)F(λ, u^*, v^*).Hence it is easily verified that L(λ, u^*, v^* ) ∈ℒ(X_p, Y_p), where ℒ(X_p, Y_p) represents the set of bounded linear operators from X_p to Y_p. The linearized eigenvalue problem is formulated asL(λ, u^*, v^*) [ [ ϕ; ψ ]] = μ[ [ ϕ; ψ ]].Hence L(λ, u^*. v^*) is represented byL(λ, u^*,v^* ) [ [ ϕ; ψ ]] =-Δ( [ [ 1+α v^* α u^*; α v^* 1+α u^* ]] [ [ ϕ; ψ ]] )- [ [ λ -2b_1u^*-c_1v^* -c_1u^*; -b_2v^* λ -b_2u^*-2c_2v^* ]] [ [ ϕ; ψ ]]. If all the eigenvalues of (<ref>) have positive real parts, then the steady-state (u^*, v^*) is called linearly stable. If all the eigenvalues of (<ref>) have nonnegative real parts and there exist an eigenvalue whose real part is equal to zero, then (u^*, v^*) is called neutrally stable. If (<ref>) has at least one eigenvalue whose real part is negative, then(u^*, v^*) is called linearly unstable.Let q(x)∈ℝ andr(x)≥(≢) 0beHölder continuous functions in Ω. Consider the following eigenvalue problem:-Δϕ+q(x)ϕ =μ r(x)ϕΩ, ϕ =0∂Ω.It is well-known that all the eigenvalues of (<ref>) consist of non-decreasing sequenceμ_1<μ_2≤μ_3≤⋯ lim_j→∞μ_j=∞,and moreover, the least eigenvalue μ_1=μ_1(q,r) can be characterized by the following variational formula:μ_1(q,r)= inf {∫_Ω(|∇ϕ|^2+ q(x)ϕ^2)∫_Ωr(x)ϕ^2 : ϕ∈ H^1_0(Ω), ∫_Ωr(x)ϕ^2>0}.The least eigenvalue μ_1(q,r)is monotone increasing with respect to q and monotone decreasing with respect to rin the following sense. * If q_1(x)≤(≢) q_2(x) in Ω, thenμ_1(q_1, r)<μ_1(q_2, r).* If r_1(x)≤(≢) r_2(x) in Ω, thenμ_1(q, r_1)>μ_1(q, r_2). See e.g. <cit.> and<cit.> for the proofs ofthe assertions (i) and (ii), respectively.In this section, we note the linearized stability/instability of the trivial and the semi-trivial solutions to (<ref>). If λ≤λ_1, then (<ref>) admits only the trivial solution (u,v)=(0,0), and moreover,it is linearly stable when λ<λ_1 and it is neutrally stable when λ=λ_1. If λ>λ_1, then the trivial solution is linearly unstable.See <cit.> for the nonexistence of nontrivial solutions of (<ref>) when λ≤λ_1. Setting (u^*, v^*)=(0,0) in (<ref>), we see that the linearized operator L(λ, 0,0)is expressed as L(λ, 0,0 )=-Δ-λ I. 
Therefore, the linearized eigenvalue problem (<ref>) with(u^*,v^*)=(0,0) is reduced to the linear elliptic equations-Δϕ-λϕ =μϕΩ,-Δψ-λψ =μψΩ, ϕ =ψ =0∂Ω.by (<ref>). Hence the linearized stability/instability can be determined by the sign of μ_1(-λ , 1). In view of (<ref>) and the Krein-Rutman theorem, we obtain μ_1(-λ_1,1)=0. By using (i) of Lemma <ref>, we knowμ_1(-λ, 1)>0λ<λ_1,=0λ=λ_1,<0λ>λ_1.Then all the assertions of Lemma <ref> are verified.Next we consider the linearized stability/instability of semi-trivial solutions of (<ref>). By setting v=0 in (<ref>), one can see that the existence ofsemi-trivial solutions with v=0 is reduced to that of positive solutions of the following diffusive logistic equationΔ u+u(λ -b_1u)=0Ω,u=0 ∂Ω.It is well-known thatthe existence and uniqueness of positive solutions of (<ref>) hold true if and only if λ>λ_1, see e.g. <cit.>. Hereafter, the positive solution of (<ref>) will be denoted by θ(x;λ, b_1). It is also known that the set {(λ, θ( · ;λ, b_1)) ∈ (λ_1,∞)× X_p} forms a simple curve bifurcating fromthe trivial solution at λ=λ_1. Hence (<ref>) has nosemi-trivial solution if λ≤λ_1, two semi-trivial solutions (u,v)=(θ( · ;λ,b_1), 0) and (u,v)=(0, θ( · ;λ,c_2)) if λ>λ_1. Let λ>λ_1. If α>0 is sufficiently large, then semi-trivial solutions (θ( · ;λ,b_1), 0) and (0, θ( · ;λ,c_2)) are linearly stable.It obviously suffices to show the linearized stability of(θ( · ;λ,b_1), 0). To avoid complications,θ( · ;λ,b_1) will be abbreviated as θ_λ. Setting (u,v)=(θ_λ, 0) in (<ref>), we see thatL(λ, θ_λ, 0)= -Δ( [ [1 αθ_λ;0 1+αθ_λ ]] [ [ ϕ; ψ ]] ) - [ [ λ -2b_1θ_λ-c_1θ_λ;0λ -b_2θ_λ ]].Then the linearized eigenvalue problem (<ref>) around (u^*,v^*)=(θ_λ, 0) is reduced toΔ (ϕ +αθ_λψ ) +(λ -2b_1θ_λ)ϕ -c_1θ_λψ+μϕ=0 Ω, Δ [(1+αθ_λ)ψ] +(λ -b_2θ_λ)ψ+μψ=0 Ω, ϕ=ψ=0 ∂Ω.By employing the change of variables h(x):=ϕ (x)-ψ(x), k(x):=(1+αθ_λ(x))ψ(x),we transform (<ref>) toΔ h+λ h-2b_1θ_λh- (2b_1+c_1-b_2)θ_λ1+αθ_λk +μ h=0 Ω, Δ k+λ -b_2θ_λ1+αθ_λk +μ1+αθ_λk=0 Ω,h=k=0∂Ω.It will be shown that the principal eigenvalue μ_* of(<ref>) (or equivalently (<ref>)) is positive. In the case where k=0,(<ref>) is reduced to-Δ h+(2b_1θ_λ-λ )h=μ h Ω,h=0∂Ω.It is possible to check that the least eigenvalue μ =μ_1(2b_1θ_λ-λ,1) is positive. Indeed, it follows from (<ref>) that θ_λ satisfies-Δθ_λ+(b_1θ_λ-λ )θ_λ=0, θ_λ>0Ω, θ_λ=0∂Ω.This fact with the Krein-Rutman theorem implies μ_1(b_1θ_λ-λ, 1)=0. Then (i) of Lemma <ref> leads toμ_1(2b_1θ_λ-λ,1)> μ_1(b_1θ_λ-λ,1)=0.Therefore, we deduce that, if h≠ 0 and k=0 in (<ref>), then μ>0.In the other case where k≠ 0, we focus on the second equation of (<ref>) to know thatμ≥μ_1( b_2θ_λ-λ1+αθ_λ, 11+αθ_λ).To show the positivity of the right-hand side, we employ the following scalingσ_1(α ):= 1αμ_1( b_2θ_λ-λ1+αθ_λ, 11+αθ_λ) =μ_1( b_2θ_λ-λ1+αθ_λ, α1+αθ_λ),and show σ_1 (α )>0 if α >0 is sufficiently large. By virtue of the Dini theorem, one can see thatlim_α→∞b_2θ_λ-λ1+αθ_λ=0 lim_α→∞α1+αθ_λ=1θ_λuniformly in any compact subset Ω' of Ω. Then (ii) of Lemma <ref> implies thatlim inf_α→∞σ_1(α ) > μ_1(0, ζ (x)1θ_λ), where ζ (x) represents a smooth function satisfying 0≤ζ (x)≤ 1 for all x∈Ω and ζ (x)=1 for all x∈Ω' and ζ⊂Ω (with ( ζ, ∂Ω )>0).Since μ_1(0, ζθ_λ^-1) is the least eigenvalue of -Δϕ =μζ(x)1θ_λϕ Ω,ϕ=0 ∂Ω,then μ_1(0, ζθ_λ^-1)>0. Then we know that σ_1(α)>0 if α>0 is sufficiently large. 
That is to say, if k≠ 0 in (<ref>) and α>0 is sufficiently large, then μ >0.Consequently, we can conclude that the principal eigenvalue μ_* of (<ref>) (or equivalently (<ref>)) is positive if α>0 is sufficiently large. The proof of Lemma <ref> is accomplished. § MORSE INDEX OF SMALL COEXISTENCE STATESThis section is devoted to the stability analysis for positive steady-states of small coexistence type on the branch 𝒞_α, (expressed as (<ref>)). Our goal of this section is to obtain the Morse index of each solution on 𝒞_α,. In the expression of (λ, u( ·, λ, α ),v( · ,λ, α )) ∈𝒞_α,, we employ the change of variableε:=1αto introduce (u_ε(λ ), v_ε(λ )):= (u( ·, λ, ε^-1 ),v( · ,λ, ε^-1 )) and (U_ε(λ ), V_ε(λ )):= 1ε(u_ε(λ ),v_ε(λ )).Substituting (u^*, v^*)=(u_ε(λ ),v_ε(λ )) into (<ref>), we see thatL(λ, u_ε(λ ),v_ε(λ )) [ [ ϕ; ψ ]] =-Δ( [ [ 1+V_ε(λ ) U_ε(λ ); V_ε(λ ) 1+U_ε(λ ) ]] [ [ ϕ; ψ ]] ) - λ[ [ ϕ; ψ ]]+ε[ [ 2b_1U_ε(λ )+c_1V_ε(λ ) c_1U_ε(λ ); b_2V_ε(λ ) b_2U_ε(λ )+2c_2V_ε(λ ) ]] [ [ ϕ; ψ ]].Then the linearized eigenvalue problem(<ref>) around(u^*, v^*)=(u_ε(λ ),v_ε(λ )) is reduced to the following Dirichlet problem:Δ [(1+V_ε(λ ))ϕ +U_ε(λ )ψ] +λϕ-ε[ {2b_1U_ε(λ )+c_1V_ε(λ )}ϕ +c_1U_ε(λ )ψ] +μϕ=0 Ω, Δ [V_ε(λ )ϕ+(1+U_ε(λ ))ψ] +λψ-ε[b_2V_ε(λ )ϕ+ {b_2U_ε(λ )+2c_2V_ε(λ )}ψ] +μψ=0 Ω, ϕ=ψ=0 ∂Ω.In view of (<ref>), we recall that (U_ε(λ ), V_ε(λ ))→ (U(λ ),U(λ )) in X_p as ε→ 0, where we abbreviate U(λ )=U( · ,λ). Then setting ε→ 0 in (<ref>), we obtain the limiting eigenvalue problem as follows:Δ [(1+U(λ))ϕ +U(λ )ψ] +λϕ +μϕ=0 Ω, Δ [U(λ )ϕ+(1+U(λ ))ψ] +λψ +μψ=0 Ω, ϕ=ψ=0 ∂Ω.To know the set of all the eigenvalues of (<ref>), we prepare the following lemma:Suppose that r(x) (≢0)is a nonnegative and Hölder continuous function. Let U(λ ) be the positive solution of (<ref>). Then it holds thatμ_1(-λ1+U(λ ), r)=0λ>λ_1. Setting Z(λ )=(1+U(λ ))U(λ ) in (<ref>), one can see that Z(λ ) satisfies-Δ Z(λ )-λ1+U(λ )Z(λ )=0Ω, Z(λ )=0 ∂Ω.Thanks to the positivity that Z(λ )>0 in Ω, the application of the Krein-Rutman theorem to (<ref>) implies thatμ_1( -λ1+U(λ ),q)=0λ>λ_1.The proof of Lemma <ref> is complete.Suppose that λ>λ_1. Then, all the eigenvalues of (<ref>) consist of the union of{λ_j-λ}^∞_j=1, {μ_j( -λ1+2U(λ ), 11+2U(λ )) }^∞_j=1.Furthermore, all the eigenvalues contained in the latter set are positive.By the change of variablesh(x):=ϕ(x)-ψ (x)k(x):=(1+2U(λ ))(ϕ (x)+ψ (x))we reduce (<ref>) to a pair of eigenvalue problems for single equations with separate variables:-Δ h-λ h=μ hΩ, h=0 ∂Ωand-Δ k-λ1+2U(λ )k= μ1+2U(λ )kΩ, k=0 ∂Ω.Hence the set of eigenvalues of (<ref>) coincides with the union of the sets of eigenvalues of (<ref>) and (<ref>).It is obvious that (<ref>) admits nontrivial solutions if and only if λ+μ=λ_j. This fact means that all the eigenvalues of (<ref>) are arranged as {λ_j-λ}^∞_j=1. Then, by the combination ofLemma <ref> with r=1/(1+2U(λ )) and (i) of Lemma <ref>, we deduce thatμ_1( -λ1+2U(λ ), 11+2U(λ )) > μ_1( -λ1+U(λ ), 11+2U(λ ))=0λ>λ_1.This fact implies that, if k is not trivial, then μ is necessarily positive.Therefore, the negative eigenvalues of (<ref>) coincide with the negative elements of the set {λ_j-λ}^∞_j=1. Hence we obtain the assertion of Lemma <ref>.Next we consider the Morse index of any solutions on the bifurcation curve 𝒞_α, introduced by (<ref>). 
The following result asserts that positive solutions with the Morse index -1bifurcate from the trivial solution at λ=λ_1,and as λ increases along 𝒞_α,,the Morse index of positive solutions decreases by onefor each time λ exceeds the bifurcation pointsβ_2,α<β_3,α<⋯<β_j, α. Hereafter the Morse index of(λ, u( · ,λ, α ), v( · ,λ, α ))∈𝒞_α, will be denoted byi(λ, u( · ,λ, α ), v( · ,λ, α )),namely, i(u( · ,λ, α ), v( · ,λ, α )) denotes the number of negative eigenvalues of (<ref>) with (u^*, v^*)=(u( · ,λ, α ),v( · ,λ, α )).Let j:=max{j∈ℕ : λ_j< }. Suppose that λ_1,λ_2,⋯, λ_j in (<ref>) are simple eigenvalues. Then there exists a large α=α ( ) such that, if α>α and λ∈ (β_j, α, β_j+1, α]∩(λ_1,] with β_1,α:=λ_1, theni(λ, u( · ,λ, α ), v( · ,λ, α ))=j. We study the linearized eigenvalue problem (<ref>) around (u^*,v^*)=(u_ε(λ ), v_ε(λ )). Repeating the argument to get (<ref>), we recall that (<ref>) is equivalent to (<ref>). By the change of variablesh(x):=ϕ(x)-ψ (x)k(x):= (1+2V_ε(λ))ϕ(x)+ (1+2U_ε(λ ))ψ (x),we reduce (<ref>) toΔ[ [ h; k ]] + (λ +μ )[ [ 1 0; U_ε(λ )-V_ε(λ )/1+U_ε(λ ) + V_ε(λ ) 1/1+U_ε(λ ) + V_ε(λ ) ]] [ [ h; k ]] -ε2(1+U_ε(λ ) + V_ε(λ ))[ [ 2b_1U_ε(λ )+ (c_1-b_2)V_ε(λ ) (c_1-b_2)U_ε(λ ) -2c_2V_ε(λ ); 2b_1U_ε(λ )+ (c_1+b_2)V_ε(λ ) (c_1+b_2)U_ε(λ ) +2c_2V_ε(λ ) ]]×[ [1+2U_ε(λ ) 1; -1-2V_ε(λ ) 1 ]] [ [ h; k ]] = [ [ 0; 0 ]]with homogeneous Dirichlet boundary conditionsh=k=0 on ∂Ω. For (λ, h, k, μ, ε )∈ (λ_1,)×X_p×ℂ×ℝ, we define G( λ, h, k, μ, ε )∈Y_p by the left-hand side of (<ref>). Furthermore, we define H(λ, h, k, μ, ε )∈Y_p×ℝ byH(λ, h, k, μ, ε ):= [ [ G(λ, h, k, μ, ε ); h^2_2+k^2_2-1 ]],where · _2 denotes the usual norm of L^2(Ω ). Since G(λ, h, k, μ, ε ) is linear with respect to (h,k), then it is reasonable to regard X_p and Y_p asX_p= [W^2,p(Ω: ℝ)∩ W^1,p_0(Ω: ℝ)] × [W^2,p(Ω: ℂ)∩ W^1,p_0(Ω: ℂ)], Y_p=L^p(Ω: ℂ)× L^p(Ω: ℂ)in the proof of Theorem <ref>.Here we recall the eigenvalue problem (<ref>) at the limit ε→ 0. It follows from Lemma <ref> that all the eigenvalues of(<ref>) with λ=λ^*∈ (λ_1, ∞)consist ofλ_1-λ^*<λ_2-λ^*<⋯and( 0< ) μ_1( -λ^*1+2U(λ^*), 11+2U(λ^*)) < μ_2( -λ^*1+2U(λ^*), 11+2U(λ^*)) <⋯.In order to count the negative eigenvalues, we setμ_i^*:=λ_i-λ^* i∈ℕ.In the case where λ^*∈ (λ_j, λ_j+1) with some j∈ℕ, all the negative eigenvalues of (<ref>)with λ=λ^* consist of {μ^*_i}^j_i=1, that is,μ^*_1<μ^*_2<⋯<μ^*_j ( <0< ) μ^*_j+1<⋯.In the other case where λ^*=λ_j with some j≥ 2, all the negative eigenvalues of (<ref>)with λ=λ^* consist of {μ^*_i}^j-1_i=1, that is,μ^*_1<μ^*_2<⋯<μ^*_j-1 <μ^*_j=0 < μ^*_j+1<⋯. Weconsider (<ref>)-(<ref>) with (λ, μ)=(λ^*, μ_i^*). Then the corresponding eigenfunctions are obtained as(h^*,k^*)=t(_i,0)t≠ 0,where _i denotes an L^2 normalized eigenfunction of -Δ with the eigenvalue λ=λ_i under the homogeneous Dirichlet boundary condition, namely, _i satisfies (<ref>) with λ=λ_i and _i_2=1. It follows thatH(λ^*, _i, 0, μ^*_i, 0)=0.Our strategy is to construct the zero-level set of H near (λ^*, _i, 0, μ^*_i, 0) ∈ℝ×X_p×ℂ×ℝ by the applications of the implicit function theorem. To do so, we need to check thatD_(h,k,μ)H(λ^*, _i, 0, μ^*_i, 0) ∈ℒ(X_p×ℂ, Y_p×ℝ)is an isomorphism. 
In view of the left-hand side of (<ref>), one can verify thatD_(h,k,μ)H (λ^*, _i, 0, μ^*_i, 0) [ [ h; k; μ ]] = [ [ D_(h,k,μ)G (λ^*, _i, 0, μ^*_i, 0) [ [ h; k; μ ]];2∫_Ω_ih ]]= [ [ Δ[ [ h; k ]] +(λ^*+μ^*_i) [ [ 1 0; 0 1/1+2U(λ^*) ]] [ [ h; k ]] + μ[ [ 1 0; 0 1/1+2U(λ^*) ]] [ [ _i;0 ]];2∫_Ω_ih ]].By virtue of λ^*+μ^*_i=λ_i, we see thatD_(h,k,μ)H (λ^*, _i, 0, μ^*_i, 0) [ [ h; k; μ ]] = [ [ Δ h+λ_ih+μ_i; Δ k+λ_i/1+2U(λ^*)k;2∫_Ω_ih ]].In order to verify that D_(h,k,μ)H(λ^*, _i, 0, μ^*_i, 0) ∈ℒ(X_p×ℂ, Y_p×ℝ) is an isomorphism, we suppose that D_(h.k.μ) H(λ^*, _i, 0, μ^*_i, 0)(h,k,μ)^T=0, that is,-Δ h-λ_ih=μ_iΩ,-Δ k-λ_i/1+2U(λ^*)k=0 Ω, ∫_Ω_ih=0,h=k=0∂Ω.Taking the inner product of the first equation with _i,we obtain μ=0. This fact leads to -Δ h=λ_ih Ω,h=0 ∂Ω,∫_Ω_ih=0.Owing to the assumption that λ_i is a simple eigenvalue of (<ref>), the Fredholm alternative theorem yields h=0. It follows fromLemma <ref> and (i) of Lemma <ref> that0=μ_1( -λ^*1+U(λ^*),1) < μ_1( -λ^*1+2U(λ^*),1).Whether in the case λ^*∈ (λ_j, λ_j+1) with some j∈ℕ or the caseλ^*=λ_j with some j≥ 2, it follows that λ_i≤λ^* for any i=1,2,…,j. Then (i) of Lemma <ref> with (<ref>) implies0 < μ_1( -λ^*1+2U(λ^*),1) ≤μ_1( -λ_i1+2U(λ^*),1)i=1,2,…,j.Therefore, we know from the second equation of(<ref>) thatk=(-Δ-λ_i1+2U(λ^*))^-10=0i=1,2,…,j.Consequently, we deduce that D_(h,k,μ)H(λ^*, _i, 0, μ^*_i, 0) ∈ℒ(X_p×ℂ, Y_p×ℂ) is an isomorphism for each i=1,2,…,j. With (<ref>), the implicit function theorem gives a neighborhood 𝒰^*_i of (λ^*, _i, 0, μ^*_i, 0) ∈ℝ×X_p×ℂ×ℝ, a small numberδ_i(λ^*)>0and the mappingB((λ^*, 0); δ_i(λ^*))∋ (λ, ε )↦ (h_i(λ, ε ), k_i(λ, ε ), μ_i (λ, ε )) ∈X_p×ℂof class C^1, where B((λ^*, 0); δ_i(λ^*))={ (λ, ε)∈ℝ^2 :(λ-λ^*)^2+ε^2< δ_i(λ^*)^2}, such that all the solutions ofH(λ, h, k, μ, ε )=0 in 𝒰^*_i are given by(h,k,μ)=(h_i (λ, ε ), k_i (λ, ε ), μ_i(λ, ε )) (λ, ε )∈ B((λ^*, 0); δ_i(λ^*)).Hence it holds that(h_i (λ, 0 ), k_i (λ, 0 ), μ_i(λ, 0 )) = (_i, 0, λ_i-λ ). It will be shown that{μ_i(λ, ε )}^j_i =1 are real for any (λ, ε )∈ B((λ^*, 0); δ^*_i ). Suppose for contradiction that μ_i(λ̂, ε̂)≠ 0 with some i∈{1,2,…, j} and (λ̂, ε̂)∈ B((λ^*, 0); δ^*_i ). Taking the complex conjugate of (<ref>), one can verify that the conjugateμ_i(λ̂, ε̂)( ≠μ_i(λ̂, ε̂) ) is also an eigenvalue of (<ref>)(also (<ref>)). It follows thatH(λ̂, h_i(λ̂, ε̂), k_i(λ̂, ε̂), μ_i(λ̂, ε̂), ε̂)=0H(λ̂, h_i(λ̂, ε̂), k_i(λ̂, ε̂),μ_i(λ̂, ε̂), ε̂)=0.Hence this contradicts the fact that all the solutions of H(λ, h, k, μ, ε )=0 in 𝒰^*_i are uniquely determined by (λ, h_i(λ , ε ),k_i(λ, ε ), μ_i(λ, ε ), ε ) for all (λ, ε )∈ B((λ^*, 0); δ^*_i ).We aim to count the negative eigenvaluesof (<ref>) (also (<ref>)) whenλ is close to the bifurcation point β_j(ε ):=β_j, ε with α =1/ε. By virtue of β_j(ε )→λ_j as ε→ 0, we first set λ^*=λ_j with some j≥ 2. In view of (<ref>) and (<ref>), we see that if 0<ε <δ (λ_j):=min_i=1,…, jδ_i(λ_j),then there are at least j-1negative eigenvalues of (<ref>) (also (<ref>)) asμ_1(λ_j, ε )<μ_2(λ_j, ε )<⋯< μ_j-1(λ_j, ε ) ( <0 )and μ_j(λ_j, ε ) lies in a neighborhood of zero. Here we recall that the bifurcation point(β_j,α, u( · ,β_j,α, α ), v( · ,β_j,α, α ))∈𝒞_α,satisfies β_j,α→λ_j as α→∞. Then the continuity of μ_i(λ, ε ) implies thatμ_1(β_j(ε ), ε )<μ_2(β_j(ε ), ε )<⋯< μ_j-1(β_j(ε ), ε ) ( <0 ),where β_j(ε ):=β_j,α with α=1/ε, if ε>0 is sufficiently small. Here we remark that (<ref>) (also (<ref>)) withλ=β_j(ε ) necessarily has the zero eigenvalue since(β_j,α, u( · ,β_j,α, α ), v( · ,β_j,α, α ))∈𝒞_α, is a bifurcation point. 
Together with the facts that μ_j(λ_j, ε ) is the unique eigenvalue near zero(for (<ref>) with λ=λ_j)and that β_j(ε )→λ_j as ε→ 0, we deduce thatμ_j(β_j(ε ), ε )=0if ε>0 is sufficiently small.Next it will be shown that ∂_λμ_j(β_j(ε ), ε )<0. Differentiating the first component of (<ref>)by λ and setting (λ, ε)=(λ_j, 0)in the resulting expression,one can see that h_λ(λ, ε ):= ∂_λ h(λ, ε ) satisfiesΔ h_λ(λ_j, 0)+(1+ ∂_λμ_j(λ_j, 0))_j+ (λ_j+μ_j(λ_j, 0))h_λ(λ_j, 0)=0Ω.Here we note μ_j(λ_j, 0)=0 by setting ε = 0 in (<ref>). Taking the inner product of (<ref>) with _j, we obtain ∂_λμ_j(λ_j, 0)=-1. We recall that μ_j(λ, ε )is of class C^1 near (λ, ε )=(λ_j, 0). Therefore, there exists a small positive number δ^#_j such that, if 0<ε<δ^#_j, then∂_λμ_j(β_j(ε ), ε )<-12. By virtue of (<ref>), (<ref>) and (<ref>), there exists τ_j>0such that, if 0<ε<δ^#_j, thenμ_1(λ, ε )<μ_2(λ, ε )<⋯< μ_j-1(λ, ε )<0λ∈(β_j(ε )-τ_j, β_j(ε )+τ_j)andμ_j(λ, ε )>0λ∈(β_j(ε )-τ_j, β_j(ε )),=0λ=β_j(ε ),<0λ∈ (β_j(ε ), β_j(ε )+τ_j). The final task of the proofis to count the negative eigenvalues of (<ref>) (also (<ref>)) when λ is away from the bifurcation point β_j(ε ). To do so, we assumeλ^*∈ ( λ_j+τ_j2, λ_j+1-τ_j+12)=:I_jwith some j∈ℕ. By virtue of (<ref>) and (<ref>), we see that if 0<ε <δ (λ^*),then there exist negative eigenvalues of (<ref>) (also (<ref>)) asμ_1(λ^*, ε )<μ_2(λ^*, ε )<⋯< μ_j(λ^*, ε ) ( <0 ).Together with the continuity of μ_i(λ, ε) (i=1,2,… , j), we find a small τ^*>0, which is depending on λ^* such that if 0<ε <δ^*, thenμ_1(λ, ε )<μ_2(λ, ε )<⋯< μ_j(λ, ε ) ( <0 )λ∈ (λ^*-τ^*, λ^*+τ^*)=:J(λ^*).By virtue of β_j(ε )→λ_j as ε→ 0, we can see that⋃^j_j=1⋃_λ^*∈ I_jJ(λ^*)∪(β_2(ε )-τ_2, β_2(ε )+τ_2) ∪⋯∪ (β_j(ε )-τ_j, β_j(ε )+τ_j)gives an open covering of the compact set [λ_1+τ_1, ]. It follows that there existλ^*_(1),…, λ^*_(m)∈⋃^j_j=1I_j such that⋃^m_n=1J(λ^*_(n))∪(β_2(ε )-τ_2, β_2(ε )+τ_2) ∪⋯∪ (β_j(ε )-τ_j, β_j(ε )+τ_j)covers [λ_1+τ_1, ]. Therefore, we can deduce from (<ref>), (<ref>) and (<ref>) that, if 0<ε<min_j=1,…,jδ^#_j 0<ε<min_n=1,…,mδ (λ^*_(n)),then, for each j=1,…,j,μ_1(λ, ε )<μ_2(λ, ε )<⋯< μ_j(λ, ε )<0λ∈(β_j(ε ), β_j+1(ε )] ∩ (λ_1+τ_1, ].By a slight modification of the argument to get(<ref>), we can show thatμ_1(λ, ε )<0λ∈ (λ_1, λ_1+τ_1]if ε>0 is small enough. Consequently, the above argument enables us to conclude thati(λ, u( · ,λ, ε^-1),v( · ,λ, ε^-1))=jλ∈ (β_j(ε ), β_j+1(ε )]∩ (λ_1, ],where β_1(ε )=λ_1. Then the proof of Theorem <ref> is complete.In the one-dimensional case, it can be proved that the Morse index i(λ, u( · ,λ, α ) v( · ,λ, α )) obtained in Theorem <ref> is equal to the dimension of the local unstable manifold of the steady-state (u( · ,λ, α ), v( · ,λ, α )) of (<ref>).AssumeΩ=(-ℓ, ℓ).If α>0 is large, then for each λ∈ (β_j, α, β_j+1, α)∩ (λ_1, ], there exists a local unstable manifold 𝒲^u((u( · ,λ, α ), v( · ,λ, α )); 𝒪) in Z_θ:=W^1+θ,2_0(Ω )× W^1+θ, 2_0(Ω ) satisfying𝒲^u((u( · ,λ, α ), v( · ,λ, α )); 𝒪)=j, where θ∈ (0,1/2) and 𝒪⊂Z_θ is a neighborhood of(u( · ,λ, α ), v( · ,λ, α )).In the one-dimensional case where Ω=(-ℓ, ℓ), it is known that (<ref>) admits a unique time-global solution in the class (u, v)∈ C([0,∞): Z_θ)∩ C^∞((0,∞)×Ω)^2, see e.g., <cit.>. Furthermore, it is also known from <cit.> that the dynamical system from the abstractevolution equation associated with (<ref>) can be determined in the universal space Z_θ. Then the result in <cit.> ensures the desired local unstable manifold. 
§ MORSE INDEX OF SEGREGATION STATESIn this section, the Morse index of almost all solutions on𝒮^±_j, α, (in (<ref>) and (<ref>)) will be obtained. Before stating the result, it should be noted that𝒮^±_j, α, can beparameterized byλoutside the neighborhood of bifurcation point. AssumeΩ=(-ℓ, ℓ).For any small η>0 and j≥ 2, there exist smallδ^±_j(η,)>0 such that for each α>1/δ^±_j, there exists a pair of simple curves𝒮^±_j,α(λ_j+η, )( ⊂𝒮^±_j, α,) consisting of positive solutions to (<ref>) parameterized by λ∈ (λ_j+η, ) such as𝒮^±_j,α(λ_j+η, )= { (λ, u^±_j(λ,ε ),v^±_j(λ,ε)) ∈ (λ_j+η, )×X_p },where ε=1/α and(λ, u^±_j(λ,ε ),v^±_j(λ,ε))= (ξ_j(s, α ), u_j( · ,s, α ), v_j( · ,s, α ))with the expressions in (<ref>) and (<ref>).The next result gives the Morse indexi(λ, u^±_j(λ, ε ), v^±_j(λ, ε )) of any(λ, u^±_j(λ, ε ), v^±_j(λ, ε )) ∈𝒮^±_j, α(λ_j+η,):AssumeΩ=(-ℓ, ℓ). For any(λ, u^±_j(λ, ε ), v^±_j(λ, ε ))∈𝒮^±_j, α(λ_j+η,), there exists a small positiveδ which is independent of λ∈ (λ_j+η, ) and j∈{2,…,j} such that, if 0<ε<δ, then i(λ, u^±_j(λ, ε ), v^±_j(λ, ε ))=j.Our task is to count the negative eigenvalues ofL(λ,u^±_j(λ, ε ),v^±_j(λ, ε )) [ [ ϕ; ψ ]] = μ[ [ ϕ; ψ ]] λ∈ [λ_j+η,].Here we fix λ^*∈ (λ_j-η/2, +η/2) arbitrarily. In what follows,we abbreviate(u^*_ε, v^*_ε):= (u^±_j(λ^*, ε ),v^±_j(λ^*, ε ))to avoid complications with subscripts. By the change of variablesh(x):=ϕ (x)-ψ (x), k(x):=(1+2ε^-1v^*_ε(x))ϕ (x) +(1+2ε^-1u^*_ε(x))ψ (x),we reduce (<ref>) to the equationG(λ, h,k,μ, ε )=0,where the mapping (λ_j+η,)×X_p×ℂ×ℝ∋ (λ, h,k,μ, ε )↦G(λ, h,k,μ, ε ) ∈Y_pis defined byG(λ, h,k,μ, ε ):=d^2dx^2[ [ h; k ]] + (λ +μ )[ [10; u^*_ε-v^*_ε/ε+u^*_ε+ v^*_εε/ε +u^*_ε+ v^*_ε ]] [ [ h; k ]] -12(ε+ u^*_ε+ v^*_ε)[ [ 2b_1u^*_ε+ (c_1-b_2)v^*_ε (c_1-b_2)u^*_ε -2c_2v^*_ε; 2b_1u^*_ε+ (c_1+b_2)v^*_ε (c_1+b_2)u^*_ε +2c_2v^*_ε ]] [ [ ε +2u^*_ε ε; -ε-2v^*_ε ε ]] [ [ h; k ]].We recall from (<ref>) thatlim_ε→ 0(u^*_ε, v^*_ε) =(w^*_+,w^*_-)[-ℓ,ℓ ],where w^*∈𝒮^+_j,∞∪𝒮^-_j,∞ with λ=λ^*. Here we consider the limiting equation G(λ^*, h,k,μ, 0)=0, that is,h”+λ^* h-2(b_1w^*_++c_2w^*_-)h+μ h=0 (-ℓ, ℓ),k”+(λ^*+μ) ( w^*)h- 2(b_1w^*_+-c_2w^*_-)h=0 (-ℓ, ℓ),h(±ℓ )=k(±ℓ )=0,where w^*:=w^*/|w^*|. We note that the first equation of (<ref>) is corresponding to the linearized eigenvalue problem of (<ref>) around w^*, that is,-h”-f_w(λ^*, w^*)h=μ h(-ℓ, ℓ), h(±ℓ )=0,wheref(λ, w)= λ w-b_1(w_+)^2+c_2(w_-)^2=w(λ -b_1w)w≥ 0,w(λ +c_2w)w<0.Let μ_i,0∈ℝ be the i-th eigenvalue of (<ref>). By the Sturm-Liouville theory, it is well-known that♯{(<ref>)}= ♯{ z∈ (-ℓ, ℓ) : w(z)=0} =j.Then it follows that all the eigenvalues of (<ref>) are arranged asμ^*_1,0<μ^*_2,0<⋯<μ^*_j,0 ( <0< ) μ^*_j+1,0<⋯.For (<ref>) with μ=μ^*_i,0, let h^*_i,0(x) be the eigenfunction with h^*_i,0_2=1 and (h^*_i,0)'(-ℓ )>0. In view of (<ref>), we see thatG (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0)=0,wherek^*_i,0:=𝒢[ { 2(c_2w^*_--b_1w^*_+) +(λ^*+μ^*_i,0)w^* } h^*_i,0 ].Here 𝒢 represents the inverse operator of -d^2/dx^2 with homogeneous Dirichlet boundary conditions at x=±ℓ. In (<ref>), we set h=0 and consider the equation G(λ, 0,k,μ, ε )=0, which consists of {(2b_1+c_1-b_2)u^*_ε+ (c_1-b_2-2c_2)v^*_ε}k=0 (-ℓ, ℓ),- k” -ε2(ε +u^*_ε +v^*_ε)[ 2(λ +μ ) - {(2b_1+c_1+b_2)u^*_ε +(c_1+b_2+2c_2)v^*_ε}]k =0 (-ℓ, ℓ),k(±ℓ )=0.It can be shown thatthe corresponding eigenvalues μ_ε_n satisfy μ_ε_n→∞. 
Suppose for contradiction that μ_ε_n<M with some positive constant M independent of n.Indeed, multiplying the second equation of (<ref>) by 1/k_n_∞ and setting n→∞ in the resulting expression, we know from (<ref>) and the elliptic regularity thatk_n=k_n/k_n_∞, converges to some function kwhich satisfies k_∞=1 and-k”=0Ω,k(±ℓ )=0,which obviously leads to k=0. However this contradicts k_∞=1. Then we can say that {μ_ε_n} are unbounded. Suppose for contradiction that μ_ε_n→ -∞ subject to a subsequence.With the aid of (<ref>), we know that the operator-d^2dx^2 -ε_n2(ε_n +u_ε_n+v_ε_n )[ 2(λ +μ_ε_n ) - {(2b_1+c_1+b_2)u_ε_n+(c_1+b_2+2c_2)v_ε_n }] ∈ℒ(X_p, Y_p)is invertible if n is sufficiently large. Then in this situation, the second equation implies k_n=0. However, this is a contradiction because μ_ε_n are eigenvalues. Therefore, we can deduce that, if some k≠ 0 satisfies G(λ, 0,k,μ,ε )=0,then μ is positive and away from zero when ε >0 is sufficiently small. Therefore, we may assume h≠ 0 in (<ref>) when counting negative eigenvalues of (<ref>).Here we define the mapping H :(λ_j+η,)×X_p×ℂ×ℝ→Y_p×ℝ byH(λ, h,k,μ,ε ):= [ [ G(λ, h,k,μ,ε );h^2_2-1 ]],where X_p and Y_pare regarded as (<ref>). Then it follows from (<ref>) thatH (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0)=0.In order to construct the zero-level set ofH in a neighborhood of (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0)∈ℝ×X_p×ℂ×ℝwith the aid of the implicit function theorem, we need to show thatD_(h,k,μ )H (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0) ∈ℒ(X_p×ℂ, Y_p×ℝ)is an isomorphism. For this end, we assume that D_(h,k,μ )H (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0) (h,k,μ)^T=0, namely,h”+(λ^* +μ^*_i,0)h- 2(b_1w^*_++c_2w^*_-)h+μ h^*_i,0=0 (-ℓ, ℓ),k”+(λ^*+μ^*_i,0) ( w^*)h -2(b_1w^*_+-c_2w^*_-)h +μ ( w^*) h^*_i,0=0(-ℓ, ℓ ), ∫^ℓ_-ℓh^*_i,0 h=0,h(±ℓ )=k(±ℓ )=0.By taking the inner product of the first equation of (<ref>) with h^*_i,0, we use the first equation of (<ref>) to see μ =0. Then the first equation is reduced toh”+(λ^*+μ^*_i,0)h- 2(b_1w^*_++c_2w^*_-)h=0(-ℓ, ℓ).Hence it follows that h=th^*_i,0. By the integral condition of (<ref>),we see t=0, and therefore, h=0. Finally we obtain k=0 by the second equation of (<ref>). Consequently, we deduce that D_(h,k,μ )H (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0) is an isomorphism from X_p×ℂ to Y_p×ℝ. Therefore, for each i, the implicit function theorem yields a neighborhood𝒰^*_i,0 of (λ^*, h^*_i,0, k^*_i,0, μ^*_i,0, 0) ∈ℝ×X_p×ℂ×ℝ and a small positive number δ_i and a mappingB((λ^*, 0); δ^*_i) ∋(λ, ε )↦ (h^*_i(λ, ε ),k^*_i(λ, ε ),μ^*_i(λ, ε )) ∈X_p×ℂof class C^1 such that all the solutions ofH(λ, h,k,μ,ε )=0 in 𝒰^*_i,0 are represented by(h,k, μ)= (h^*_i(λ, ε ),k^*_i(λ, ε ),μ^*_i(λ, ε ))(λ, ε )∈B((λ^*, 0); δ^*_i).Hence it holds that (h_i(λ, 0 ),k_i(λ, 0 ),μ_i(λ, 0 ))= (h_i,0, k_i,0, μ_i,0), where (h_i,0, k_i,0, μ_i,0) is the solution of (<ref>) with λ^* replaced by λ satisfying h_i,0_2=1 and h_i,0'(-ℓ )>0. Since μ_i,0∈ℝ, then we may assume μ_i(λ, ε )∈ℝ for (λ, ε )∈ B((λ^*, 0); δ^*_i) bythe same argument just after (<ref>)in the proof of Theorem <ref>. By the continuity of μ_i(λ, ε ) with respect to (λ, ε ), we can deduce from (<ref>) thatμ_1(λ, ε )<μ_2(λ, ε ) <⋯<μ_j(λ, ε )( <0< ) μ_j+1(λ, ε )for any (λ, ε )∈B((λ^*, 0); δ^*_j). Since⋃ {(λ^*-δ^*_i, λ^*+δ^*_i) : λ^*∈(λ_j-η2, +η2)}gives an open covering of the compact set [λ_j+η,], then the similar procedure to the argument below (<ref>) ensures a small positive δ which is independent of λ∈ [λ_j+η,] and j∈{2,…, j} such that, if 0<ε <δ, then (<ref>) holds true for anyλ∈ [λ_j+η, ]. 
Then the proof of Theorem <ref> is complete. The Morse indexi(λ, u^±_j(λ, ε ), v^±(λ, ε )) obtained in Theorem <ref>corresponds to the dimension ofthe local unstable manifolds forthe steady-states (u^±_j(λ, ε ), v^±(λ, ε )) of (<ref>) with Ω =(-ℓ, ℓ). In the case where Ω =(-ℓ, ℓ), let (λ, u^±_j(λ, ε ), v^±_j(λ, ε )) ∈𝒮^±_j,α(λ+η,). If α=1/ε is sufficiently large, then there exist unstable local manifolds 𝒲^u((u^±_j(λ, ε ), v^±_j(λ, ε )); 𝒪^±_j) in Z_θ:=W^1+θ,2_0(Ω )× W^1+θ, 2_0(Ω ) satisfying 𝒲^u((u^±_j(λ, ε ), v^±_j(λ, ε )); 𝒪^±_j)=j, where θ∈ (0,1/2) and 𝒪^±_j⊂Z_θ are neighborhoods of(u^±_j(λ, ε ), v^±_j(λ, ε )), respectively.The proof is essentially same as that of Corollary <ref>.§ NUMERICAL RESULTSIn this section, we exhibit the numerical results on the Morse index of positive steady-statesby using the continuation software<cit.> based on an FEM discretization of the stationary problem. For (<ref>), our setting of parameters in the numerical simulationis as follows:Ω=(-0.5,0.5), α=20, b_1=3, b_2=2, c_1=2, c_2=1. Our previous paper with Inoue <cit.> has already shown the numerical bifurcation diagram as in Figure 1, where the horizontal axis represents the bifurcation parameter λ,and the vertical axis represents the L^2 norm of theu component of positive solutions (u,v) to (<ref>). In Figure 1, the blue curve is corresponding to the branch𝒞_20,of small coexistence (see (<ref>)) bifurcating from the trivial solution at λ=λ_1. The theoretical value of λ_1 with (<ref>) is equal to π^2 ( ≒ 9.8696).Figure 2, also a reprint from <cit.>,shows the profile of a positivesolution (u,v)corresponding to the point at λ=59.8286 on the blue curve 𝒞_20,.In Figure 1,the red curve exhibits the upper branch 𝒮_2,20,^+ and lower one 𝒮_2,20,^-((<ref>) and (<ref>)). The pitchfork bifurcation curve 𝒮_2, 20, bifurcates from a solution on the blue curve at β_2,20. It should be noted the L^2 norm of each component of 𝒮_2,20, ^+ and𝒮_2,20, ^- are shown overlapped. It was shown in <cit.> thatthe secondary bifurcation pointβ_2,α theoretically tends toλ_2=(2π)^2 ( ≒ 39.4784) as α→∞.In Figure 3, also a reprint from <cit.>, (a) and (b) exhibit the profiles of solutions on the upper branch 𝒮^+_2, 20,and the lower one 𝒮^-_2, 20,of the red pitchfork bifurcation curve𝒮_2, 20, at λ=40.3421, respectively. It can be observed that u and v are somewhat spatially segregated. Furthermore, as the solution moves awayfrom the bifurcation point on the blue curve,the numerical simulation shows thatthe segregation between u and v becomes more overt.Actually, (c) and (d) in Figure 3 exhibit the profiles of solutions on 𝒮^+_2, 20,and 𝒮^+_2, 20,on the red pitchfork bifurcation curve𝒮_2, 20,at λ=43.0673,where u and v considerably segregate each other.The numerical result of this paper is shown in Figure 4 exhibiting the Morse index of each solutions of (<ref>) with (<ref>). It should be noted that the blue curve in Figure 4 bifurcating from the trivial solution at λ=λ_1 (=π^2≒ 9.8696) is corresponding to the bifurcation curve of the semi-trivial solutions (u,v)=(θ_λ, 0) for λ>λ_1, which is not included in Figure 1 focusing only on positive solutions. In Figure 4, the blue part is the stable branch ofthe trivial solution and the semi-trivial solutions; the red part is the unstable branch ofthe solutions of small coexistence and of segregation with the Morse index 1; the green part is the unstable branch ofthe solutions with the Morse index 2, the light blue part is unstable branch ofthe solutions with the Morse index 3. 123456 Am1 H. 
Amann, Dynamic theory of quasilinear parabolic systems, III. Global existence, Math Z., 202 (1989), 219–250. Am2 H. Amann, Dynamic theory of quasilinear parabolic equations, II. Reaction-diffusion systems, Differ. Int, Eqns., 3 (1990), 13–75.Am3 H. Amann, Nonhomogeneous linear and quasilinear and parabolic boundary value problems. In: H. Schmeisser, H. Triebel eds. Function Spaces, Differential Operators and Nonlinear Analysis. Teubner-Texte Zur Math., vol. 133, Stuttgart-Leibzig: Teubner, pp. 9–126. BKS M. Breden, C. Kuehn, C. Soresina, On the influence of cross-diffusion in pattern formation, J. Comput. Dyn., 8 (2021), 213–240. CC1 R. S. Cantrell, C. Cosner, Diffusive logistic equations with indefinite weights:population models in disrupted environments, Proc. Royal Soc. Edinburgh A, 112 (1989), 293–318.CC2 R. S. Cantrell and C. Cosner, Spatial Ecology via Reaction-Diffusion Equations, Wiley Series in Mathematical and Computational Biology.John Wiley & Sons, Ltd., Chichester, 2003.DRUW T. Dohnal, J. D. M. Rademacher, H. Uecker, D. Wetzel,pde2path 2.0: Multi-parameter continuation and periodic domains, in Proceedings of the 8th European Nonlinear Dynamics Conference, ENOC, 2014 (2014).GT D. Gilbarg, N. S. Trudinger,Elliptic Partial Differential Equations ofSecond Order, Springer-Verlag, Berlin-Heidelberg,1998.HY T. Hirose, Y. Yamada, Multiple existence of positive solutions of competing speciesequations with diffusion and large interactions, Adv. Math. Sci. Appl. Vol. 12 (2002), 435-453.IKS J. Inoue, K. Kuto, H. Sato, Coexistence-segregation dichotomy in the full cross-diffusionlimit of the stationary SKT model, J. Differ. Equ., 373 (2023), 48–107.Ka Y. Kan-on, On the limiting system in the Shigesada, Kawasaki and Teramoto modelwith large cross-diffusion rates, Discrete Contin. Dyn. Syst. 40 (2020), 3561-3570.KiJ.-U. Kim, Smooth solutions to a quasi-linear system of diffusion equations for a certain population model, Nonlinear Anal., 8 (1984), 1121–1144. Ku1 K. Kuto, Limiting structure of shrinking solutions tothe stationary SKT model with large cross-diffusion, SIAM J. Math. Anal., 47 (2015), 3993–4024.Ku2 K. Kuto, Full cross-diffusion limit in thestationary Shigesada-Kawasaki-Teramoto model, Ann. Inst. Henri Poincaré C, Anal. Non Linéaire, 38 (2021), 1943–1959.Ku3 K. Kuto, Global structure of steady-states to the full cross-diffusion limitin the Shigesada-Kawasaki-Teramoto model, J. Differ. Equ., 333 (2022), 103–143. KY1 K. Kuto, Y. Yamada, Positive solutions for Lotka-Volterra competition systemswith large cross-diffusion, Appl. Anal., 89 (2010), 1037-1066.KY2 K. Kuto, Y. Yamada, On limit systems for some population models with cross-diffusion, Discrete Contin. Dyn. Syst. B,17 (2012), 2745-2769.Le D. Le, Global existence for some cross diffusion systems with equal crossdiffusion/reaction rates, Adv. Nonlinear Stud., 20 (2020), 833–845.LX Q. Li, Q. Xu, The stability of nontrivial positive steady states for the SKT model with large cross-diffusion, Acta Math. Appl. Sin. Engl. Ser., 36 (2020),657–669.LW1 Q. Li, Y. Wu, Stability analysis on a type of steady state for the SKT competition model with large cross diffusion, J. Math. Anal. Appl., 462 (2018), 1048–1078.LW2 Q. Li, Y. Wu, Existence and instability of some nontrivial steady states for the SKT competition model with large cross diffusion, Discrete Contin. Dyn. Syst.,40 (2020), 3657–3682. LN1 Y. Lou, W.-M. Ni, Diffusion, self-diffusion and cross-diffusion,J. Differ. Equ., 131 (1996), 79–131.LN2 Y. Lou, W.-M. 
Ni, Diffusion vs cross-diffusion: an elliptic approach,J. Differ. Equ., 154 (1999), 157–190.LNY1 Y. Lou, W.-M, Ni, S. Yotsutani, On a limiting system in the Lotka-Volterra competition with cross-diffusion, Discrete Contin. Dyn. Syst., 10 (2004), 435–458.LNY2 Y. Lou, W.-M, Ni, S. Yotsutani, Pattern formation in a cross-diffusion system, Discrete Contin. Dyn. Syst., 35 (2015), 1589–1607.LW Y. Lou, M. Winkler, Global existence and uniform boundedness ofsmooth solutions to a cross-diffusion system with equal diffusion rates, Commun. Partial. Differ. Equ., 40 (2015), 1905–1941.MSY T. Mori, T. Suzuki, S. Yotsutani, Numerical approach to existence and stability of stationarysolutions to a SKT cross-diffusion equation, Math. Models Methods Appl. Sci., 11 (2018), 2191–2210.Ni W.-M. Ni,The Mathematics of Diffusion, CBMS-NSF Regional Conference Series in Applied Mathematics 82, SIAM, Philadelphia, 2011. NWX W.-M. Ni, Y. Wu, Q. Xu, The existence and stability of nontrivial steady states for S-K-T competition model with cross diffusion, Discrete Contin. Dyn. Syst., 34 (2014), 5271–5298. OL A. Okubo, L. A. Levin,Diffusion and Ecological Problems:Modern Perspective,Second edition. Interdisciplinary Applied Mathematics,14, Springer-Verlag, New York, 2001. Ru W. H. Ruan, Positive steady-state solutions of a competing reaction- diffusion system with large cross-diffusion coefficients, J. Math. Anal. Appl., 197 (1996),558–578. SKT N. Shigesada, K. Kawasaki, E. Teramoto, Spatial segregation of interacting species, J. Theor. Biol., 79 (1979), 83–99.Ue H. Uecker, Hopf bifurcation and time periodic orbits with pde2path- algorithms and applications,Commun. Comput. Phys., 25 (2019), 812–852.UWR U. Uecker, D. Watzel, J. M. Rademacher, pde2path -A Matlab package for continuation and bifurcationin 2D elliptic systems,Numer. Math. Theory Methods Appl., 7 (2014), 58–106.WWX L. Wang, Y. Wu, Q. Xu, Instability of spiky steady states for S-K-T biological competing model with cross-diffusion, Nonlinear Anal., 159 (2017), 424–457.Wu Y. Wu, The instability of spiky steady states for a competing species model with cross-diffusion, J. Differ. Equ., 213 (2005), 289–340.WX Y. Wu, Q. Xu, The existence and structure of large spiky steady states for S-K-T competition systems with cross diffusion, Discrete Contin. Dyn. Syst., 29 (2011), 367–385.Yag A. Yagi, Abstract Parabolic Evolution Equations and their Applications, Springer-Verlag, Berlin, 2010.ZWH. Zhou, Y.-X. Wang, Steady-state problem of an S-K-T competition model with spatiallydegenerate coefficients, Internat. J. Bifur. Chaos Appl. Sci. Engrg.31 (11), 2150165 (2021).
{ "authors": [ "Kousuke Kuto", "Homare Sato" ], "categories": [ "math.AP", "92C15, 35B32, 35B35, 92D25, 35B20" ], "primary_category": "math.AP", "published": "20231226162153", "title": "Morse index of steady-states to the SKT model with Dirichlet boundary conditions" }
Smuche: Scalar-multiplicative Caching of Homomorphic Encryption

Gravitational wave (GW) — Addressing the challenge of balancing security and efficiency when deploying machine learning systems in untrusted environments, such as federated learning, remains a critical concern. A promising strategy to tackle this issue involves optimizing the performance of fully homomorphic encryption (HE). Recent research highlights the efficacy of advanced caching techniques, such as Rache, in significantly enhancing the performance of HE schemes without compromising security. However, Rache is constrained by an inherent limitation: its performance overhead is heavily influenced by the characteristics of plaintext models, specifically exhibiting a caching time complexity of 𝒪(N), where N represents the number of cached pivots based on specific radixes. This caching overhead becomes impractical for handling large-scale data. In this study, we introduce a novel constant-time caching technique that is independent of any parameters. The core concept involves applying scalar multiplication to a single cached ciphertext, followed by the introduction of completely fresh, constant-time randomness. Leveraging the inherent characteristics of constant-time construction, we coin the term "Smuche" for this caching technique, which stands for Scalar-multiplicative Caching of Homomorphic Encryption. We implement Smuche from scratch and conduct comparative evaluations against two baseline schemes, Rache and CKKS. Our experimental results underscore the effectiveness of Smuche in addressing the identified limitations and optimizing the performance of homomorphic encryption in practical scenarios.

§ INTRODUCTION

Background: Machine learning systems deployed in untrustworthy environments must implement security measures with acceptable performance overhead. This is evident in the context of Federated Learning (FL) systems <cit.>, where participating clients avoid sharing their private data with the parameter server. Even the models trained on private data should remain confidential, as their exposure to the parameter server could potentially lead to the leakage of the training dataset <cit.>. A direct solution to this concern is the adoption of (fully) homomorphic encryption (HE), encrypting local models before uploading them to the parameter server <cit.>. However, HE cryptosystems address security concerns at the expense of introducing significant computational overhead, prompting recent efforts to optimize HE performance through advanced caching techniques <cit.>.

Motivation: While the state-of-the-art optimization in homomorphic encryption (HE), specifically Rache <cit.>, demonstrates significant improvements over vanilla HE deployments, a notable limitation lies in the scalability of caching ciphertexts. More precisely, the computational overhead scales on the order of 𝒪(N), where N represents the number of cached ciphertexts used during encryption. This linear growth poses scalability challenges, especially when N becomes sufficiently large. Indeed, our experiments (see Table <ref>) indicate that caching an excessive number of ciphertext messages can lead to an overall performance degradation, potentially worse than that of the vanilla HE scheme.
This concrete overhead arises from both the construction of ciphertexts and the randomization procedure, both of which rely on homomorphic addition operations among ciphertexts. Although a homomorphic addition is approximately 10–100× less expensive than a homomorphic encryption operation, an excessive number of such additions can defeat the purpose of additive caching.

§.§ Proposed Approach

This research aligns with the ongoing exploration of enhancing the performance of homomorphic encryption (HE) schemes without compromising their security guarantees. Our objective is to boost the efficiency of state-of-the-art HE schemes while upholding their security. Specifically, we aim to eliminate the linear costs associated with two components: (i) the linear construction of a new ciphertext based on cached pivots and (ii) the linear randomization of the newly constructed ciphertext.

We demonstrate that both components can be implemented in constant time. Achieving constant-time caching requires delving into the kernel of HE schemes, rather than treating existing HE schemes (e.g., CKKS <cit.>) as black boxes, as is done in Rache <cit.>. The fundamental idea behind constant-time construction is to leverage the scalar-multiplicative property of HE schemes. We briefly outline the high-level intuition behind both constant-time procedures:

* Constant-time construction: Essentially, any additive HE scheme can support scalar multiplication between a ciphertext and an integer, as this multiplication can be trivially expanded into a series of pairwise additions. The challenge lies in whether scalar multiplication can outperform a series of additions. As detailed in later sections, this depends on the number of additions needed to construct the ciphertext message.

* Constant-time randomization: Instead of manipulating the cached ciphertext as a black box, we directly modify the randomness part of the ciphertext, which comprises two polynomials. This involves a careful decomposition of the polynomials and a discrete parameterization of the underlying HE scheme. The success of this strategy also depends on whether kernel-level randomization is more cost-effective than the black-box solution found in the state-of-the-art.

§.§ Contributions

This paper makes the following contributions:

* We introduce Smuche, a constant-time caching technique designed to facilitate high-performance homomorphic encryption of large-scale data. Smuche strikes a balance between performance and security in secure machine learning and is the first caching algorithm of its kind.

* We establish the provable security of Smuche. The proof follows the classical reduction framework: breaking Smuche is reduced to distinguishing between a ciphertext encrypted by the underlying HE scheme and a random string, which is assumed to be computationally infeasible.

* We implement Smuche from the ground up and conduct a comparative analysis against the baseline schemes Rache and CKKS. Experimental results affirm the effectiveness of Smuche, showcasing constant-time caching of encrypted ciphertexts with significantly lower overhead than the state-of-the-art.

§ PRELIMINARIES AND RELATED WORK

§.§ Homomorphic Encryption

The notion of homomorphism originates from the study of algebraic groups <cit.>, which are algebraic structures defined over nonempty sets. Formally, a group G over a set S is represented as a tuple G = (S, ⊕), where ⊕ is a binary operator.
This operator adheres to four fundamental axioms, expressed through first-order logical formulas: (i) closure: for all g, h ∈ S, g ⊕ h ∈ S; (ii) identity: there exists a unique element u ∈ S such that for all g ∈ S, g ⊕ u = g and u ⊕ g = g; (iii) inverse: for every g ∈ S, there exists an element h ∈ S such that g ⊕ h = u and h ⊕ g = u; this element h is often denoted by -g; (iv) associativity: for all g, h, j ∈ S, (g ⊕ h) ⊕ j = g ⊕ (h ⊕ j). Given another group (H, ⊗) and a function φ: G → H that satisfies φ(g_1) ⊗ φ(g_2) = φ(g_1 ⊕ g_2) for all g_1, g_2 ∈ G, we call φ a homomorphism from G to H. This means that φ preserves the group operation when the elements of G are mapped to the corresponding elements of H.

Homomorphic encryption (HE) is a type of encryption in which certain operations on plaintexts can be performed directly on the ciphertexts. For example, if an HE scheme he(·) is additive, then the + operation over plaintexts translates into a homomorphic addition ⊕ over the ciphertexts. Formally, if a and b are plaintexts, the relationship is defined as:

dec(he(a) ⊕ he(b)) = a + b,

where dec denotes the decryption algorithm. As a concrete example, setting he(x) = 2^x (temporarily disregarding the security of he(·)) demonstrates that he(a+b) = 2^a+b = 2^a × 2^b = he(a) × he(b), indicating that ⊕ corresponds to arithmetic multiplication ×.

An HE scheme enabling additive operations is termed additive. A popular additive HE scheme is Paillier <cit.>, an asymmetric scheme using public and private keys for encryption and decryption, respectively. An HE scheme that supports multiplication is said to be multiplicative. Symmetria <cit.> is a recent scheme proposed in the database community, which is multiplicative under a scheme distinct from the one used for addition. Other well-known multiplicative HE schemes include RSA <cit.> and ElGamal <cit.>. Analogously, a multiplicative HE scheme guarantees the equality

dec(he(a) ⊗ he(b)) = a × b,

where ⊗ denotes homomorphic multiplication over the ciphertexts.

An HE scheme that supports both addition and multiplication is classified as a fully HE (FHE) scheme. Importantly, the addition and multiplication must be supported homomorphically under the same scheme he(·):

dec(he(a) ⊕ he(b)) = a + b,
dec(he(a) ⊗ he(b)) = a × b.

This requirement should not be confused with supporting addition and multiplication only under two distinct sub-schemes, as in Symmetria <cit.> and NTRU <cit.>.
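To make these homomorphic equalities concrete, the following toy sketch replays the he(x) = 2^x example from above in Python. It is deliberately insecure (deterministic and without randomness) and only illustrates how the plaintext operation + maps onto the ciphertext operation ×.

```python
# Toy illustration of dec(he(a) "+" he(b)) = a + b with he(x) = 2^x,
# where the ciphertext-space operation is integer multiplication.
def he(x: int) -> int:
    return 2 ** x

a, b = 3, 5
assert he(a) * he(b) == he(a + b)   # 8 * 32 == 256 == 2^8
```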
Constructing FHE schemes remained a formidable challenge until Gentry <cit.> presented a feasible approach using lattice ideals. Although lattices had been extensively studied in cryptography, their combination with ring ideals was less explored; nonetheless, Gentry showed that it is possible to construct an FHE scheme, although the cost of maintaining the multiplicative homomorphism is prohibitive even with the so-called bootstrapping optimization, which essentially applies decryption for every single multiplication between ciphertexts. Subsequent generations of FHE schemes, e.g., <cit.>, have seen substantial improvements in encryption efficiency, partially due to the removal of ideal lattices; instead, the newer FHE schemes are based on learning with errors (LWE) <cit.> or its variant, ring learning with errors (RLWE), which have been proven to be as secure as hard lattice problems (e.g., through quantum or classical reductions).

The positive development is that schemes built upon LWE or RLWE are significantly more efficient than the first-generation schemes. However, a significant performance gap remains between current FHE cryptosystems and the desired efficiency levels. Open-source libraries of FHE schemes, such as IBM HElib <cit.> and Microsoft SEAL <cit.>, are available and represent the state-of-the-art in the field.

§.§ Threat Model and Provable Security

Informally, a threat model portrays the capabilities of the attacker. One widely used model is the chosen-plaintext attack (CPA), where the adversary is assumed to have the ability to obtain the ciphertext corresponding to any plaintext of its choosing. In practice, it is assumed that the adversary has limitations on the number of plaintext-ciphertext pairs it can obtain. Specifically, the adversary's access should be restricted to only a polynomial number of such pairs, and its computational resources are assumed to be bounded by polynomial time. This assumption reflects the notion that adversaries should not possess unlimited computational power or the ability to amass an impractical volume of information. By making these assumptions, security analysis can be conducted under realistic constraints, allowing for a comprehensive evaluation of the encryption scheme's resilience against chosen-plaintext attacks. The goal is to design an encryption scheme that maintains its security even when confronted with an adversary who can strategically choose plaintexts and acquire their corresponding ciphertexts within the given limitations.

In the ideal scenario, even if the adversary 𝒜 successfully obtains additional information, 𝒜 should not be able to make a distinguishably better decision about the plaintexts than would be achievable through random guessing. This principle is encapsulated in the concept of a negligible function. A function μ(·) is negligible if, for every polynomial poly(n), the inequality μ(n) < 1/poly(n) holds for all sufficiently large n. In other words, μ(·) decreases faster than the reciprocal of any polynomial as n grows. Negligible functions are crucial for quantifying the advantage an adversary could gain over random guessing, and they are a key component in the verification of security proofs within cryptographic systems.

The security of cryptographic schemes, especially homomorphic encryption, is often assessed through formal proofs based on established security notions. Indistinguishability under chosen-plaintext attack (IND-CPA) is a fundamental notion that assures the confidentiality of encrypted data even when adversaries have access to chosen plaintext-ciphertext pairs. The methodology for proving IND-CPA security incorporates reduction techniques, probability analysis, and the application of negligible functions.

The de facto approach to proving IND-CPA security is through reduction. The reduction template in IND-CPA proofs relates the security of a cryptographic scheme to the assumed infeasibility of a particular computational problem, commonly the challenge of breaking the underlying cryptographic primitive. In the context of homomorphic encryption, the reduction demonstrates that if an adversary were able to distinguish between two ciphertexts encrypted from distinct plaintexts, then the adversary could also solve the assumed computational problem, which is considered inherently difficult.
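As a schematic reference, one round of the IND-CPA experiment described above can be sketched as follows; the adversary interface and function names are illustrative assumptions, not the API of any particular library.

```python
import secrets

def ind_cpa_round(keygen, enc, adversary) -> bool:
    """One round of the IND-CPA game. The scheme is IND-CPA secure if no
    efficient adversary wins with probability better than 1/2 + negligible."""
    pk = keygen()
    m0, m1 = adversary.choose_messages(pk)      # chosen-plaintext phase
    b = secrets.randbits(1)                     # hidden challenge bit
    challenge = enc(pk, m0 if b == 0 else m1)   # challenge ciphertext
    return adversary.guess(pk, challenge) == b  # True iff adversary wins
```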
At the core of the proof template are negligible functions, defined by the property that they decrease faster than the reciprocal of any polynomial. In the context of security proofs, if the advantage of an adversary (the probability, beyond random guessing, that it can distinguish between ciphertexts) is bounded by a negligible function, then the scheme is considered IND-CPA secure. This notion encapsulates the idea that as the input size grows, the negligible function diminishes more rapidly than any polynomial increase.

§.§ Rache: Radix-based Additive Caching

Rache <cit.> optimizes the expensive operation of homomorphic encryption by calculating the desired ciphertext from existing (i.e., cached) ciphertexts. Rache comprises three stages: precomputing, construction, and randomization. We briefly review them in the following.

Precomputing. In this stage, the scheme selects a radix value r, calculates the integral exponentiations of r, also known as pivots, usually up to the maximal number of the plaintext space, and encrypts them into a cache. The most straightforward choice of r is 2, although other values are possible.[Rache picks a heuristic value of r = 6.]

Construction. The ciphertext of a new message m, denoted by enc(m), can be constructed from the pivots c_i:

c' def= enc(m) = ⊕_i=0^⌈log_r m ⌉ - 1 d_i ⊙ c_i, where d_i = ⌊ m / r^i ⌋ mod r,

such that m = ∑_i=0^⌈log_r m ⌉ - 1 d_i · r^i,

where c_i = enc(r^i) is the i-th pivot in the cache and ⊙ denotes the multiplication between an integer and a ciphertext.

Randomization. The final randomized ciphertext of m is calculated as

c def= c' ⊕_i=0^⌈log_r m ⌉ - 1 rnd_i ⊙ (c_i+1 ⊖ r ⊙ c_i),

where rnd_i denotes a random Boolean value uniformly sampled from {0, 1} and ⊖ denotes subtraction in the ciphertext space; note that each c_i+1 ⊖ r ⊙ c_i is an encryption of zero, so the randomization does not change the underlying plaintext.
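A minimal sketch of Rache's construction stage may help fix ideas. Here hom_add and scal_mul stand in for the underlying scheme's ⊕ and ⊙ (illustrative names), and the output still requires the randomization step above.

```python
def rache_construct(m, pivots, r, hom_add, scal_mul):
    """Assemble enc(m) from cached pivots pivots[i] = enc(r^i) via the
    radix-r digits of m; cost grows with the number of digits, O(log m)."""
    c, i = None, 0
    while m > 0:
        d = m % r                       # i-th radix-r digit of m
        if d > 0:
            term = scal_mul(d, pivots[i])
            c = term if c is None else hom_add(c, term)
        m //= r
        i += 1
    return c  # deterministic ciphertext; must still be randomized
```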
The overall number of cached ciphertexts depends not only on the highest precision but also on the radix r, which serves as the step size in the precomputing. The precomputing procedure is usually carried out at an offline stage; therefore, its cost is usually not a concern in practice. Nonetheless, even the asymptotic complexity of the precomputing algorithm is fairly efficient: 𝒪(log Δ), which is logarithmic in the precision of the plaintexts and does not depend on the scales of the plaintexts. In other words, this cost is constant with respect to the plaintext size. This is in contrast to the offline overhead of Rache, which is logarithmic in the maximal value of the plaintext space.

§.§ Construction The construction procedure is straightforward: a pivot is selected according to the precision of the given plaintext, after which the ciphertext is constructed through a scalar multiplication between the pivot and the scaled value of the plaintext. We stress that the scalar multiplication can be implemented as a series of ciphertext additions and does not require a homomorphic multiplication directly over a pair of ciphertexts. The construction procedure is described in Algorithm <ref>. Lines 1–4 search for the largest pivot in the cache such that the scalar multiplication in Line 5 can construct the desired ciphertext. The reason for picking the largest pivot is to reduce the scalar coefficient of the multiplicative construction so that the noise level is the lowest possible. The complexity of the construction algorithm is similar to that of the precomputing. It should be noted that |δ| ≤ |Δ|. The loop on Lines 2–4 takes up to ⌈log_r δ⌉ steps, which is asymptotically 𝒪(log Δ). Similarly to the precomputing algorithm, this cost is independent of the plaintext size and asymptotically constant.

§.§ Randomization To fully understand the proposed randomization in Smuche, it is necessary to clarify the ciphertext structure of modern HE schemes, such as BFV <cit.>, BGV <cit.>, and CKKS <cit.>. Such low-level detail is unnecessary for Rache because the caching algorithms in Rache deal only with the public primitives (i.e., the programming interface) exposed by the underlying HE schemes. On the other hand, we will not be able to cover all the details of the ciphertext format due to the limited space; we refer interested readers to the original papers of those HE schemes. Instead, in what follows we will provide a high-level discussion. A ciphertext message in HE schemes consists of two polynomials in the format enc(m) = c = (c_1, c_2), where c_1, c_2 ∈ ℤ_q[X]/(X^N + 1), which denotes the set of polynomials with integer coefficients modulo q and degrees modulo N.[More formally, it is the quotient of the polynomial ring by the irreducible polynomial X^N + 1.] The polynomial c_2 is an auxiliary piece of information that helps decrypt the message masked in the polynomial c_1. Both c_1 and c_2 are constructed with the public key PK, which is also defined as a pair of polynomials PK = (PK_1, PK_2). To understand the full picture, we first define the private key SK as follows: SK ← R_2, where R_2 denotes a polynomial of degree N with coefficients in {-1, 0, +1} and ← denotes independent and uniform sampling from the set on the right-hand side. Informally, we pick our private key as a "small" polynomial in the sense that the ||·||_∞ norm of its coordinate-wise coefficients is less than 2.
The public key PK is then defined as follows: PK_1 =_q -a · SK + e, PK_2 =_q a, a ← R_q, where =_q denotes the modulo-q operation for the assignment, R_q denotes polynomials whose coefficients are taken modulo q, and e is a polynomial whose coefficients are sampled from a discrete Gaussian distribution with σ = 3.2. Informally, e is a "small" polynomial while a is a "large" polynomial in the ciphertext space. We are now ready to encrypt a plaintext m.[There is usually an encoding procedure to convert a given scalar number (bit string) into a polynomial, but Smuche does not touch on that part, so we skip that discussion.] Let u ← R_2 and let e_1, e_2 be sampled just as e above (that is, all three of these polynomials are "small"); the ciphertext c = (c_1, c_2) is defined as c_1 =_q PK_1 · u + e_1 + m, c_2 =_q PK_2 · u + e_2. The decryption algorithm dec is defined as dec(c) = (c_1 + c_2 · SK)_mod q. We skip the correctness proof of the decryption algorithm, which can be verified through elementary algebra by ignoring the "small" polynomial terms. The addition in the ciphertext space, i.e., ⊕, is defined as the element-wise addition of the two polynomials in the ciphertext. That is, the addition of two ciphertexts c^1 and c^2 is: c^1 ⊕ c^2 = ( (c^1_1 + c^2_1)_mod q , (c^1_2 + c^2_2)_mod q), where (·)_mod q denotes the modulo q of the value in the parentheses. From Eq. (<ref>), it is not hard to see that we can calculate the scalar multiplication of a ciphertext c as z ⊙ c = ( (z · c_1)_mod q , (z · c_2)_mod q), where z ∈ ℤ. Indeed, Line 5 of Alg. <ref> does exactly that. Now the question is how we can make Eq. (<ref>) randomized[A randomized ciphertext is a hard requirement for any encryption scheme to be practically secure, i.e., semantically secure or IND-CPA (more on this in the next section).], and that is what we will do in the remainder of this section. Let ξ ← R_2. We calculate the following random tuple rnd = ( (PK_1 · ξ)_mod q, (PK_2 · ξ)_mod q). We claim that z ⊙ c ⊕ rnd is a valid encryption of z · m. That is, we will prove that the following equation holds: dec(z ⊙ c ⊕ rnd) =_q z · m. From the definitions above, we expand the left-hand side as follows: dec(z ⊙ c ⊕ rnd) =_q dec( (z · c_1 + PK_1 · ξ)_mod q , (z · c_2 + PK_2 · ξ)_mod q ) =_q dec( (PK_1 · (z · u + ξ) + z · (e_1 + m))_mod q , (PK_2 · (z · u + ξ) + z · e_2)_mod q ) =_q dec( (PK_1 · u' + e'_1 + z · m)_mod q , (PK_2 · u' + e'_2)_mod q ), where u' = z · u + ξ and e'_1|2 = z · e_1|2. It should be noted that both u' and e'_1|2 are still "small" polynomials relative to z · m because u, ξ, and e_1|2 are all "small" polynomials relative to m. Indeed, if we apply Eq. (<ref>), the left-hand side expands as follows: dec(z ⊙ c ⊕ rnd) =_q PK_1 · u' + e'_1 + z · m + SK · (PK_2 · u' + e'_2) =_q -a · SK · u' + e · u' + e'_1 + z · m + a · SK · u' + SK · e'_2 =_q z · m + (e · u' + e'_1 + SK · e'_2) =_q z · m + ϵ =_q z · m, where ϵ = e · u' + e'_1 + SK · e'_2 is a negligible term relative to z · m. Back to our Smuche scheme, we simply apply Eq. (<ref>) to the output of Alg. <ref> to randomize the ciphertext. That is, we output c = c' ⊕ rnd as the final ciphertext of m. This randomization procedure takes a constant number of operations over the polynomials and is independent of the plaintext size. We term the entire encryption algorithm SmucheEnc, following the naming convention of the other state-of-the-art schemes RacheEnc and CkksEnc.

§ SECURITY ANALYSIS The fact that SmucheEnc is correct and randomized does not by itself mean that it is secure.
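Before developing that argument, the algebra above can be sanity-checked numerically. The sketch below is our own toy illustration, not the paper's implementation: it instantiates the scheme of the equations above over ℤ_q[X]/(X^N + 1) with deliberately insecure parameters (N = 8, q = 2^16), additionally scales the message by Δ = 256 so that rounding at decryption removes the noise term ϵ exactly, and verifies that dec(z ⊙ c ⊕ rnd) recovers z · m.

    import numpy as np

    N, q, Delta = 8, 2**16, 256      # toy parameters; far too small to be secure
    rng = np.random.default_rng(7)

    def polymul(a, b):               # multiplication in Z_q[X]/(X^N + 1)
        c = np.convolve(a, b)        # ordinary product, degree up to 2N-2
        r = c[:N].copy()
        r[:len(c) - N] -= c[N:]      # negacyclic wrap: X^(N+k) = -X^k
        return r % q

    def small():                     # "small" ternary polynomial (stands in for R_2 and Gaussian e)
        return rng.integers(-1, 2, N)

    SK = small()                                   # SK <- R_2
    a = rng.integers(0, q, N)                      # a <- R_q
    PK1, PK2 = (-polymul(a, SK) + small()) % q, a  # PK = (-a*SK + e, a)

    def enc(m):                      # m carried (scaled by Delta) in the constant coefficient
        u, e1, e2 = small(), small(), small()
        msg = np.zeros(N, dtype=np.int64); msg[0] = m * Delta
        return ((polymul(PK1, u) + e1 + msg) % q, (polymul(PK2, u) + e2) % q)

    def dec(c):                      # dec(c) = c_1 + c_2 * SK mod q, then round off the noise
        v = (c[0] + polymul(c[1], SK)) % q
        v = np.where(v > q // 2, v - q, v)         # center-lift to (-q/2, q/2]
        return round(int(v[0]) / Delta)

    def randomize(c):                # rnd = (PK1 * xi, PK2 * xi) for a fresh xi <- R_2
        xi = small()
        return ((c[0] + polymul(PK1, xi)) % q, (c[1] + polymul(PK2, xi)) % q)

    m, z = 7, 5
    c1, c2 = enc(m)
    print(dec(randomize(((z * c1) % q, (z * c2) % q))))   # prints z*m = 35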
To demonstrate that Smuche is (semantically) secure, i.e., that the ciphertext is indistinguishable from a random string even if an adversary obtains a polynomial number of plaintext-ciphertext pairs by arbitrarily choosing the plaintexts (IND-CPA), we need to reduce the breaking of SmucheEnc to an efficient algorithm for an underlying mathematical problem that is believed to be computationally infeasible. In the context of modern HE schemes, whose security is all based on the learning-with-errors (LWE) problem, such a reduction would entail a probabilistic polynomial-time (PPT) algorithm that solves the LWE problem. However, in this work we will assume the underlying HE enc is IND-CPA and we will attempt to reduce breaking SmucheEnc to breaking enc. Formally, we will prove the following theorem. Smuche is IND-CPA secure as long as the underlying HE encryption is IND-CPA secure. Suppose SmucheEnc is not IND-CPA secure but the underlying HE's enc is IND-CPA secure. Then there exists an algorithm 𝒜 such that: * 𝒜 chooses two distinct plaintext messages m^0 and m^1;[The literature usually uses subscripts to denote two distinct messages; here we use superscripts for this purpose because subscripts are reserved for the multiple polynomials inside a single ciphertext.] * 𝒜 can make a polynomial number of queries for the ciphertext of an arbitrary plaintext message (including m^0 and m^1); * Smuche randomly picks i ∈ {0,1} and returns SmucheEnc(m^i) to 𝒜; * 𝒜 can make the correct decision about i ∈ {0, 1} with non-negligible advantage such that dec(c) = m^i. On the other hand, we assume that no such algorithm exists for the underlying HE's encryption algorithm enc. Our goal is to derive a contradiction to this assumption. If 𝒜 can make the correct guess of i with a non-negligible advantage, let 1/poly(n) denote this advantage, where n denotes the security parameter. There are two possible cases regarding Eq. (<ref>): * If rnd is repeated between the queries of 𝒜 and SmucheEnc, then 𝒜 can immediately guess the correct i for z · enc(m^i). Because z is a constant, this entails that 𝒜 can efficiently distinguish enc(m^i) from a random string. The probability of this case is poly(n)/2^n. Note that in this case, 𝒜 can make the distinction with certainty. * If rnd is not repeated in the polynomial number of queries as in SmucheEnc, then 𝒜 can still make the correct guess with an advantage of γ(n) = 1/poly(n) - poly(n)/2^n, which is not a negligible function in n. In other words, 𝒜 can make a correct guess of i with non-negligible advantage γ(n) such that c = SmucheEnc(m^i) without ever seeing the ciphertext before. Note that 𝒜 would succeed no matter what value of rnd is applied to z · c, including zeros. Let ℬ be such an algorithm where rnd = (0, 0). As in the first case, z is a constant, meaning that ℬ can efficiently guess i ∈ {0, 1} such that c = enc(m^i), with advantage γ(n). Also note that ℬ runs 𝒜 with only constant overhead, meaning that ℬ is also a PPT algorithm. However, we assumed that the underlying HE enc is IND-CPA secure; therefore, such an algorithm ℬ cannot exist, leading to a contradiction.

§ EVALUATION

§.§ System Implementation We have implemented the proposed Smuche caching scheme from scratch, along with baseline prototypes of Rache <cit.> and CKKS <cit.>. Our implementation comprises about 3,000 lines of Java code. The project is compiled and executed with OpenJDK 21.0.1.
We will publish the implementation as an open-source project: <https://github.com/hpdic/smuche>. The baseline HE scheme we choose is CKKS <cit.>, which is based on arithmetic circuits. There are a few important parameters we must discuss. * Slot. CKKS supports a more general set of plaintext messages than scalar floats: a vector of complex numbers. However, in our experiments, the workloads consist only of scalar floats. As a result, we set the number of slots in CKKS to 1. Inside the CKKS encoding procedure, however, there is a technical requirement to extend the original values with their complex conjugates. Therefore, although we set the slot count to 1, CKKS encodes the single slot into two placeholders. We will not go into the details of this and refer interested readers to the original CKKS paper <cit.>. * Level. CKKS is one of the so-called leveled HE schemes, meaning that there is an upper bound on the number of multiplications between a pair of ciphertext messages. The reason why ciphertext multiplication ⊗ matters is that its inherent noise increase is quadratic. However, this is not a concern for scalar multiplication ⊙, which is essentially a series of ciphertext additions ⊕. In our implementation, the level is set to 10. In production systems, this value is usually set much larger so that the ciphertexts remain decryptable. An alternative approach is bootstrapping, which resets the noise level; however, bootstrapping is rarely used in practice due to its overwhelming computational overhead. * Precision. This metric refers to the number of digits considered accurate. Smuche supports both integers and float numbers, so there are two precision levels, for the integer and fractional parts, respectively. We set both precision levels to 10 in our experiments. It should be clear that the proposed Smuche is not bound to a specific HE scheme, although we choose to implement it on top of CKKS. The primary reason for our decision is that CKKS is the only scheme that supports efficient homomorphic operations on float numbers. Smuche can be built on top of all HE schemes that are based on arithmetic circuits: recall that Smuche only manipulates the pair of polynomials in the ciphertext message, in addition to using the homomorphic addition operation of the underlying HE scheme.

§.§ Experimental Setup For our system prototype deployment, we utilize the Chameleon Cloud infrastructure <cit.>. Our assigned node offers the following specifications. CPUs: two Intel Gold 6240R CPUs running at 2.40 GHz, providing a total of 96 threads. Storage: a 480 GB Micron SATA SSD. RAM: 192 GB of memory. The operating system image is Ubuntu 22.04 LTS. In addition to microbenchmarks where we simply encrypt random numbers, we evaluate the proposed Smuche scheme on three real-world data sets. The first real-world application is the U.S. national Covid-19 statistics from April 2020 to March 2021 <cit.>. The data set has 341 days of 16 metrics, such as death increase, positive increase, and hospitalized increase. The second real-world application is the history of Bitcoin trade volume <cit.> since Bitcoin was first publicly exchanged in February 2013.
The data consists of the accumulated Bitcoin exchange volume on a 3-day basis from February 2013 to January 2022, totaling 1,086 floating-point numbers. The third real-world application is the human genome reference 38 <cit.>, commonly known as hg38, which includes 34,424 rows of singular attributes, e.g., transcription positions, coding regions, and number of exons, last updated in March 2020. Some important parameters we use are as follows. The radix value r is set to 2 in our experiments. The number of pivots, i.e., the number of cached ciphertexts, is denoted by nPivot, which ranges between 5 and 30. Unless otherwise stated, 40 integers are encrypted by the different schemes. We did not turn on any parallel processing features such as OpenMP <cit.>. All experiments are repeated at least five times, and we report the average numbers.

§.§ Scalability Limitation of Rache Our first experiment demonstrates the scalability issues with Rache <cit.>, which is important as it is the motivation of this paper. In particular, we compare the computational cost of Rache and CKKS when a varying number of pivots is employed. Our Java implementation of both CKKS and Rache is only a proof of concept and is far from optimal. Therefore, the raw performance is not comparable with popular frameworks such as HElib <cit.> and SEAL <cit.>. However, since all the baseline schemes and Smuche are built upon the same code base, the relative comparison among them remains valid. Table <ref> shows that when the number of pivots exceeds 32, the overhead of reconstructing the ciphertext outweighs the cost of the vanilla HE scheme, which makes it meaningless to apply Rache. The vanilla CKKS takes 1.74 ms to encrypt 40 numbers. It should be noted, however, that we did not turn on OpenMP in our implementation of Rache; therefore, in practice, the real threshold before Rache becomes ineffective would be much larger than 32. The point is that the RacheEnc time is proportional to the number of cached ciphertexts nPivot, and Rache is not applicable to a large plaintext space where many cached ciphertexts are expected.

§.§ Computational Overhead of Smuche This experiment investigates the scalability of Smuche. Because Smuche only touches one cached ciphertext, there is no point in scaling the number of pivots (as we did for Rache). Rather, we are more interested in whether the computational overhead is independent of the number of plaintext messages. This is called weak scaling in the literature; we scale the number of plaintext messages from 40 to 120. Table <ref> reports the encryption time of Smuche when different numbers of plaintext messages are encrypted. We note that the overall encryption time is linearly proportional to the number of messages and that the unit time to encrypt a single message remains constant, i.e., 0.02 ms. This result confirms the claimed property of Smuche: the per-message encryption performance is independent of the caching parameters, such as the number of cached pivots or the number of messages.

§.§ Real-World Data Sets We compare the end-to-end encryption performance of three HE schemes (CKKS, Rache, Smuche) on three real-world data sets (Covid-19, Bitcoin, Human Gene #38). We set the number of pivots to nPivot = 32 for Bitcoin and to a smaller nPivot = 16 for Covid-19 and Human Gene #38, because the trade volume in the Bitcoin data set is very large, i.e., up to one trillion.
It should be noted that such a large nPivot will likely degrade Rache to even lower performance than the vanilla CKKS. Table <ref> reports the performance of those HE schemes on our real-world data sets. As expected, Rache does exhibit some performance degradation due to the large number of pivots on the Bitcoin data set. However, Smuche shows a clear advantage over both CKKS and Rache on all three data sets. The speedup of Smuche ranges between 1.44× (Human Gene #38) and 2.68× (Covid-19).

§ CONCLUSION In conclusion, this paper presents Smuche, a constant-time caching technique designed to bolster the efficiency of homomorphic encryption (HE) in secure machine learning over large-scale data. The provable security of Smuche is established through a reduction to the IND-CPA security of the underlying HE scheme. Comparative experiments with Rache and CKKS affirm the effectiveness of Smuche, showcasing its ability to achieve constant-time caching with markedly reduced overhead. The findings emphasize Smuche's practical utility, contributing insights into the ongoing challenge of balancing security and performance in the deployment of machine learning systems.
http://arxiv.org/abs/2312.16352v1
{ "authors": [ "Dongfang Zhao" ], "categories": [ "cs.CR", "cs.LG", "cs.PF" ], "primary_category": "cs.CR", "published": "20231226231125", "title": "Smuche: Scalar-Multiplicative Caching in Homomorphic Encryption" }
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

André Yuji Yasutomi^1,2, Hideyuki Ichiwara^1,2, Hiroshi Ito^1,2, Hiroki Mori^3 and Tetsuya Ogata^2,4

Manuscript received: October 1, 2022; Revised December 13, 2022; Accepted January 29, 2023. This paper was recommended for publication by Editor Hyungpil Moon upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by Hitachi, Ltd. ^1André Yuji Yasutomi, Hideyuki Ichiwara and Hiroshi Ito are with the R&D Group, Hitachi, Ltd, [email protected] ^2André Yuji Yasutomi, Hideyuki Ichiwara, Hiroshi Ito and Tetsuya Ogata are with the Graduate School of Fundamental Science and Engineering, Waseda University, Japan [email protected] ^3Hiroki Mori is with the Future Robotics Organization, Waseda University, Japan [email protected] ^4Tetsuya Ogata is with the Waseda Research Institute for Science and Engineering (WISE), Waseda University, Japan Digital Object Identifier (DOI): 10.1109/LRA.2023.3243526 January 14, 2024
================================================================================

Anchor-bolt insertion is a peg-in-hole task performed in the construction field for holes in concrete. Efforts have been made to automate this task, but the variable lighting and hole surface conditions, as well as the requirements for short setup and task execution times, make the automation challenging. In this study, we introduce a vision and proprioceptive data-driven robot control model for this task that is robust to challenging lighting and hole surface conditions. This model consists of a spatial attention point network (SAP) and a deep reinforcement learning (DRL) policy that are trained jointly end-to-end to control the robot.
The model is trained in an offline manner, with a sample-efficient framework designed to reduce training time and minimize the reality gap when transferring the model to the physical world. Through evaluations with an industrial robot performing the task in 12 unknown holes, starting from 16 different initial positions, and under three different lighting conditions (two with misleading shadows), we demonstrate that SAP can generate relevant attention points of the image even in challenging lighting conditions. We also show that the proposed model enables task execution with a higher success rate and shorter task completion time than various baselines. Due to the proposed model's high effectiveness even under severe lighting, initial position, and hole conditions, and the offline training framework's high sample efficiency and short training time, this approach can be easily applied to construction. Robotics and automation in construction; reinforcement learning; deep learning for visual perception.

§ INTRODUCTION Anchor-bolt insertion is a peg-in-hole task for holes in concrete that is extensively performed in the construction field <cit.>. Since it is repetitive, dirty, and dangerous <cit.>, efforts have been made to automate this task with industrial robots <cit.>. However, automating anchor-bolt insertion is particularly difficult in this field because the robot must be able to adapt to variable conditions, such as drastic changes in lighting caused by natural light or various light sources, and different hole surface conditions due to the brittle nature of concrete. Furthermore, the robot should be set up (e.g., trained) and be able to execute the task in a short time in order to reach a lead time close to that of humans. To this end, in our prior study <cit.>, we proposed a peg-in-hole strategy that used a deep neural network (DNN) trained via deep reinforcement learning (DRL) to generate discrete robot motions that adapted to different hole conditions. This method used force, torque, and robot displacement as input to avoid visual-detection problems caused by harsh lighting conditions. While the method was effective for finding holes, it required a long execution time. The use of visual feedback with image feature detection algorithms is a promising approach to reducing the task completion time because it enables direct estimation of the hole position to effectively guide the robot. However, traditional algorithms such as template matching <cit.> struggle to perform well in poorly lit conditions. Subsequent approaches <cit.> involving deep convolutional neural networks (CNNs) have been used successfully in challenging lighting conditions, but they require large amounts of data to learn to be invariant to irrelevant image features <cit.>. Another strategy to reduce task completion time is to extensively train DRL algorithms in simulation environments and transfer the trained model to the real world <cit.>. Some peg-in-hole approaches <cit.> have accomplished this transfer by reducing the reality gap using techniques such as domain randomization <cit.>. However, in the present setting, it is difficult to model the friction of concrete closely enough to reality for these approaches to succeed.
This is because the concrete's brittleness makes the wall surface inconsistent, with areas of high friction coefficient that prevent the peg from sliding or make it slide unsteadily, which results in excessively noisy forces and torques. Recent deep learning approaches integrate spatial attention, autoencoders, and robot motion generation networks <cit.> to learn to generate motion based on image features as well as task-dependent image features. These approaches have proven to be sample-efficient and to generalize well to different image conditions. Using these approaches as our basis, we propose a sample-efficient vision-based robot control model for performing peg-in-hole tasks in concrete holes (Fig. <ref>). The model is robust to challenging lighting conditions (with misleading shadows), is trained offline and in a short time, and contributes to improving the task success rate and completion time. Our main contributions are as follows: (1) modifications to a spatial attention point network (SAP), introduced in prior work <cit.>, so that it extracts important points (referred to as "attention points") of images even without a recurrent layer; (2) the introduction of a model that consists of SAP integrated with a DRL policy, and a method to jointly train them end-to-end so that SAP learns to generate task-specific attention points and the DRL policy learns to generate robot motion based on the attention points; (3) an evaluation of SAP and the proposed model on images with misleading shadows; and (4) an offline framework to train models in a short time, with minimal data extraction from the environment and a minimal reality gap when transferring the models to the actual field.

§.§ Related work §.§.§ Peg-in-hole with DRL DRL has been commonly applied to real robots to perform peg-in-hole tasks without visual input <cit.>. These "blind" approaches typically use force, torque, and position feedback to train DRL policies (e.g., deep Q-network (DQN) <cit.>, deep deterministic policy gradient (DDPG) <cit.>, and soft actor critic (SAC) <cit.>) to control the robot by outputting position/orientation subgoals, force/moment subgoals, or parameters for robot compliance control <cit.>. Although these approaches are effective, they rely on simulators and on sliding the peg around the hole to search for it, which is difficult for holes in concrete due to the concrete's brittleness and high friction coefficient. DRL has also been applied to peg-in-hole tasks with visual feedback <cit.>. However, these studies rely on cameras fixed apart from the robot end effector and require prior motion information from classical controllers, human demonstrations <cit.>, or extensive simulation <cit.> to train safe and sample-efficient visuomotor policies. In addition, they did not fully evaluate the robustness of their approaches under challenging lighting variations. In contrast, in this study we move the robot in a hopping manner to avoid friction against the concrete, and we avoid simulators by extracting small amounts of proprioceptive data and images (from a camera fixed to the end effector) from the field for offline training. We show that a model trained with this motion and training method can enable a robot to accomplish a peg-in-hole task in harsh lighting conditions.

§.§.§ Attention for motion generation Several models have been proposed that use visual attention to extract image features and provide highly relevant input for motion controllers <cit.>. Levine et al.
<cit.> proposed the use of CNNs and soft argmax to obtain position coordinates in an image for image feature extraction. They showed that jointly training the image feature extraction model and a reinforcement learning (RL) policy end-to-end results in higher success rates than training them separately. Their method enabled a robot to accomplish multiple tasks (e.g., coat hanging and cube fitting), but it required at least 15,000 images and 3 to 4 hours of training time, and it did not generalize to drastically different settings, especially when visual distractors occluded the target objects. Ichiwara et al. <cit.> proposed a method of jointly training a point-based attention mechanism, which predicts attention points based on image reconstruction, and a long short-term memory (LSTM) layer end-to-end to generate motion. Their method enabled a robot to perform simple pick-and-place tasks and a zipper-closing task with variable backgrounds, light brightness, and distractor objects. Their method requires small amounts of data, but it also requires expert human demonstrations (for imitation learning) and incurs a high computation cost due to the LSTM layer (up to 3 h of training on two GPUs with 16 GB of memory). Moreover, it was not tested with misleading shadows. In this study, we propose a visual attention-based model that has low computational cost and is trained with small amounts of data acquired automatically from the environment (no expert demonstrations). By evaluating this model in an experimental setup that replicates a construction site, including misleading shadows and variable target surface conditions, we demonstrate that it is able to effectively conduct a peg-in-hole task even under these conditions.

§ METHODS §.§ Hole search and peg insertion strategy The hole search method used is shown in Fig. <ref>. It consists of the following steps: (a) moving the peg to a hole position roughly estimated by a camera; (b) attempting to insert the peg by moving it toward the hole; (c) separating the peg from the wall and moving it to the next search position if the peg touches the wall; (d) attempting to insert the peg again; and repeating steps (c) and (d) until (e) the hole is found. By separating the peg from the wall, the unwanted effect of the high friction coefficient of the concrete surface can be avoided when moving the peg between hole search positions. In anchor-bolt insertion, after the hole is found, the peg (i.e., anchor bolt) is completely inserted into the wall by hammering, which overcomes the peg "wedging" and "jamming" problems considered in classic studies <cit.>.

§.§ Motion generation model §.§.§ Model inputs and outputs To control the robot to move toward the hole, we propose the model shown in Fig. <ref>. The model, namely the reinforcement learning (RL) agent, is given a set of six sequential observations o_t-5:t of the environment (time t-5 to t). This window of observations is used because: (1) fully connected layers are used to generate sequential robot motion instead of an LSTM as in <cit.>; (2) it reduces computational cost, as the LSTM's backpropagation through time is avoided; (3) it is effective for inferring temporal statistics <cit.>; and (4) it simplifies data storage and random batch sampling from the replay buffer used to train the agent. Each observation consists of an image i, forces and torques FT, robot displacement D_z, and the previous robot action a. FT includes the forces in the x, y, and z axes and the moments in the x and y axes.
D_z is used because it increases as the peg approaches the hole, since the brittle borders of holes in concrete often become chamfered. Previous actions are used because stochastic policies should also condition their action on the action history <cit.>, and the previous actions prevent the robot from looping back to the same place <cit.>. Given this window of observations, the model predicts the images of the next time î_t-4:t+1 and the action a(dir,ss)_t+1 to be taken by the robot (RL environment). The actions are peg movements in one of eight directions (dir = up, down, left, right, or diagonals) with one of three discrete step sizes (ss = 1, 2, or 3 mm), for a total of 24 different action options.

§.§.§ Model structure The proposed model consists of a spatial attention point network (SAP) integrated with a deep RL policy (Fig. <ref>). Both SAP and the RL policy are trained end-to-end; thus, we call it SAP-RL-E. Unlike prior work <cit.>, we refer to SAP as the network that extracts the attention points. Thus, we removed the LSTM layer of the prior model so that SAP could work alone (Fig. <ref>) or with the RL policy (Fig. <ref>). SAP consists of an attention point extraction block (hereinafter, attention block), an image feature extraction block, and an image prediction block. The attention block uses a CNN-based encoder block and soft argmax to extract the positional coordinates of maximum activation from an input image <cit.>, which are the attention points p. The image feature extraction block also extracts features f of the input image with an encoder block and inputs them into the image prediction block in a skip-connection manner. The attention points from the current time p_t-5:t that are output by the attention block are input to the RL policy together with FT, D_z, and a (hereinafter referred to as proprioceptive data). The RL policy predicts the attention points of the next time p̂_t-4:t+1 as well as the action to be taken by the robot a_t+1. The image prediction block uses the predicted attention points p̂_t-4:t+1 and the image features f_t-5:t to predict (i.e., reconstruct) the images of the next time î_t-4:t+1. This block does so by: (i) generating a heat map with an inverse argmax modified to be differentiable (heat map generator); (ii) multiplying the heat map element-wise with f_t-5:t; and (iii) passing the resulting image through a transposed CNN block (decoder) that generates î_t-4:t+1. The images are reconstructed because it was shown that the reconstruction improves model performance <cit.>, stabilizes the attention points <cit.>, and avoids manually labeling the points <cit.>. This end-to-end approach makes it possible to simultaneously train SAP and the RL policy, enabling SAP to predict attention points that are relevant to the task and the RL policy to predict RL actions that are guided by those attention points.

§.§.§ DRL algorithm The DRL algorithm used for the RL policy was the double deep Q-network (DDQN), as its effectiveness for this task was shown in our prior work <cit.>. The hyper-parameters used are shown in Table <ref>. Under a given policy π, Q-values are estimated by Q_π(s,a) = 𝔼_π [∑_t = 0^∞ γ^t r_t | s_t=s, a_t=a], where r_t is the reward at time t, γ ∈ [0,1] is the discount factor, a is the action, and s is the input state of the DDQN (p plus the proprioceptive data, of total size 3×23).
The DNN of the DDQN is trained to approximate the optimal Q-function Q^*(s,a) = max_π Q_π(s,a) by updating its weights θ with the Bellman equation given by θ_t+1 = θ_t + α∇ J, where ∇ is the gradient operator, α is the learning rate, and J is the loss function, which will be presented in Section <ref>. The policy used to choose the actions based on the Q-function was the Boltzmann exploration policy, in which the parameter τ was set to induce exploration at the beginning of the training and annealed every 50 episodes. A replay buffer was used to store images and proprioceptive data to provide random samples that stabilize DNN training. The reward for each step is r = -1 when the DRL episode does not end, and is given by the following when the episode ends: r = r_found hole if the hole is found; r = 0 if d ≤ d_0 and the hole is not found; and r = -r_found hole · (d - d_0)/(d_lim - d_0) if d > d_0 and the hole is not found. Here, r_found hole is the reward when the hole is found, d_0 is the initial distance from the hole, d is the final distance, and d_lim is the distance limit. The negative reward at the end of each step encourages the DNN to minimize the number of steps. The comparison of d_0 and d at the end of the episode induces the model to reduce the distance from the peg to the hole position. The DRL episode ends when (i) the peg reaches the distance limit d_lim from the hole center, (ii) the number of steps taken exceeds 100, or (iii) the peg is inserted into the hole, which is identified when F_z > F_z,th and D_z > D_z,th (see Table <ref>). Note that our model is not limited to DDQN and can be used with more complex algorithms such as DDPG and SAC; however, more training time and more elaborate training methods for continuous actions would be required <cit.>.

§.§.§ Loss function The loss function is defined as: J = w_i · J_i + J_Q + w_p · J_p, with J_i = 1/(H·W·C·L) ||î' - i'||_2^2, J_Q = 1/2 ( r + γ Q^-(s', argmax_a Q(s',a)) - Q(s,a) )^2, and J_p = 1/(K·L) ||p - p̂'||_2^2. Here, i is the image, p is the attention points, Q is the Q-value of the main DDQN, Q^- is the Q-value of the target DDQN (a copy of the main DDQN), and r is the reward of the RL algorithm <cit.>. Variables with a prime denote quantities at the next time step. J_i and J_Q are the mean squared error of the image prediction and the temporal-difference (TD) error, respectively. J_p is the mean squared error of the Euclidean distance between the attention points of the RL policy input p and output p̂', and w_i and w_p denote the loss weights of J_i and J_p, respectively. By adding J_p, the model can learn to predict attention points in positions near their predecessors. This reflects reality, since the positions of important points in the images do not change significantly in one time step. In this study, RGB images with a resolution of 64×64 were used (height (H) = 64, width (W) = 64, and channels (C) = 3). There are eight attention points, and because they are xy coordinates, K = 2×8. The window size L is six, and w_i and w_p are annealed from 0.0001 to 0.1 and from 1.5 to 1.0, respectively, between episodes 0 and 1000. Starting with a high w_i encourages the model to learn to encode images more optimally before learning to generate motion.

§.§ Offline training framework Because short lead time and robustness are the key requirements for applying the proposed model to construction, we propose a framework that enables offline training of the proposed model with minimal data extraction from the environment.
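In outline (the framework's steps are detailed in the following paragraphs), training replaces the physical robot with lookups into a pre-collected "hole map" of observations. The sketch below is our own illustration of that substitution, not the authors' code; all names are hypothetical, and the stored search-result labels follow those introduced below.

    # 8 directions x 3 step sizes (1, 2, 3 mm) -> the 24 discrete actions
    DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    ACTIONS = [(dx * s, dy * s) for s in (1, 2, 3) for dx, dy in DIRS]

    class HoleMapEnv:
        # Offline stand-in for the real robot: replays observations recorded
        # at discrete peg positions around a previously mapped hole.
        def __init__(self, hole_map):
            # hole_map: {(x_mm, y_mm): (image, FT, D_z, result)}, where result
            # is "hole found", "searching", or "out-of-bounds"
            self.map = hole_map
        def reset(self, start_xy):
            self.xy = start_xy
            return self.map[self.xy]
        def step(self, action_id):
            dx, dy = ACTIONS[action_id]
            self.xy = (self.xy[0] + dx, self.xy[1] + dy)
            obs = self.map.get(self.xy)
            done = obs is None or obs[3] in ("hole found", "out-of-bounds")
            return obs, done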
The offline training reduces training time, and the data extraction reduces the reality gap when using the robot in the real world. The framework is made up of three steps: (i) attempting anchor-bolt insertion at multiple points around a single hole (Fig. <ref>); (ii) storing the observations (images, proprioceptive data) and search results ("hole found", "searching", or "out-of-bounds") from each attempt, along with the position of the attempt, in order to create a hole map; and (iii) using the hole map data as input to train the model via DRL in place of the real robot. This hole map can be created because, in this study, the robot moves in discrete steps, and such movement limits the positions around the hole that the robot (i.e., peg) can touch. Thus, the robot movement enables hole mapping that is data-efficient. The mapping is performed by attempting insertions following a predetermined trajectory that is shown in the supplementary video. In contrast to our prior work <cit.>, the hole map of this study also stores images in addition to the proprioceptive data.

§.§ Models for comparison For comparison with the proposed model, we used a model that uses only proprioceptive data (P-RL), one that uses an autoencoder (AE) trained separately from the RL policy (AE-RL), and one that uses SAP also trained separately from the RL policy (SAP-RL). P-RL consists of only the RL policy and is trained in conditions similar to those of our prior work <cit.>. In AE-RL, the AE is trained first with the images of a hole map, and the trained AE is then used to generate image features to train the RL policy (see Fig. <ref>). In SAP-RL, SAP is trained in the same manner as the AE and is then used to generate attention points to train the RL policy. The CNNs of SAP and AE have the same structure as the CNNs of SAP-RL-E. The RL policies of both AE-RL and SAP-RL are also given proprioceptive data as input because this improves network performance during target occlusions, as shown in <cit.>. Note that, unlike in the proposed model, both the AE and SAP learn to generate image features or attention points of the current image and do not consider the robot motion in their predictions. All comparison models are trained with a window size of six, like the proposed model.

§ EXPERIMENTAL SETUP AND CONDITIONS §.§ Experimental setup The experimental setup used is shown in Fig. <ref>. The setup includes a Denso robot (VM-60B1) equipped with an air gripper (Airtac HFZ20), a force-torque (FT) sensor (DynPick® WEF-6A1000-30), and a camera with a wide-angle lens (Basler acA1920-40gc / Kowa Lens LM6HC F1.8 f6 mm 1"). The camera was set with the default parameters, but its region of interest was set to capture only the area around the peg tip (64×64 px). The robot was programmed to grasp a wedge-type anchor bolt and perform the peg-in-hole task in 13 holes in a concrete wall. The holes were opened with a drill bit of the same diameter as a 12-mm anchor bolt (about 0.2 mm clearance between peg and hole). During the hole search, the robot was moved following the flow presented in Subsection <ref>. The forces and moments were measured by the FT sensor, and D_z was calculated from the xyz position of the tip of the anchor bolt obtained through forward kinematics. Before each DRL training episode, the robot was placed in a home position perpendicular to the wall, the FT sensor was zeroed (to compensate for gravity), and the robot was then moved to an initial search position.
After the episode ended, the robot was retracted, and the aforementioned steps were repeated until the maximum number of episodes was reached. The three lighting conditions used are shown in Fig. <ref>. They replicate the variations of lighting that occur in the field, which is our definition of variable lighting conditions. The room lighting, generated by multiple sources of white light on the ceiling, replicated a well-illuminated field and provided the setup with images with almost no shadows. The left and bottom lighting conditions, generated by a flood light (DENSAN PDS-C01-100FL) set to emit white light from the left and bottom sides of the peg, respectively, provided images with misleading shadows. These two lighting conditions were chosen because they were considered the most challenging. They are possible if a light source external to the robot system is used. Right and top lights were not used because they did not cast shadows that were visible to the camera. The PC used for training was equipped with an Intel i7-8700 CPU, 16 GB of RAM, and an NVIDIA GeForce GTX 1070Ti GPU with 8 GB of memory.

§.§ Training conditions To train the models, only the map of hole 1 taken in regular room light was used, in accordance with the proposed training framework. The models were trained offline for 4000 episodes, starting from initial positions randomly chosen within the region labeled "searching" in Fig. <ref>. To avoid overfitting, Gaussian noise was added to the inputs.

§.§ Evaluation conditions To evaluate the proposed model, offline tests (with the hole maps) and online tests (with the real robot) were conducted with all models. Both tests were conducted by attempting to find 12 unknown holes (2 to 13) in all three lighting conditions, with the peg starting from the initial positions illustrated in Fig. <ref>. For the offline tests, hole maps were extracted from all 12 holes. The tests were repeated ten times per initial position (160 tests per hole per lighting condition, for a total of 5760 tests per model). The initial positions were set at an absolute distance of 3 and 4 mm away from the hole origin in the x, y, or both axes (hereinafter referred to as initial positions at 3 and 4 mm, respectively). This is because, although the maximum initial positioning error estimated for the current setup is 3 mm, distances larger than this error also have to be overcome by the models in field applications.

§ RESULTS §.§ Training results The offline training results are shown in Fig. <ref> in terms of the cumulative reward averaged over 100 episodes (hereinafter, cumulative reward). The figure also lists the number of episodes after which the cumulative reward was above 96 (the maximum reached by P-RL) and stable. Overall, all models nearly reached the maximum cumulative reward of 100 during training, demonstrating that the models learned to control the robot to find the holes effectively. However, P-RL took more episodes to converge (3571 episodes), which was expected since it is provided with less input information on the environment. The cumulative reward of AE-RL, SAP-RL, and SAP-RL-E (the proposed model) increased quickly, but AE-RL and SAP-RL converged more quickly than SAP-RL-E. This was also expected because, while AE-RL and SAP-RL are trained with image features from pre-trained image encoding networks, SAP-RL-E is trained directly from the image input, a condition that was previously observed to require a longer training time <cit.>.
Nevertheless, considering the high number of episodes P-RL takes to converge and the fact that SAP-RL-E trains the image encoding networks in parallel with the RL policy, it can be inferred that SAP-RL-E is sample-efficient, as its cumulative reward increased quickly and converged with only a few more episodes than SAP-RL. The time required to train the models is shown in Table <ref>. "RL train" is the RL policy training time until convergence (i.e., until a stable cumulative reward above 96 is reached). The online training time, estimated only for P-RL (online P-RL) from the time taken by the robot during online tests, was considerably longer than the time needed to train the other models with the offline training framework. This demonstrates that the framework reduces the training time considerably. The time needed to train SAP-RL-E is almost the same as that taken by P-RL and AE-RL, which suggests they have similar computational costs. This is because, unlike AE-RL and SAP-RL, SAP-RL-E does not require pre-training an image encoding network, and it takes fewer episodes to converge than P-RL (2004 versus 3571, as shown in Fig. <ref>). This smaller number of episodes compensates for the longer time SAP-RL-E takes to train each episode. As a result, the proposed model can readily substitute for P-RL at construction sites in terms of lead time. Note that the training time of the AE is shorter than that of SAP because the AE has fewer weights (2072 versus 3176 for SAP).

§.§ Offline evaluation results The offline test results are listed in Table <ref>. In the room light, all vision-based models outperformed the model that used only proprioceptive feedback (P-RL); they presented higher success rates (SRs) and shorter completion times (CTs). In particular, AE-RL presented a considerably short CT of less than three steps (2.5 s/step), which is quite close to the performance of humans. However, AE-RL presented a low SR in harsh lighting conditions, making it unsuitable for construction sites. The models that used SAP (SAP-RL and SAP-RL-E), however, demonstrated robustness to the challenging lighting conditions; namely, they could execute the peg-in-hole task with a high SR and short CT, outperforming P-RL in both metrics. The proposed model (SAP-RL-E) yielded the best results, with an average SR of 97.4% and a CT of 7.65 s, i.e., a 9.2% higher SR and a 4.43 s shorter CT than those of the baseline P-RL. These results suggest that SAP enables the extraction of relevant points of the image even under harsh lighting conditions, and that the proposed SAP-RL-E can substitute for P-RL in executing peg-in-hole tasks in the construction field.

§.§ Online evaluation results Table <ref> lists the online test results of the models. The tests with the real robot showed that, as in the offline tests, the proposed SAP-RL-E model was the most successful overall. AE-RL had the lowest CT, but its low SR in challenging lighting makes it ineffective in the field. The success rate of P-RL decreased the farther the peg's initial position was from the hole center (the average success rate for initial positions 4 mm from the hole was 79.4%). We hypothesize that this is because the forces and moments in the x and y axes (in the wall plane) become smaller the farther the peg is from the hole center.
This issue did not greatly affect the vision-based models, which suggests they can generalize more effectively to different initial positions. Although SAP-RL-E and SAP-RL performed similarly, their confidence intervals (CIs) did not overlap. Also, the p-values for SR and CT calculated via two-tailed z-tests were 0.026 and 0.002 (< 0.05), respectively. These results suggest that the differences between the results of the models are statistically significant; thus, SAP-RL-E is more effective than SAP-RL.

§.§ Image reconstruction and attention point analysis Fig. <ref> presents the image reconstructions and attention points generated by the vision-guided models. In the room light, all models were able to reconstruct the images adequately, and SAP and SAP-RL-E were able to generate attention points focused on the region between the peg and the hole (particularly the blue and orange points for SAP-RL-E). However, in the other lighting conditions, the AE failed to reconstruct the image adequately, either predicting the hole to be at the shadow position or failing to reconstruct a clear image. In contrast, SAP-RL and SAP-RL-E were able to reconstruct the image clearly; they partially "erased" the shadow or created an image in which the shadow and the hole positions were clearly distinguishable. The positions of the attention points of SAP-RL-E were also minimally affected by the shadows. The attention points from the current (cross-shaped) and predicted (x-shaped) images of SAP-RL-E provided a sense of direction for the robot's movement. The combination of these attention point-based directions may be related to the robot's movement direction, but this will need to be verified in future work. The results suggest that SAP was the most effective for reconstructing the image and generating attention points relevant to the task.

§ DISCUSSION §.§ Application to construction Although it was demonstrated that the proposed model is the most suitable for a wide range of construction environments, different models can be chosen depending on the environment. If a light source is placed close to the camera to reduce the influence of challenging external lights (e.g., dawn light), AE-RL may be chosen because of its shorter CT. However, preliminary evaluations of AE-RL with a camera light showed that the SR dropped by more than 20% when an external light was added, so the proposed model still yields a higher SR. If the lighting conditions are extremely poor, P-RL can be chosen since it is independent of lighting conditions, provided that the robot can make contact with the hole borders without vision (e.g., by using CAD-obtained hole positions). However, it is difficult to accurately estimate position without vision because construction environments are highly unstructured. Thus, if vision is already used to approach the hole, choosing a vision-based hole search model is preferable.

§.§ Hole map size The hole map size affects the mapping time and the image encoding capability of the image encoding networks (AE and SAP). If the map is large, due to a higher map resolution, for example, more time is required for mapping (extracting data). However, in this case, more images are available for training the image encoding networks, enabling them to learn to encode the images more robustly <cit.>. Conversely, if the hole map is small, the mapping time is short, but the image encoding networks are not able to adequately encode the input images.
Small-resolution maps with interpolation functions could enable more detailed maps with less data extraction, but this approach creates synthetic data that may differ from reality, resulting in reduced robot performance in the real world. The results of this study suggest that the map size used provided a balance between mapping time and image encoding capability, which led to robust peg-in-hole task execution. To further improve the robustness of the proposed model, multiple hole maps can be used to train the model.

§ CONCLUSION We proposed a vision and proprioceptive data-driven model to accomplish the peg-in-hole task in concrete holes under challenging lighting conditions. The model uses a spatial attention point network (SAP) and an RL policy trained jointly end-to-end in order to extract task-specific attention points from images and generate robot motion based on those attention points and proprioceptive data. We also proposed a training framework that involves mapping a target hole and using this hole map to train the model via DRL in an offline manner. The hole mapping minimizes the reality gap when transferring the model to the real world, and the offline training reduces training time. Through evaluations with an experimental setup that replicates a construction environment, we demonstrated that, by training the proposed model with a single hole map and in room light, the model can successfully control an industrial robot to perform the peg-in-hole task in 12 unknown holes, starting the peg from 16 different initial positions and under three different lighting conditions (two with misleading shadows). The proposed model reached a success rate (SR) of 93.9% and a completion time (CT) of 8.21 s, outperforming a series of baseline models, including one that used only proprioceptive data as input (SR: 87.4%, CT: 11.57 s). Although the proposed model was used for anchor-bolt insertion, it may be applied to similar tasks where peg sliding problems due to complex surfaces and light variations can occur, such as the insertion of pins in furniture or of connectors in circuits in poorly lit assembly lines.
http://arxiv.org/abs/2312.16438v1
{ "authors": [ "André Yuji Yasutomi", "Hideyuki Ichiwara", "Hiroshi Ito", "Hiroki Mori", "Tetsuya Ogata" ], "categories": [ "cs.RO", "cs.AI" ], "primary_category": "cs.RO", "published": "20231227065723", "title": "Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions" }
Victor Liu [email protected] Department of Astronomy, Yale University, 219 Prospect St, New Haven, CT 06511, USA HEASARC, Code 6601, NASA/GSFC, Greenbelt, MD 20771, USA CRESST II, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA HEASARC, Code 6601, NASA/GSFC, Greenbelt, MD 20771, USA CRESST II, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Department of Astronomy, University of Maryland, College Park, MD 20742, USA Department of Astronomy, University of Michigan, 1085 South University Ave, Ann Arbor, MI 48103, USA

Iron Kα (Fe Kα) emission is observed ubiquitously in AGN, and it is a powerful probe of their circumnuclear environment. Examinations of the emission line play a pivotal role in understanding the disk geometry surrounding the black hole. It has been suggested that the torus and the broad line region (BLR) are the origins of the emission. However, there is no universal location for the emitting region relative to the BLR. Here, we present an analysis of the narrow component of the Fe Kα line in the Seyfert AGN MCG-5-23-16, one of the brightest AGN in X-rays and in Fe Kα emission, to localize the emitting region. Spectra derived from Chandra/HETGS observations show asymmetry in the narrow Fe Kα line, which has previously been confirmed only in the AGN NGC 4151. Models including relativistic Doppler broadening and gravitational redshifts are preferred over simple Gaussians and measure radii consistent with R ≃ 200-650 r_g. These results are consistent with those of NGC 4151 and indicate that the narrow Fe Kα line in MCG-5-23-16 is primarily excited in the innermost part of the optical broad line region (BLR), or X-ray BLR. Characterizing the properties of the narrow Fe Kα line is essential for studying the disk geometries of the AGN population and mapping their innermost regions.

§ INTRODUCTION Iron Kα emission lines (Fe Kα) at 6.4 keV are observed ubiquitously in AGN <cit.>. The line is produced in dense and cold material when illuminated by an X-ray source, making it a powerful probe of circumnuclear environments in AGN. A significant portion of the information we can learn about the inner regions of AGNs and their geometries comes from modeling the line and understanding what emission processes may be responsible for it. However, the location where the Fe Kα line is emitted is not well established. It is thought that the dust sublimation radius forms an outer envelope to the emitting region <cit.>. Although <cit.> found no correlation between the width of the Fe Kα and the optical Hβ line in a sample of sources, implying that the Fe Kα line originates in the torus and not the optical BLR, other observations suggest otherwise (e.g., <cit.>). <cit.> presented an extensive analysis of the Chandra grating spectra of 36 sources, finding that there is no universal location for the Fe Kα line-emitting region relative to the optical BLR. They found that the Fe Kα line may have contributions from parsec-scale distances from the black hole (i.e., the torus), down to matter on the optical BLR scale or smaller. A direct measurement of the location of the line-emitting region was recently obtained for NGC 4151, the brightest Seyfert galaxy in X-rays, and hence the AGN with the brightest narrow Fe Kα line. Using Chandra grating spectra,
Using Chandra grating spectra, <cit.> found that the narrow Fe Kα line in NGC 4151 is asymmetric, with the data suggesting that the emission could originate in a region as small as 50–130 r_g during high flux intervals[The gravitational radius r_g is a characteristic length scale associated with black holes, representing the radius at which the gravitational pull of the black hole becomes significant compared to other forces; it is given by r_g = GM/c^2.]. On a parallel front, <cit.> measured the time delay between the narrow Fe Kα line and the hard X-ray continuum in NGC 4151 to constrain the location of the narrow Fe Kα region (a technique called reverberation mapping) and found a time delay of 3.4^+2.5_-0.8 days. <cit.> conducted a similar reverberation mapping study 13 years prior, but of the optical BLR in NGC 4151, and found a time delay of 6.6^+1.1_-0.8 days between the optical Hβ line and the continuum. These results indicate that the narrow Fe Kα line in NGC 4151 is primarily excited in either the innermost part of the optical broad line region (BLR), the X-ray BLR, or closer. The central question is now whether other AGN exhibit asymmetry and reverberation in their narrow Fe Kα lines, which would crucially inform us about where the line is emitted in the broader population of AGN.

Before proceeding further, we wish to make a distinction between the narrow Fe Kα line and the broad Fe Kα line. The narrow component of the line originates at large distances from the black hole (∼100s to 1000s of r_g), while the broad component originates at less than 10 r_g in the inner disk <cit.>. While asymmetry and reverberation delays have been seen before in the broad Fe Kα lines of several sources, in this study we focus only on the narrow Fe Kα line.

Among the next brightest sources in the Fe Kα band is MCG-5-23-16 (z = 0.00849). MCG-5-23-16 is a Seyfert 1.9 AGN <cit.> with a central black hole mass of 10^7.9 M_⊙ <cit.>. There have been many studies of the narrow and broad components of the Fe Kα line in MCG-5-23-16 (e.g., <cit.>), but none using the high energy resolution afforded by Chandra, which is crucial in order to accurately model the narrow Fe Kα line. In this paper, we use 12 Chandra observations of MCG-5-23-16, nine of which were recently taken in 2020, to model the narrow Fe Kα line and address the question of whether MCG-5-23-16 also has asymmetry in its narrow Fe Kα line like NGC 4151. We find that MCG-5-23-16 does indeed have line asymmetry, and that the strength of this line asymmetry varies with time.

Here, we report on the narrow Fe Kα line and its model fit parameters in MCG-5-23-16 using observations with Chandra. Our specific aims are:

* To determine whether the narrow Fe Kα line in MCG-5-23-16 exhibits asymmetry, just like it does in NGC 4151.
* To examine any potential variability in the narrow Fe Kα line.
* To constrain the emission processes at play in the disk and the geometry of the AGN by testing different models for the line and examining their best-fit parameters.

§ OBSERVATIONS AND REDUCTION

All archival Chandra/HETG observations of MCG-5-23-16 were downloaded directly from the Chandra archive. In addition to these archival observations, we observed MCG-5-23-16 nine times from October to November 2020 with the Chandra X-ray Observatory <cit.>, using the Advanced CCD Imaging Spectrometer optimized for spectroscopy (ACIS-S) in faint mode with the High Energy Transmission Grating Spectrometer (HETGS).
The full dataset used in this paper is available for download at <https://doi.org/10.25574/cdc.186> (DOI: 10.25574/cdc.186). The observation identification number (ObsID), start date, and duration of each exposure are given in Table <ref>.

HETGS observations typically provide data from both the high-energy gratings (HEG) and the medium-energy gratings (MEG). However, the MEG has less effective area in the Fe K band and lower resolution, and is therefore less suited to our analysis. As a result, we limited our analysis to the HEG only. The standard CIAO tools (version 4.15.1) were used to reduce the Chandra observations <cit.>. For each observation, with the exception of ObsID 2121, we ran the tool “chandra_repro” to produce the necessary “evt2”, “pha2”, RMF, and ARF files. For ObsID 2121, “chandra_repro” produced evt2 and pha2 files with erroneous exposure times, so we manually generated the spectrum and filtered for bad grades and for a “clean” status column using the tools “tgdetect2”, “tg_create_mask”, “tg_resolve_events”, “dmcopy”, “dmappend”, and “tgextract”, following the CIAO HETG/ACIS-S Grating Spectra thread.[<https://cxc.cfa.harvard.edu/ciao/threads/spectra_hetgacis/>] The RMF and ARF files were then generated using “mktgresp”.

For each exposure, we added the first-order (+1 and -1) HEG spectra, RMF files, and ARF files using “combine_grating_spectra”. We found that individual spectra lacked sufficient data to constrain spectral features and fit complex physical models. When fit to a continuum using the model zphabs*zpowerlw, individual spectra had a consistent continuum, with each spectrum having a photon index value between 1.64 and 1.70. Therefore, we further combined the spectra using “combine_spectra” into a “Total” spectrum, a “2000,2005” spectrum, and a “2020” spectrum, corresponding to all 12 observations, the three observations in 2000 and 2005, and the nine observations taken in 2020, respectively. This grouping by time allows us to investigate how the Fe Kα line in MCG-5-23-16 has changed from the earlier observations in 2000 and 2005 to the recent observations in 2020. Henceforth, “the spectra” will refer to the “Total”, “2000,2005”, and “2020” groups.

The spectra were then binned to achieve a minimum signal-to-noise ratio (SNR) of 5.0 for each energy bin using the tool “ftgrouppha” within the standard HEASOFT suite, version 6.31.1. We set the “grouptype” parameter within “ftgrouppha” to “optsnmin”, which uses the optimal binning algorithm designed by <cit.>.

All spectral fits were performed using PyXspec, version 2.1.2 <cit.>, in the HEAsoft environment <cit.>. Unless otherwise noted, the errors quoted in this work reflect the value of the parameter at its 1σ confidence interval. Errors were derived using the standard PyXspec “Fit.error()” command. The script used to perform the spectral fits and generate Figures <ref> and <ref> in this paper is available on GitHub[<https://github.com/victorliu1231/mcg_5_23_16_chandra_script>] and version v1 is archived in Zenodo <cit.>.

Chandra Observations of MCG-5-23-16

ObsID | Date         | Exp. (ks)
2121  | 2000 Nov. 14 | 76.22
6187  | 2005 Dec. 08 | 30.08
7240  | 2005 Dec. 09 | 20.25
22553 | 2020 Oct. 11 | 16.30
24753 | 2020 Oct. 11 | 42.08
22554 | 2020 Oct. 17 | 29.08
24833 | 2020 Oct. 18 | 27.08
22555 | 2020 Oct. 26 | 25.23
24849 | 2020 Oct. 28 | 30.08
22556 | 2020 Nov. 17 | 20.08
24863 | 2020 Nov. 18 | 15.08
24864 | 2020 Nov. 20 | 25.08

The list of Chandra observations that we used for MCG-5-23-16. Exp. denotes the total exposure time for each observation in ks (kiloseconds). The total exposure time is 356.64 ks.

§ SPECTRAL MODELING

§.§ Spectral Fitting Range and Setup

The spectra in this work are restricted to the 1.5-8.0 keV range due to the low SNR below and above these energies. Additional bad data points were ignored with the command “AllData.ignore('bad')”.

We fit the spectra using three Gaussian models and three reflection models. Of the Gaussian models, the first is Model A: zphabs*(zpowerlw+zgauss) with fixed σ=0, a narrow Gaussian model that probes whether the Fe Kα line originates in the torus or outer BLR and tests whether the line is resolved by Chandra/HETGS. The next model is Model B: zphabs*(zpowerlw+zgauss) with free σ, a broad Gaussian model for emission from the intermediate regions of the AGN, slightly closer to the central black hole. These first two models are both simple Gaussians and assume the Fe Kα line is symmetric. The subsequent models are more physically motivated and account for potential asymmetry in the line. We next try Model C: zphabs*(zpowerlw+rdblur*zgauss), which fits asymmetry in the line by taking into account relativistic Doppler broadening near the black hole with the “rdblur” component (“rdblur” is a convolution model extracted from the diskline model described in <cit.>).

We also attempt variants of Models A, B, and C where, instead of fitting a single Gaussian to the Fe Kα line, we fit a double Gaussian. Fitting a double Gaussian accounts for the fact that the Fe Kα line has two line energies, one at 6.404 keV (Fe Kα1) and another at 6.391 keV (Fe Kα2). However, when we attempt these double Gaussian models, we find that the fits worsen compared to the single Gaussian models, likely because the difference between these two line energies is unresolved in our spectra.

Afterwards, we try reflection models. We first attempt Model D: zphabs*(zpowerlw+xillver), which fits asymmetry in the Fe Kα line by taking into account Compton scattering of line photons reflecting off the disk with the “xillver” model <cit.>. The specific table we use here is “xillver-a-Ec5.fits”. Next, we use Model E: zphabs*(zpowerlw+mytorus) using the table “mytl_V000010nEp000H500_v00.fits” <cit.>. The “mytorus” component models Compton scattering off of a toroidal structure around the disk. Finally, we use Model F: zphabs*(zpowerlw+rdblur*mytorus) to determine whether including relativistic Doppler broadening makes a difference to the fit.

For all models, we obtained the Fe Kα line flux by multiplying the last additive component by the cflux component (e.g., cflux*zgauss, cflux*rdblur*zgauss, cflux*xillver, cflux*mytorus, and cflux*rdblur*mytorus). The E_min and E_max parameters of cflux were set to 6.2 and 6.45 keV, respectively, which generally bounded the entirety of the Fe Kα emission line. In all fits, the emission line energy was frozen at 6.4 keV, the intrinsic line energy. Within Model C, σ was fixed to 0, since the R_in parameter of rdblur, rather than zgauss, was responsible for capturing the width of the line, and the outer radius of the disk R_out was fixed at 10^6 r_g to give the largest range over which R_in could vary. When fitting the “2000,2005” spectrum, letting the inclination range over 0^∘≤ i ≤ 90^∘ produced a physically implausible lower bound of less than 0^∘, so we restricted the inclination to 3^∘≤ i ≤ 87^∘ for the “2000,2005” spectrum.
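The per-observation continuum check described above can be reproduced with a short PyXspec loop. The following is a minimal sketch, not the authors' published script (which is linked above): the per-ObsID file names are hypothetical, and the fit adopts the 1.5-8.0 keV range described in the next section.

from xspec import AllData, AllModels, Fit, Model, Spectrum

# ObsIDs from the table above
obsids = ["2121", "6187", "7240", "22553", "24753", "22554",
          "24833", "22555", "24849", "22556", "24863", "24864"]

for obs in obsids:
    AllData.clear()
    AllModels.clear()
    s = Spectrum(f"{obs}_heg.pha")   # hypothetical name for the combined +/-1 order HEG spectrum
    s.ignore("**-1.5 8.0-**")        # restrict to 1.5-8.0 keV
    AllData.ignore("bad")
    m = Model("zphabs*zpowerlw")
    m.zphabs.Redshift.values = 0.00849    # redshift of MCG-5-23-16
    m.zpowerlw.Redshift.values = 0.00849
    Fit.query = "yes"                # continue iterating without prompting
    Fit.perform()
    print(obs, "photon index =", round(m.zpowerlw.PhoIndex.values[0], 2))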
Within Model D, the photon indices of zpowerlw and xillver were linked to obtain a consistent photon index for the continuum; the ionization parameter logξ was fixed at 0 since the Fe Kα line is a neutral line; and the high-energy cutoff E_cut was fixed at 300 keV. In Model E and Model F, the hydrogen column density n_H of mytorus was tied to the n_H of zphabs, and the photon index of mytorus was linked to that of zpowerlw. In Model E, the inclination was fixed at 60^∘, while in Model F the inclination and line emissivity were free to vary.

§.§ Gaussian Models

§.§.§ Model A

Model A is zphabs*(zpowerlw+zgauss) with σ fixed to 0. In addition to testing whether the Fe Kα line-emitting region originates in the torus or outer BLR, fixing σ to 0 also examines whether the line width is below the instrumental resolution of Chandra/HETGS. Fixing σ = 0 eV assumes that the line is not resolved and that its width is due to instrumental broadening rather than any physical effects from the AGN itself. Figure <ref> shows that the line is clearly broader than the fit for the “2000,2005”, “2020”, and “Total” spectra, demonstrating that the line is resolved. It also shows that the origin of the line is not located in the torus or outer BLR. These fits also illustrate the asymmetry of the Fe Kα line in MCG-5-23-16, as there is a clear red wing.

§.§.§ Model B

Model B is zphabs*(zpowerlw+zgauss) with free σ. In Model B, we let σ vary in order to fit a broad Gaussian to the line. For the “2000,2005” spectrum, this only marginally improves the fit, as the red wing of the line is still not captured (Figure <ref>). For the “2020” and “Total” spectra, however, letting σ vary vastly improves the fits, since most of the red wing is now captured. These results are supported by the F-tests in Table <ref>: the change from Model A to Model B for the “2000,2005” spectrum has a 1.95σ level of confidence, and a 5.33σ and 5.41σ level of confidence for the “2020” and “Total” spectra, respectively. In the “2020” and “Total” spectra, the width is consistent with σ≃ 22 eV, corresponding to a FWHM of 52 eV and a projected velocity of v ≃ 2427 km/s. These values are very similar to those obtained by <cit.> for NGC 4151: the width and FWHM for the broad Gaussian fits in their work are 23 eV and 54 eV, respectively, emphasizing the similarity between the Fe Kα lines in MCG-5-23-16 and NGC 4151.

As stated in <cit.>, it is important to emphasize that the neutral iron Kα line is actually a composite of two distinct lines at laboratory energies of 6.391 keV and 6.404 keV <cit.>. The difference between these laboratory energies constitutes just 25% of the measured FWHM of 52 eV, so modeling both lines is unnecessary and a single broad Gaussian is sufficient. The separation between these lines also falls below the predicted resolution of the first-order High Energy Grating (HEG) in the iron K band, which is around 45 eV, making modeling both lines infeasible. This decision is also followed in previous work on the narrow Fe Kα line in AGN (e.g., <cit.> and <cit.>).

§.§.§ Model C

Model C is zphabs*(zpowerlw+rdblur*zgauss). In Model C, we convolve zgauss with the rdblur model, which introduces relativistic Doppler broadening to the Gaussian <cit.>. This broadening creates a red tail in the Gaussian and thus fits the asymmetry in the line.
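As a concrete illustration of this setup, the benchmark Model C.1 (defined below, with emissivity fixed at q = 3) can be specified in a few lines of PyXspec. This is a simplified sketch rather than the authors' script: the input file name is hypothetical, and the global parameter indices used for rdblur are an assumption based on the usual XSPEC component ordering.

from xspec import AllData, Fit, Model, Spectrum

s = Spectrum("total_heg.pha")                   # hypothetical combined "Total" HEG spectrum
s.ignore("**-1.5 8.0-**")                       # restrict to 1.5-8.0 keV
AllData.ignore("bad")

m = Model("zphabs*(zpowerlw + rdblur*zgauss)")  # Model C.1
for comp in (m.zphabs, m.zpowerlw, m.zgauss):
    comp.Redshift.values = 0.00849              # redshift of MCG-5-23-16

m.zgauss.LineE.values = 6.4                     # intrinsic line energy, frozen
m.zgauss.LineE.frozen = True
m.zgauss.Sigma.values = 0.0                     # line width is carried by rdblur's R_in
m.zgauss.Sigma.frozen = True

m.rdblur.Betor10.values = -3.0                  # emissivity goes as r^Betor10, so q = 3
m.rdblur.Betor10.frozen = True
# Assumed global parameter order: zphabs 1-2, zpowerlw 3-5,
# rdblur 6-9 (Betor10, Rin, Rout, Incl), zgauss 10-13.
# Pin R_out at 1e6 r_g; R_in (par 7) and the inclination (par 9) stay free.
m.setPars({8: "1e6 -1"})

Fit.perform()
Fit.error("1.0 7 9")                            # 1-sigma bounds on R_in and the inclination

The line fluxes quoted below are obtained analogously by wrapping the last additive component with cflux (e.g., Model("zphabs*(zpowerlw + cflux*rdblur*zgauss)")) with E_min = 6.2 keV and E_max = 6.45 keV.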
We first try fixing the line emissivity at q = 3 (Model C.1), which models an isotropically radiating, flat accretion disk. We find that the fit for the “2000,2005” spectrum significantly improves when using Model C.1. Visually, the red wing of the Fe Kα line is better captured by Model C.1 than by all the previous models (top panel of Fig. <ref>), and the F-test between Model B and Model C.1 indicates an improvement in the fit at the 2.74σ level of confidence (Table <ref>). With this evidence, we take Model C.1 as the benchmark fit for the “2000,2005” spectrum.

The fit for “2020” also improves when using Model C.1: visually, the red wing is captured fully by Model C.1 (middle panel of Fig. <ref>). However, since the amplitude of the red wing is much smaller in the “2020” spectrum, the fit with Model C.1 is not significantly better than the fit with Model B. More convincingly, the F-test between Model B and Model C.1 gives only a 1.97σ level of confidence (Table <ref>), statistically indicating that Model B sufficiently fits the line and that the asymmetry in the line for the “2020” spectrum is weak. The “2020” spectrum also gives much larger uncertainties in the inclination and inner radius R_in than the “2000,2005” spectrum, further indicating that the asymmetry in the line for “2020” is much weaker than in the line for “2000,2005”. The collected counts of the “2020” spectrum are also approximately 1.3 times higher than those of the “2000,2005” spectrum, lending further support to the weaker asymmetry of the Fe Kα line in “2020” compared to “2000,2005”. Therefore, we consider Model B instead of Model C.1 as the benchmark fit for the “2020” spectrum, since further fits do not significantly improve on it.

Most importantly, the F-test between Model B and Model C.1 for the “Total” spectrum gives a 3.05σ level of confidence (Table <ref>), indicating that when the data from the observations in 2020 are added to the “2000,2005” group, there is just enough statistical significance to reach the standard minimum 3σ level of confidence to support a conclusion. Based on the 3.05σ level of confidence, we can conclude that the Fe Kα line in MCG-5-23-16 as a whole is asymmetric. The difference between the Model B to Model C.1 F-tests and the best-fit inner disk radius R_in values in the “2000,2005” group and the “2020” group demonstrates that the Fe Kα region in this source is variable, and that the emitting region moved farther from the center of the AGN between 2000+2005 and 2020.

Afterwards, we also try letting the line emissivity q vary within the range 2.0 ≤ q ≤ 4.0 (Model C.2). Comparing the reduced χ^2 results listed in Table <ref>, only marginally better fits are achieved when the emissivity varies, for all spectra. In fact, evolving from Model B to Model C.2 is slightly worse than evolving from Model B to Model C.1, as seen from the slightly lower F-test values for the evolution from Model B to Model C.2 compared to Model B to Model C.1 (Table <ref>). Relative to Model C.1, where q is fixed, smaller radii are found, but the small improvement in χ^2/υ signals that smaller radii are not required. Therefore, we take Model C.1 as our benchmark fit, and all conclusions made about the inclination and inner disk radius R_in reference the Model C.1 values in Table <ref>.

§.§ Reflection Models

§.§.§ Model D

Model D is zphabs*(zpowerlw+xillver). Model D replaces the zgauss component of the previous models with the reflection model xillver <cit.>.
From Figure <ref>, it is clear there is asymmetry in the line in the “2000,2005” spectrum. xillver models the asymmetry in the line by taking into account Compton scattering of line photons due to reflection off the disk. The xillver model parameters include the photon index of the illuminating power-law spectrum, the iron abundance, the ionization of the disk, the high-energy cutoff of the power law, the inclination at which the emitting region is viewed, and the redshift of the source. The specific table used is “xillver-a-Ec5.fits”, the ionization logξ is frozen at 0, and the high-energy cutoff of the power law E_cut is frozen at 300 keV.

When the line was fitted with Model D, the fits worsened substantially compared to the Gaussian models (see Figure <ref> and Figure <ref>). The reduced χ^2 values increased across all three spectra compared to Model B, and are even higher than the reduced χ^2 value for Model A for most of the spectra (Table <ref> and Table <ref>). Because Model D fits the data so poorly, using an F-test to compare it to Models A and B is meaningless, so we did not include Model D in our comparisons of models in Table <ref>. This also means that the parameter values in Table <ref> for Model D should not be quoted as physically real. The disk reflection model pexmon gave a similarly bad fit <cit.>. Therefore, we can conclude that the line asymmetry in MCG-5-23-16 cannot arise from a disk with only intrinsic reflection broadening.

§.§.§ Model E

Model E is zphabs*(zpowerlw+mytorus). Model E uses the toroidal reflection model mytorus <cit.> instead of the disk reflection models xillver and pexmon. When fitting the narrow Fe Kα line, we found that the fit is insensitive to the inclination angle. Therefore, for each spectrum, we fixed the inclination of Model E at 60^∘. For consistent hydrogen column density and photon index values, we also tied the nH and PhoIndx parameters of the mytorus component to those of the zphabs and zpowerlw components.

We found that Model E does not capture the red wing of the Fe Kα line in any of the three spectra, and also does not capture the full blue wing in the “2020” and “Total” spectra (Fig. <ref>). The reduced χ^2 values for Model E are also higher than those of both Model C.1 and Model C.2 (Table <ref>), indicating that Model E is a worse fit than Model C. This demonstrates that the relativistic Doppler broadening introduced by rdblur, and not the Compton scattering off of a toroidal structure as modeled by mytorus, is responsible for the asymmetry we observe in the narrow Fe Kα line.

We did not include the F-tests involving Model E in Table <ref> because the number of free parameters in Model E is less than or equal to the number of free parameters in all other models, and the F-test is used only when a new model has more free parameters than the previous one (i.e., when the new model constitutes an increase in complexity).

§.§.§ Model F

Model F is zphabs*(zpowerlw+rdblur*mytorus). Since introducing relativistic Doppler broadening improved the fits for the Gaussian models (from Model B to Model C), we convolved the mytorus component with rdblur in Model F to see whether Doppler broadening would similarly improve the mytorus fits. Unlike the Model E fit, the Model F fit was sensitive to the inclination, so we let the inclination vary freely in Model F. We also tried both fixing the line emissivity q to 3.0 and letting q vary freely, and found that when q was free to vary, the fit was better able to capture the red wing.
As in Model E, we tied the nH and PhoIndx parameters of the mytorus component to those of the zphabs and zpowerlw components.

We found that adding the rdblur component to the mytorus model in Model F significantly improved the fit to the asymmetry in the Fe Kα line compared to Model E (see Fig. <ref>). This is supported by the F-tests between Model E and Model F, which have a level of confidence of 3.75σ for the “2000,2005” spectrum and 4.19σ for the “Total” spectrum (Table <ref>). The Doppler broadening component introduced by rdblur was able to fully capture the red wing of the line, while the mytorus model alone was not. This strongly indicates that the asymmetry in the Fe Kα line is due to Doppler broadening and not to Compton scattering, as both Model D and Model E were unable to fit the red wing of the line, but once the rdblur component was introduced in Model F, the model was able to fully capture the asymmetry.

Best-Fit Parameters

Columns: Model | Γ | N_PL (10^-2) | σ (eV) | A_Fe | Incl. (deg.) | q | R_in | F_l (10^-13) | EW_l (eV) | χ^2/υ

“2000,2005” spectrum (ObsIDs 2121, 6187, 7240):
Model A   | 1.64±0.04 | 2.30^+0.14_-0.13 | 0* | – | – | – | – | 5.64±0.89 | 51.0^+8.9_-7.5 | 0.6285
Model B   | 1.65±0.04 | 2.33^+0.14_-0.13 | 19^+11_-9 | – | – | – | – | 7.3^+1.5_-1.4 | 66^+12_-10 | 0.6253
Model C.1 | 1.66±0.04 | 2.35±0.14 | 0* | – | 6.4^+1.5_-1.2 | 3.0* | 240^+110_-60 | 8.4±1.3 | 77^+15_-12 | 0.6192
Model C.2 | 1.66±0.04 | 2.34^+0.15_-0.13 | 0* | – | ≤ 6.01 | 2.50^+0.43_-0.29 | 190^+100_-130 | 8.2^+1.4_-1.3 | 75^+19_-12 | 0.6185
Model D   | ≤ 1.68 | 2.41^+0.11_-0.14 | – | 5.0^+5.0_-3.9 | ≤ 18.19 | – | – | 7.1±1.2 | 180^+910_-80 | 0.6370
Model E   | 1.66±0.04 | 2.35±0.14 | – | – | 60* | – | – | 7.2±1.0 | 79^+13_-11 | 0.6205
Model F   | 1.66^+0.04_-0.03 | 2.37^+0.17_-0.12 | – | – | ≤ 22.39 | ≤ 2.65 | ≤ 5657 | 8.6^+1.4_-1.3 | 96^+30_-13 | 0.6178

“2020” spectrum (ObsIDs 22553, 24753, 22554, 24833, 22555, 24849, 22556, 24863, 24864):
Model A   | 1.69±0.03 | 2.00±0.09 | 0* | – | – | – | – | 4.93±0.55 | 55.2^+7.1_-5.4 | 0.6879
Model B   | 1.70±0.03 | 2.04^+0.10_-0.09 | 22.1^+4.6_-4.1 | – | – | – | – | 7.27^+0.84_-0.81 | 81.9^+8.9_-8.3 | 0.6680
Model C.1 | 1.70±0.03 | 2.04^+0.10_-0.09 | 0* | – | 9.0^+3.3_-2.0 | 3.0* | 620^+620_-220 | 7.30^+0.78_-0.77 | 82.6^+7.8_-8.8 | 0.6653
Model C.2 | 1.70±0.03 | 2.05^+0.10_-0.09 | 0* | – | 9.2^+2.3_-1.8 | 2.38^+0.68_-0.15 | 190^+550_-90 | 7.92±0.89 | 90^+16_-9 | 0.6650
Model D   | ≤ 1.73 | 2.11^+0.12_-0.10 | – | 5.0^+5.0_-3.2 | ≤ 18.19 | – | – | 7.14^+0.95_-0.75 | 81.9^+8.9_-8.3 | 0.6842
Model E   | 1.70±0.03 | 2.05^+0.10_-0.09 | – | – | 60* | – | – | 6.17±0.65 | 84.3^+9.4_-8.6 | 0.6775
Model F   | 1.72±0.02 | 2.12^+0.03_-0.12 | – | – | 13.2^+8.7_-4.2 | 2.49^+0.70_-0.36 | 950^+950_-620 | 8.04±0.82 | 111^+41_-10 | 0.6649

“Total” spectrum (all 12 ObsIDs):
Model A   | 1.67±0.02 | 2.11±0.07 | 0* | – | – | – | – | 5.23^+0.46_-0.45 | 54.1^+4.3_-4.2 | 0.7686
Model B   | 1.68±0.02 | 2.14±0.07 | 21.6^+3.8_-3.5 | – | – | – | – | 7.33^+0.68_-0.67 | 76.1^+7.8_-6.7 | 0.7505
Model C.1 | 1.68±0.02 | 2.14±0.07 | 0* | – | 7.5^+1.6_-1.2 | 3.0* | 440^+220_-120 | 7.47^+0.66_-0.65 | 78.0^+7.0_-5.4 | 0.7447
Model C.2 | 1.68±0.02 | 2.15±0.07 | 0* | – | 7.8^+2.0_-1.4 | 2.45^+0.30_-0.17 | 210^+180_-80 | 7.85^+0.76_-0.77 | 81.8^+9.4_-6.7 | 0.7436
Model D   | ≤ 1.71 | 2.22^+0.08_-0.09 | – | 4.9±2.5 | ≤ 18.19 | – | – | 7.15^+0.59_-0.62 | 210^+820_-70 | 0.7719
Model E   | 1.68±0.02 | 2.15±0.07 | – | – | 60* | – | – | 6.47±0.53 | 81.5^+6.3_-7.0 | 0.7563
Model F   | 1.70±0.02 | 2.19^+0.08_-0.07 | – | – | 12.0^+8.2_-3.7 | 8.21±0.69 | 600^+9300_-400 | 8.21±0.69 | 105^+22_-10 | 0.7429

Best-fit properties of fitting the spectra from 2000+2005 (top), from 2020 (middle), and from all observations (bottom) using PyXspec. Model A represents a Gaussian with σ fixed to 0, zphabs*(zpowerlw + zgauss); Model B is the same as Model A but with σ free to vary; Model C.1 is the same as Model B but with the Gaussian convolved with “rdblur”, zphabs*(zpowerlw + rdblur*zgauss), and line emissivity q fixed to 3. Model C.2 is the same as Model C.1 but with the line emissivity free to vary between 2 ≤ q ≤ 4. Model D represents the disk reflection model xillver, which accounts for Compton scattering of the Fe Kα line off a disk, zphabs*(zpowerlw + xillver); Model E represents the reflection model mytorus, which accounts for Compton scattering off of a torus, zphabs*(zpowerlw + mytorus). Model F is the same as Model E but with “mytorus” convolved with “rdblur” and the line emissivity and inclination free to vary. In all cases, the energy of the Fe Kα line is fixed at 6.40 keV, and the redshifts of all components are set to 0.00849, the redshift of MCG-5-23-16 according to <https://ned.ipac.caltech.edu>. The inner radius R_in is given in units of gravitational radii r_g = GM/c^2, the power-law normalization N_PL in units of 10^-2 photons/keV/cm^2/s, and the line flux F_l in units of 10^-13 ergs cm^-2 s^-1. The minimum and maximum energy ranges of “cflux” are 6.2 keV and 6.45 keV, respectively. 1σ errors are given as superscripts and subscripts. Less-than and greater-than signs indicate the 1σ upper and lower bounds of the parameter, respectively. Parameters marked with an asterisk were fixed at the value given.

F-tests between Models

Spectrum | Previous Model | New Model | Change | F-statistic | p-value | Confidence
2000,2005 | Model A | Model B   | Allowed σ to vary                            | 4.99 | 0.0257   | 1.95σ
2000,2005 | Model B | Model C.1 | Convolved Gaussian with rdblur, fixed q = 3  | 8.80 | 0.00310  | 2.74σ
2000,2005 | Model B | Model C.2 | Convolved Gaussian with rdblur, free q       | 5.39 | 0.00472  | 2.60σ
2000,2005 | Model E | Model F   | Convolved mytorus with rdblur, free q        | 2.13 | 0.0956   | 1.31σ
2020      | Model A | Model B   | Allowed σ to vary                            | 30.2 | 4.85e-08 | 5.33σ
2020      | Model B | Model C.1 | Convolved Gaussian with rdblur, fixed q = 3  | 5.07 | 0.0246   | 1.97σ
2020      | Model B | Model C.2 | Convolved Gaussian with rdblur, free q       | 3.23 | 0.0399   | 1.75σ
2020      | Model E | Model F   | Convolved mytorus with rdblur, free q        | 7.20 | 8.78e-05 | 3.75σ
Total     | Model A | Model B   | Allowed σ to vary                            | 31.0 | 3.18e-08 | 5.41σ
Total     | Model B | Model C.1 | Convolved Gaussian with rdblur, fixed q = 3  | 10.6 | 0.00113  | 3.05σ
Total     | Model B | Model C.2 | Convolved Gaussian with rdblur, free q       | 6.79 | 0.00117  | 3.04σ
Total     | Model E | Model F   | Convolved mytorus with rdblur, free q        | 8.47 | 1.42e-05 | 4.19σ

F-tests were performed in order to determine the statistical significance of evolving from one model to the next. See the caption of Table 1 for a detailed description of each model and spectrum. F-tests with Model D were not performed because Model D gave significantly worse fits to the data than all the other models, and thus the F-test would have given a negative value for the F-statistic. Evolving from Model B to Model C.1 in the “Total” group gives a 3.05σ confidence level, while the same model change for the “2000,2005” group gives a 2.74σ level. This indicates that adding the data from the observations in 2020 to the “2000,2005” group gave just enough statistical significance to reach the minimum 3σ level needed to conclude that the Fe Kα line in MCG-5-23-16 is asymmetric.
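The confidence levels quoted above can be reproduced from the χ^2 statistics of two nested fits. The sketch below implements the standard F-test for an additional model component, converting the one-sided p-value into an equivalent Gaussian significance; the χ^2 and degrees-of-freedom values in the example call are placeholders, not the actual fit outputs.

from scipy.stats import f as f_dist, norm

def f_test(chi2_old, dof_old, chi2_new, dof_new):
    # F-statistic for a nested model with extra free parameters
    f_stat = ((chi2_old - chi2_new) / (dof_old - dof_new)) / (chi2_new / dof_new)
    p = f_dist.sf(f_stat, dof_old - dof_new, dof_new)
    return f_stat, p, norm.isf(p)   # one-sided Gaussian-equivalent significance

# Placeholder values standing in for, e.g., the Model B and Model C.1 fits:
f_stat, p, sigma = f_test(chi2_old=1501.0, dof_old=2000.0, chi2_new=1489.4, dof_new=1999.0)
print(f"F = {f_stat:.2f}, p = {p:.3g}, {sigma:.2f} sigma")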
§ DISCUSSION

§.§ Second-Ever Evidence for Asymmetry in the Narrow Fe Kα Line

We have analyzed deep Chandra/HETG spectra of the Type 1.9 Seyfert AGN MCG-5-23-16. We find that the narrow Fe Kα emission line is asymmetric, likely due to relativistic Doppler broadening. Models A and B are simple Gaussians, are physically unmotivated, and do not fully account for the red wing in all three spectra. Model D gives bad fits to the spectra, indicating that the asymmetry in the line is physical and not due to Compton scattering of the emission-line photons off the disk. Model E improves the fits but still does not fully capture the red wing. Model F includes relativistic Doppler broadening and fits the spectra much better. Model C.1 provides the best fit to the spectra, as it captures the red wing while preserving the simple disk geometry assumed by fixing the line emissivity to q = 3. Model C.2 improves the fits only marginally while complicating the disk model by letting q vary freely, so we consider Model C.1 as our benchmark fit rather than Model C.2. The best-fit Model C.1 suggests R ≃ 200-650 r_g, where r_g = GM/c^2, which suggests that the line originates from the innermost extents of the optical BLR or closer.

The Gaussian and reflection models with relativistic Doppler broadening fully fit the asymmetry in the Fe Kα line. Without the Doppler broadening component, the Gaussian models and even the reflection models were unable to fully capture the asymmetry in the line (e.g., they underfit the red wing). This result indicates that the asymmetry in the Fe Kα line in MCG-5-23-16 is due solely to relativistic Doppler broadening near the black hole and not to Compton scattering.

Our results also indicate that the asymmetry in the Fe Kα line in MCG-5-23-16 is likely variable over time, as the “2000,2005” spectrum is best fit by an asymmetric line while the “2020” spectrum is sufficiently fit by a broad Gaussian. The background-subtracted net count rate is 0.4079±0.0028 counts s^-1 for the “2000,2005” spectrum and 0.2903±0.0015 counts s^-1 for the “2020” spectrum; the “2000,2005” spectrum has higher flux than the “2020” spectrum, so this variability in the line could depend on the source flux level as well. This line variability would be interesting to follow up in a future observing campaign of MCG-5-23-16, e.g., with the recently launched XRISM satellite, an X-ray telescope with even higher energy resolution than Chandra, allowing a more detailed study of the line.

This narrow Fe Kα line asymmetry has only been confirmed previously in NGC 4151, the brightest Seyfert galaxy in X-rays. Using Chandra grating spectra, <cit.> found that the line asymmetry depends on the source flux level, and that the emitting region could be located as close as 50-130 r_g during the high flux intervals.
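To put the radii measured here on a physical scale, it is useful to convert gravitational radii into light-travel times. The following back-of-the-envelope sketch assumes only the black hole mass of 10^7.9 M_⊙ quoted above:

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M = 10**7.9 * M_sun                 # quoted black hole mass of MCG-5-23-16
r_g = G * M / c**2                  # ~1.2e11 m, i.e. roughly 0.8 AU
for r in (200, 650):                # best-fit R_in range from Model C.1
    print(f"{r} r_g ~ {r * r_g / c / 86400:.1f} light-days")
# -> ~0.9 and ~2.9 light-days

On this scale, 200-650 r_g corresponds to roughly 1-3 light-days, comparable to, and somewhat smaller than, the day-scale reverberation lags discussed next.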
Additional study of the Fe Kα emitting region by <cit.> using reverberation mapping demonstrated that the inner disk radius is smaller than the optical BLR radius, supporting the conclusion that the line in NGC 4151 originates in the inner regions. Our results provide the second-ever evidence of narrow Fe Kα line asymmetry. The asymmetry of the Fe Kα line in both NGC 4151 and MCG-5-23-16 is dependent on flux, and the inner radii of both are constrained to be on the order of 100s to 1000 r_g. The emitting region in MCG-5-23-16 is thus likely to be smaller than the optical BLR as well, allowing it to be used as a probe of the inner AGN geometry.

The identification of asymmetry in the narrow Fe Kα line only in MCG-5-23-16 and NGC 4151 thus far raises intriguing questions regarding the prevalence of the line asymmetry across the broader population of X-ray AGN. It is possible that we have only detected this line asymmetry in MCG-5-23-16 and NGC 4151 thus far because of a selection bias: these two AGN were intentionally selected because they are among the brightest AGN in X-rays and thus made natural choices for analyzing the narrow Fe Kα line. As long as these two sources are not drastically different from the rest of the X-ray AGN population in metrics such as inclination or Eddington fraction, it is not out of the question that less X-ray bright AGN would also exhibit a skewed Fe Kα line. The absence of such observations may also be attributed to low flux. Notably, in the case of NGC 4151, the narrow Fe Kα line displayed a more pronounced red wing during high flux states compared to low flux states, emphasizing the potential correlation between the line profile and X-ray flux levels <cit.>. With the higher spectral resolution of XRISM, it is possible that asymmetric narrow Fe Kα lines could be unveiled even in dimmer AGN.

Another explanation could be that both MCG-5-23-16 and NGC 4151 are observed to have X-ray absorption, albeit not reaching Compton-thick extremes. This intermediate absorption level may suggest an intermediate inclination for these two AGN, at which the asymmetry in the narrow Fe Kα line is most discernible. High inclinations, where the BLR obstructs the Fe Kα emitting region, or low inclinations, where the line of sight does not pass through enough of the redshifted material, might hinder clear detection of the Fe Kα line asymmetry. It is possible that other X-ray AGN possess inclinations closer to either extreme, and thus we have not been able to clearly detect the line asymmetry in other sources.

§.§ Disk Geometry

In the best fits to the spectra, a low inclination is strongly preferred in all cases: the best-fit Model C.1 gives i = 6.4^+1.5_-1.2 degrees for the “2000,2005” spectrum, i = 9.2^+2.3_-1.8 degrees for the “2020” spectrum, and i = 7.8^+2.0_-1.4 degrees for the “Total” spectrum (Table <ref>). These inclinations are consistent with those of the emitting region in NGC 4151, the only other source confirmed to have an asymmetric narrow Fe Kα line: the best-fit inclinations for NGC 4151 are ≲ 10^∘ <cit.>.

We would like to highlight that these inclinations for the narrow component of the Fe Kα line are much lower than the inclinations for the broad component of the line in MCG-5-23-16: for the broad component, <cit.> and <cit.> cite an inclination of i ∼ 50^∘. The low inclination is also not consistent with the classification of the source as a Seyfert 1.9 AGN, since in the classical Seyfert picture, Seyfert 1.9 AGN are high-inclination sources.
One possible explanation for the discrepancy is that, since the narrow component originates in the intermediate or outer disk and the broad component in the inner disk, the difference in inclination hints that this source contains a possible warp, or a disk whose scale height increases from the inner regions to the outer regions (i.e., a flared disk), with the viewing angle towards the source at roughly 50^∘.

We also note that the Fe Kα line flux is consistent between the “2000,2005” spectrum and the “2020” spectrum across all Models A-F, which indicates a stable AGN environment around the emitting region between 2000 and 2020, such as a stable X-ray source and/or a consistent geometry surrounding the emitting region (Table <ref>).

§ CONCLUSIONS

Our results provide the second-ever evidence of asymmetry in the narrow Fe Kα line. The only previous confirmed detection of asymmetry in the narrow component of this line is in the source NGC 4151 <cit.>, the brightest Seyfert galaxy in X-rays. As one of the next brightest Seyfert galaxies in X-rays, MCG-5-23-16 serves as a prime candidate for verifying that the asymmetry of the line seen in NGC 4151 can be generalized to other X-ray bright AGN. In the best-fit models to the Fe Kα line, the inner radius of the line emitting region in both NGC 4151 and MCG-5-23-16 is constrained to be on the order of R ≃ 100s to 1000 r_g, placing the origin of the narrow Fe Kα line in the innermost extents of the optical BLR, the X-ray BLR. This paper, in tandem with <cit.>, illustrates a novel method for studying the circumnuclear environment around AGN in an unprecedented manner and shows that the narrow Fe Kα line can be asymmetric in AGN.

Interestingly, we find that the asymmetry in the line is due to relativistic Doppler broadening and not Compton scattering, as both the Gaussian models and the reflection models only fully fit the Fe Kα line for all three spectra when Doppler broadening was included. We also see evidence that the line emitting region in MCG-5-23-16 is variable over time. We detect asymmetry in the 2000 and 2005 observations but not in the 2020 data, where the flux dropped by a factor of ∼1.4. This indicates that the line and emitting region are changing over time. Follow-up observations of MCG-5-23-16 are necessary to examine how the line may change on different timescales.

It also does not escape our attention that there may be an additional emission line at ∼6.48 keV in the “2000,2005” and “Total” spectra and a weak absorption feature at ∼6.68 keV in the “2020” and “Total” spectra (Fig. <ref>).

Our main results are as follows:

* In the AGN MCG-5-23-16, we have found the second-ever evidence of asymmetry in the narrow Fe Kα line. The only previously confirmed detection of asymmetry in the narrow component of the line is in the source NGC 4151.
* Interestingly, this asymmetry is due solely to relativistic Doppler broadening near the black hole and not Compton scattering.
* With a radius of R ≃ 200-650 r_g, the narrow Fe Kα line emitting region in MCG-5-23-16 is located in the innermost extents of the optical BLR, or closer.
* The asymmetry in the line is variable over time, indicating that the distance of the line emitting region from the black hole may be changing.
* The narrow component of the line strongly prefers low inclinations of ≲ 10^∘, while the broad component strongly prefers inclinations of ∼ 50^∘.
This suggests that the AGN has a warp or a flared disk.

§.§ Future Steps

We now have two confirmed detections of asymmetry in the narrow Fe Kα line, in two of the brightest AGN in X-rays: NGC 4151 and MCG-5-23-16. Future steps to study the asymmetry in the line include, but are not limited to:

* Studying the line shape in other X-ray bright sources.
* Studying the variability of the line using NICER.
* Studying the detailed shape of the line in MCG-5-23-16 and other sources using XRISM.

This work has been supported by the CRESST II program at the NASA Goddard Space Flight Center. The material is based upon work supported by NASA under award numbers 80GSFC21M0002 and GO0-21086A. The research in this article has made use of data obtained from the Chandra Data Archive, the software CIAO provided by the Chandra X-ray Center (CXC) <cit.>, the Python package PyXspec <cit.>, and the HEAsoft environment <cit.> developed by the HEASARC Software Development team at NASA/GSFC.
{ "authors": [ "Victor Liu", "Abderahmen Zoghbi", "Jon M. Miller" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231226231433", "title": "Detection of Asymmetry in the Narrow Fe K$α$ Emission Line in MCG-5-23-16 with Chandra" }
Contrastive learning has been shown to be effective for learning representations from time series in a self-supervised way. However, contrasting similar time series instances, or values from adjacent timestamps within a time series, ignores their inherent correlations, which degrades the quality of the learned representations. To address this issue, we propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series. This is achieved by introducing instance-wise and temporal contrastive losses with soft assignments ranging from zero to one. Specifically, we define soft assignments for 1) the instance-wise contrastive loss by the distance between time series on the data space, and 2) the temporal contrastive loss by the difference of timestamps. SoftCLT is a plug-and-play method for time series contrastive learning that improves the quality of learned representations without bells and whistles. In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection, showing state-of-the-art performance. Code is available at this repository: <https://github.com/seunghan96/softclt>.

§ INTRODUCTION

Time series (TS) data are ubiquitous in many fields, including finance, energy, healthcare, and transportation <cit.>.
However, annotating TS data can be challenging, as it often requires significant domain expertise and time. To overcome this limitation and utilize unlabeled data without annotations, self-supervised learning has emerged as a promising representation learning approach, not only in natural language processing <cit.> and computer vision <cit.>, but also in TS analysis <cit.>. In particular, contrastive learning (CL) has demonstrated remarkable performance across different domains <cit.>. As it is challenging to determine the similarities of instances in self-supervised learning, recent CL works apply data augmentation to generate two views per data instance, taking views from the same instance as positive pairs and the others as negatives <cit.>.

However, we argue that the standard CL objective might be harmful for TS representation learning, because the inherent correlations between similar TS instances, and between values at nearby timestamps within a TS, which could serve as strong self-supervision, are ignored in CL. For example, distance metrics such as dynamic time warping (DTW) have been widely used for measuring the similarities of TS data, and contrasting TS data might lose such information. Also, values at close timestamps are usually similar in natural TS data, so contrasting all values at different timestamps with the same degree of penalty, as in previous CL methods <cit.>, might not be optimal. Motivated by this, we explore the following research question: how can we take account of the similarities of time series data for better contrastive representation learning?

To this end, we propose Soft Contrastive Learning for Time series (SoftCLT). Specifically, we propose to consider the InfoNCE loss <cit.> not only for the positive pairs but for all other pairs as well, and to compute their weighted summation in both instance-wise CL and temporal CL, where instance-wise CL contrasts the representations of TS instances, while temporal CL contrasts the representations of timestamps within a single TS, as shown in Figure <ref>. We propose to compute soft assignments based on the distance between TS for the instance-wise CL, and on the difference of timestamps for the temporal CL. This formulation can be seen as a generalization of the standard contrastive loss, as the proposed loss becomes the contrastive loss if we replace soft assignments with hard assignments of either zero for negatives or one for positives.

We conduct extensive experiments on various tasks, including TS classification, semi-supervised classification, transfer learning, and anomaly detection, to prove the effectiveness of the proposed method. Experimental results validate that our method improves the performance of previous CL methods, achieving state-of-the-art (SOTA) performance on a range of downstream tasks. The main contributions of this paper are summarized as follows:

* We propose SoftCLT, a simple yet effective soft contrastive learning strategy for TS. Specifically, we propose soft contrastive losses for the instance and temporal dimensions, respectively, to address the limitations of previous CL methods for TS.
* We provide extensive experimental results on various tasks for TS, showing that our method improves SOTA performance on a range of downstream tasks.
For example, SoftCLT improves the average accuracy on 125 UCR datasets and 29 UEA datasets by 2.0% and 3.9%, respectively, compared to the SOTA unsupervised representation for classification tasks.
* SoftCLT is easily applicable to other CL frameworks for TS by introducing soft assignments, and its overhead is negligible, making it practical for use.

§ RELATED WORK

Self-supervised learning. In recent years, self-supervised learning has gained significant attention for its ability to learn powerful representations from large amounts of unlabeled data. Self-supervised learning is done by training a model to solve a pretext task derived from a certain aspect of data without supervision. As self-supervised pretext tasks, next token prediction <cit.> and masked token prediction <cit.> are commonly used in natural language processing, while solving jigsaw puzzles <cit.> and rotation prediction <cit.> have been proposed in computer vision. In particular, contrastive learning <cit.> has shown to be an effective pretext task across domains, maximizing the similarities of positive pairs while minimizing the similarities of negative pairs <cit.>.

Contrastive learning in time series. In the field of TS analysis, several designs for positive and negative pairs have been proposed for CL, taking into account the invariant properties of TS. Table <ref> compares various CL methods for TS, including ours, in terms of several properties. T-Loss <cit.> samples a random subseries from a TS and treats subseries contained within it as positives, and subseries of other TS as negatives. Self-Time <cit.> captures the inter-sample relation between TS by defining augmented samples of the same TS as positive and negative otherwise, and captures the intra-temporal relation within a TS by solving a classification task, where the class labels are defined using the temporal distance between subseries. TNC <cit.> defines the temporal neighborhood of windows using a normal distribution and treats samples in the neighborhood as positives. TS-SD <cit.> trains a model using a triplet similarity discrimination task, where the goal is to identify which of two TS is more similar to a given TS, using DTW to define similarity. TS-TCC <cit.> proposes a temporal contrastive loss by making the augmentations predict each other's future, and CA-TCC <cit.>, which is the extension of TS-TCC to the semi-supervised setting, adopts the same loss. TS2Vec <cit.> splits TS into two subseries and defines a hierarchical contrastive loss in both the instance and temporal dimensions. Mixing-up <cit.> generates new TS by mixing two TS, where the goal is to predict the mixing weights. CoST <cit.> utilizes both time-domain and frequency-domain contrastive losses to learn disentangled seasonal-trend representations of TS. TimeCLR <cit.> introduces phase-shift and amplitude-change augmentations, which are data augmentation methods based on DTW. TF-C <cit.> learns both time- and frequency-based representations of TS and proposes a novel time-frequency consistency architecture. In the medical domain, Subject-Aware CL <cit.> proposes an instance-wise CL framework where the temporal information is entangled by architecture design, and CLOCS <cit.> proposes to consider the spatial dimension specifically available in their application, which is close to the channels in general TS.
While previous CL methods for TS compute a hard contrastive loss, where the similarities between all negative pairs are equally minimized, we introduce a soft contrastive loss for TS.

Soft contrastive learning. CL is typically done by batch instance discrimination, where each instance is considered to be in a distinct class. However, this approach can pose a risk of pushing similar samples farther apart in the embedding space. To address this issue, several methods have been proposed, including a method that utilizes soft assignments of images <cit.> based on feature distances and geometric proximity measures. NNCLR <cit.> defines additional positives for each view by extracting the top-k neighbors in the feature space. NCL <cit.> finds neighbors using supervision from medical domain knowledge and jointly optimizes two conflicting losses with a trade-off: the neighbor alignment loss, maximizing the similarity of neighbors as well as positive pairs, and the neighbor discriminative loss, maximizing the similarity of positive pairs while minimizing the similarity of neighbors. SNCLR <cit.>, which extends NNCLR with soft assignments, employs an attention module to determine the correlations between the current and neighboring samples. CO2 <cit.> introduces consistency regularization to enforce relative distribution consistency between different positive views and all negatives, resulting in soft relationships between samples. ASCL <cit.> introduces soft inter-sample relations by transforming the original instance discrimination task into a multi-instance soft discrimination task. Previous soft CL methods in non-TS domains compute soft assignments on the embedding space, because similarities of instances on the data space are difficult to measure, particularly in computer vision <cit.>. In contrast, we propose to compute soft assignments based on the distance between TS instances on the data space.

Masked modeling in time series. Other than CL, masked modeling has recently been studied as a pretext task for self-supervised learning in TS, by masking out a portion of a TS and predicting the missing values. While CL has demonstrated remarkable performance in high-level classification tasks, masked modeling has excelled in low-level forecasting tasks <cit.>. TST <cit.> adopts the masked modeling paradigm for TS, where the goal is to reconstruct the masked timestamps. PatchTST <cit.> aims to predict masked subseries-level patches to capture local semantic information and reduce memory usage. SimMTM <cit.> reconstructs the original TS from multiple masked TS.

§ METHODOLOGY

In this section, we propose SoftCLT by introducing soft assignments to the instance-wise and temporal contrastive losses to capture inter-sample and intra-temporal relationships, respectively. For instance-wise CL, we use the distance between TS on the data space to capture the inter-sample relations, and for temporal CL, we use the difference between timestamps to consider the temporal relation within a single TS. The overall framework of SoftCLT is illustrated in Figure <ref>.

§.§ Problem Definition

This paper addresses the task of learning a nonlinear embedding function f_θ: x → r, given a batch of N time series 𝒳 = {x_1, …, x_N}. Our goal is to learn f_θ mapping a time series x_i ∈ ℝ^T × D to a representation vector r_i = [r_{i,1}, …, r_{i,T}]^⊤ ∈ ℝ^T × M, where T is the sequence length, D is the input feature dimension, and M is the embedded feature dimension.
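In code, this problem definition is simply a shape contract for the encoder. A toy PyTorch sketch (the dimensions and the stand-in encoder are illustrative, not taken from the paper):

import torch

N, T, D, M = 8, 128, 1, 320        # batch size, length, input dim, embedding dim
x = torch.randn(N, T, D)           # a batch of N time series
f_theta = torch.nn.Linear(D, M)    # stand-in for the actual encoder (e.g., a dilated CNN)
r = f_theta(x)                     # per-timestamp representations r_{i,t}
assert r.shape == (N, T, M)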
§.§ Soft Instance-Wise Contrastive Learning

Contrasting all instances within a batch might be harmful for TS representation learning, because similar instances are learned to be far away from each other on the embedding space. Unlike in other domains such as computer vision, the distance between TS computed on the data space is useful for measuring their similarity. For example, while the pixel-by-pixel distance between two different images is generally not related to their similarity, the distance between two TS is useful for measuring their similarity. With a min-max normalized distance metric D(·, ·), we define a soft assignment for a pair of data indices (i, i^') for the instance-wise contrastive loss using the sigmoid function σ(a) = 1/(1+exp(-a)):

w_I(i, i^') = 2α · σ(-τ_I · D(x_i, x_i^')),

where τ_I is a hyperparameter controlling the sharpness and α is the upper bound in the range of [0,1], used to distinguish pairs of the same TS from pairs of different TS that are close to each other; when α = 1, we give an assignment of one to pairs with a distance of zero as well as to pairs of the same TS. Note that distances between TS are computed with the original TS rather than the augmented views, because the pairwise distance matrix can be precomputed offline or cached for efficiency.

For the choice of the distance metric D, we conduct an ablation study in Table <ref>, comparing 1) cosine distance, 2) Euclidean distance, 3) dynamic time warping (DTW), and 4) time alignment measurement (TAM) <cit.>. Among them, we choose DTW as the distance metric throughout the experiments based on the result in Table <ref>. While the computational complexity of DTW is 𝒪(T^2) for two TS of length T, which might be costly for large-scale datasets, it can be precomputed offline or cached to facilitate efficient calculations, or a fast version such as FastDTW <cit.>, with a complexity of 𝒪(T), can be used. We empirically confirmed that the outputs of DTW and FastDTW are almost the same, such that the CL results also match.

Let r_i,t = r_i+2N,t and r̃_i,t = r_i+N,t be the embedding vectors from the two augmentations of x_i at timestamp t, for conciseness. Inspired by the fact that the contrastive loss can be interpreted as the cross-entropy loss <cit.>, we define a softmax probability of the relative similarity out of all similarities considered when computing the loss as:

p_I((i,i^'),t) = exp(r_i,t ∘ r_i^',t) / ∑_j=1, j≠i^2N exp(r_i,t ∘ r_j,t),

where we use the dot product as the similarity measure ∘. Then, the soft instance-wise contrastive loss for x_i at timestamp t is defined as:

ℓ_I^(i,t) = -log p_I((i, i+N), t) - ∑_j=1, j∉{i, i+N}^2N w_I(i, j mod N) · log p_I((i,j), t).

The first term in ℓ_I^(i,t) corresponds to the loss of the positive pair, and the second term corresponds to that of the other pairs weighted by the soft assignments w_I(i, i^'). Note that this loss can be seen as a generalization of the hard instance-wise contrastive loss, which is the case when ∀ w_I(i,i^') = 0.

§.§ Soft Temporal Contrastive Learning

Following the intuition that values at adjacent timestamps are similar, we propose to compute a soft assignment based on the difference between timestamps for the temporal contrastive loss. Similar to the soft instance-wise contrastive loss, the assignment is close to one when the timestamps are close, and close to zero when they are far apart. We define a soft assignment for a pair of timestamps (t, t^') for the temporal contrastive loss as:

w_T(t, t^') = 2 · σ(-τ_T · |t - t^'|),

where τ_T is a hyperparameter controlling the sharpness.
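Both soft-assignment schemes, and the soft instance-wise loss built from them, reduce to a few lines of tensor code. The following is a minimal PyTorch sketch under our notation, not the released implementation (available at the repository above); it assumes a precomputed, min-max-normalized pairwise DTW distance matrix for the batch:

import torch

def soft_instance_assignments(D, tau_I=2.0, alpha=0.5):
    # D: (N, N) min-max normalized pairwise distances on the data space
    # w_I(i, i') = 2 * alpha * sigmoid(-tau_I * D); equals alpha at D = 0
    return 2 * alpha * torch.sigmoid(-tau_I * D)

def soft_temporal_assignments(T, tau_T=1.0):
    # w_T(t, t') = 2 * sigmoid(-tau_T * |t - t'|); equals 1 at t = t'
    t = torch.arange(T, dtype=torch.float32)
    return 2 * torch.sigmoid(-tau_T * (t[None, :] - t[:, None]).abs())

def soft_instance_loss(z1, z2, w_I):
    # z1, z2: (N, M) representations of the two views at a fixed timestamp t
    N = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)                    # 2N embeddings
    sim = z @ z.t()                                   # dot-product similarities
    self_mask = torch.eye(2 * N, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs from the softmax
    logp = sim.log_softmax(dim=1).masked_fill(self_mask, 0.0)  # log p_I((i, j), t)
    W = w_I.repeat(2, 2)                              # w_I(i, j mod N) for every pair
    idx = torch.arange(2 * N)
    W[idx, (idx + N) % (2 * N)] = 1.0                 # weight 1 for the positive pair (i, i+N)
    W = W.masked_fill(self_mask, 0.0)                 # self-pairs never contribute
    return -(W * logp).sum(dim=1).mean()

# The soft temporal loss follows the same pattern, with w_T over the
# 2T timestamps of the two views of a single series; the two losses are
# combined with the weight lambda defined below.

In practice, w_I is computed from the DTW matrix of the original (unaugmented) series and can be cached across epochs, as noted above.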
As the degree of closeness between timestamps varies across datasets, we tune τ_T to control the degree of the soft assignments. Figure <ref> illustrates examples of soft assignments with respect to the timestamp difference for different values of τ_T.

Hierarchical loss. For temporal CL, we consider hierarchical contrasting on intermediate representations in the network f_θ, as done in prior CL methods for TS. Specifically, we adopt the hierarchical contrastive loss proposed in TS2Vec <cit.>, where the losses are computed on intermediate representations after each max-pooling layer along the temporal axis and then aggregated. As shown in Figure <ref>, similarities between adjacent time steps decrease after pooling, so we adjust τ_T by multiplying it by m^k in Eq. <ref>, i.e., τ_T = m^k · τ̃_T, where m is the kernel size of the pooling layers, k is the depth, and τ̃_T is the base hyperparameter.

Now, let r_i,t = r_i,t+2T and r̃_i,t = r_i,t+T be the embedding vectors from the two augmentations of x_i at timestamp t, for conciseness. Similar to Eq. <ref>, we define a softmax probability of the relative similarity out of all similarities considered when computing the loss as:

p_T(i, (t,t^')) = exp(r_i,t ∘ r_i,t^') / ∑_s=1, s≠t^2T exp(r_i,t ∘ r_i,s).

Then, the soft temporal contrastive loss for x_i at timestamp t is defined as:

ℓ_T^(i,t) = -log p_T(i, (t, t+T)) - ∑_s=1, s∉{t, t+T}^2T w_T(t, s mod T) · log p_T(i, (t,s)).

Similar to the soft instance-wise contrastive loss, this loss can be seen as a generalization of the hard temporal contrastive loss, which is the case when ∀ w_T(t,t^') = 0. The final loss for SoftCLT is the joint of the soft instance-wise and temporal contrastive losses:

ℒ = 1/(4NT) ∑_i=1^2N ∑_t=1^2T (λ · ℓ_I^(i,t) + (1-λ) · ℓ_T^(i,t)),

where λ is a hyperparameter controlling the contribution of each loss, set to 0.5 unless specified. The proposed loss has an interesting mathematical interpretation: it can be seen as the scaled KL divergence of the softmax probabilities from the normalized soft assignments, where the scale is the sum of the soft assignments. We provide more details in Appendix <ref>.

§ EXPERIMENTS

We conduct extensive experiments to validate the proposed method and assess its performance on different tasks: (1) classification with univariate and multivariate TS, (2) semi-supervised classification by (i) self-supervised learning followed by fine-tuning and (ii) semi-supervised learning, (3) transfer learning in in-domain and cross-domain scenarios, and (4) anomaly detection in normal and cold-start settings. We also conduct ablation studies to validate the effectiveness of SoftCLT as well as its design choices. Finally, we visualize pairwise distance matrices and t-SNE <cit.> plots of temporal representations to show the effect of SoftCLT over previous methods. We use the data augmentation strategies of the methods we apply SoftCLT to: TS2Vec generates two views as TS segments with overlap, and TS-TCC/CA-TCC generate two views with weak and strong augmentations, using the jitter-and-scale and permutation-and-jitter strategies, respectively.

§.§ Classification

We conduct experiments on TS classification tasks with 125[Some of the previous methods cannot handle missing observations, so three of the 128 datasets are omitted.] UCR archive datasets <cit.> for univariate TS and 29[One of the 30 datasets is omitted for a fair comparison with some of the previous methods.]
UEA archive datasets <cit.> for multivariate TS, respectively. Specifically, we apply SoftCLT to TS2Vec <cit.>, which has demonstrated SOTA performance on the above datasets. As baseline methods, we consider DTW-D <cit.>, TNC <cit.>, TST <cit.>, TS-TCC <cit.>, T-Loss <cit.>, and TS2Vec <cit.>. The experimental protocol follows that of T-Loss and TS2Vec, where an SVM classifier with the RBF kernel is trained on top of the instance-level representations obtained by max-pooling the representations of all timestamps. Table <ref> and the critical difference (CD) diagram based on the Wilcoxon-Holm method <cit.> shown in Figure <ref> demonstrate that the proposed method improves SOTA performance by a significant margin on both archives in terms of accuracy and rank. In Figure <ref>, the best and second-best results for each dataset are in red and blue, respectively. We also connect methods with a bold line if their difference is not statistically significant in terms of the average rank at a confidence level of 95%, which shows that the performance gain by the proposed method is significant.

§.§ Semi-Supervised Classification

We conduct experiments on semi-supervised classification tasks by adopting SoftCLT in TS-TCC <cit.> and its extension CA-TCC <cit.>, which are methods that incorporate CL into self- and semi-supervised learning, respectively. As baseline methods, we consider SSL-ECG <cit.>, CPC <cit.>, SimCLR <cit.>, and TS-TCC <cit.> for self-supervised learning, and Mean-Teacher <cit.>, DivideMix <cit.>, SemiTime <cit.>, FixMatch <cit.>, and CA-TCC <cit.> for semi-supervised learning. Note that both TS-TCC and CA-TCC perform instance-wise and temporal contrasting; however, their temporal contrasting is achieved by predicting one view's future from another, which differs from the conventional contrastive loss with positive and negative pairs. Therefore, we adopt our soft temporal contrastive loss as an additional loss for both methods. For evaluation, we utilize the same experimental settings and datasets as CA-TCC, which includes eight datasets <cit.>, six of which are from the UCR archive. We consider two semi-supervised learning scenarios, following CA-TCC <cit.>: (1) self-supervised learning with unlabeled data followed by supervised fine-tuning with labeled data, and (2) semi-supervised learning with both labeled and unlabeled data. Table <ref> presents the experimental results of both methods in scenarios with 1% and 5% labeled data, showing that applying SoftCLT achieves the best overall performance across most of the datasets in both scenarios.

§.§ Transfer Learning

We conduct experiments on transfer learning for classification in the in-domain and cross-domain settings used in previous works <cit.>, by adopting our SoftCLT in TS-TCC and CA-TCC. As baseline methods, we consider TS-SD <cit.>, TS2Vec <cit.>, Mixing-Up <cit.>, CLOCS <cit.>, CoST <cit.>, LaST <cit.>, TF-C <cit.>, TS-TCC <cit.>, TST <cit.>, and SimMTM <cit.>. In in-domain transfer learning, the model is pretrained on SleepEEG <cit.> and fine-tuned on Epilepsy <cit.>, both of which are EEG datasets and hence considered to be in a similar domain.
In cross-domain transfer learning, which involves pretraining on one dataset and fine-tuning on different datasets, the model is pretrained on SleepEEG and fine-tuned on three datasets from different domains: FD-B <cit.>, Gesture <cit.>, and EMG <cit.>. We also perform transfer learning without adaptation under self- and semi-supervised settings, where the source and target datasets share the same set of classes but only 1% of the labels are available for the source dataset, and no further training on the target dataset is allowed. Specifically, models are trained on one of the four conditions (A, B, C, D) in the Fault Diagnosis (FD) datasets <cit.> and tested on another. Table <ref> shows the results of both in- and cross-domain transfer learning, and Table <ref> shows the results under both self- and semi-supervised settings with the FD datasets. Notably, SoftCLT applied to CA-TCC improves the average accuracy over the twelve transfer learning scenarios with the FD datasets by 10.68%.

§.§ Anomaly Detection

We conduct experiments on the univariate TS anomaly detection (AD) task by adopting SoftCLT in TS2Vec <cit.> under two different settings: the normal setting splits each dataset into two halves according to the time order and uses them for training and evaluation, respectively, while the cold-start setting pretrains models on the FordA dataset in the UCR archive and evaluates them on each target dataset. As baseline methods, we consider SPOT <cit.>, DSPOT <cit.>, DONUT <cit.>, and SR <cit.> for the normal setting, FFT <cit.>, Twitter-AD <cit.>, and Luminol <cit.> for the cold-start setting, and TS2Vec <cit.> for both. The anomaly score is computed as the L1 distance between the two representations encoded from masked and unmasked inputs, following TS2Vec. We evaluate the compared methods on the Yahoo <cit.> and KPI <cit.> datasets. We found that suppressing instance-wise CL leads to better AD performance on average, so we report TS2Vec and SoftCLT performance without instance-wise CL; more details can be found in Appendix <ref>. As shown in Table <ref>, SoftCLT outperforms the baselines in both settings in terms of the F1 score, precision, and recall. Specifically, SoftCLT applied to TS2Vec improves the F1 score by approximately 2% on both datasets under both the normal and cold-start settings.

§.§ Ablation Study

Effectiveness of SoftCLT. Table <ref> shows the effect of replacing the hard assignments of standard CL with our soft assignments. Applying soft assignments to instance-wise or temporal CL provides a performance gain, and applying them to both dimensions results in the best performance, improving the accuracy on the UCR and UEA datasets by 2.7% and 3.7%, respectively.

[Figure: Designs for soft temporal CL]

Design choices for soft temporal CL. Table <ref> compares different choices of the soft assignment w_T. Neighbor treats timestamps within a window around the reference point as positives and the others as negatives. Linear gives soft assignments that decrease linearly with the time difference from the reference point, where the most distant timestamp gets a value of zero. Gaussian gives soft assignments based on a Gaussian distribution centered at the reference point, with the standard deviation as a hyperparameter. Among them, the sigmoid in Eq. <ref> shows the best performance, as shown in Table <ref>.

Upper bound for soft instance-wise CL. In the soft instance-wise contrastive loss, α is introduced to avoid giving the same assignment to pairs of the same TS and pairs of different TS with a distance of zero; α=1 makes both cases receive the same assignment.
Table <ref> studies the effect of tuning α. Based on the results, α=0.5 is the best choice, i.e., the assignment for pairs of the same TS should be strictly larger than for other pairs, but not by much.

Distance metrics for soft instance-wise CL. Table <ref> compares different choices of the distance metric D in Eq. <ref>: cosine distance (COS), Euclidean distance (EUC), dynamic time warping (DTW), and time alignment measurement (TAM) <cit.> on the 128 UCR datasets, where the baseline is TS2Vec and either the hard or the best soft temporal CL is applied together. The results show that the improvement by soft instance-wise CL is robust to the choice of the distance metric. We use DTW throughout all other experiments because DTW is well studied, commonly used in the literature, and fast algorithms such as FastDTW are available.

§.§ Analysis

Comparison with soft CL methods in computer vision. While soft CL methods have been proposed in other domains, they compute soft assignments in the embedding space, because it is difficult to measure similarities in the data space, particularly in computer vision. However, we argue that similarities in the data space are indeed a strong source of self-supervision, leading to better representation learning. To confirm this, we compare SoftCLT with soft CL methods proposed in other domains that operate in the embedding space, NNCLR <cit.> and ASCL <cit.>, on the UCR datasets. For a fair comparison, we apply all compared methods to TS2Vec under the same setting. As shown in Table <ref>, unlike the proposed method, NNCLR and ASCL deteriorate the performance of TS2Vec, implying that similarities measured in the data space provide strong self-supervision, while similarities measured in the learnable embedding space might not be useful in some domains. To further investigate the failure modes of the previous methods, we categorize the datasets by average TS length with a threshold of 200 in Table <ref>, and observe that the previous methods fail to capture the similarities of long TS data.

Robustness to seasonality. An assumption behind the proposed soft temporal CL is that values at adjacent timestamps are similar, which may raise a concern that seasonality in TS might not be captured. To address this, we categorize the UCR datasets based on seasonality, determined by the ADF test <cit.> at a significance level of p=0.05. As shown in Table <ref>, the performance gain by SoftCLT is consistent regardless of seasonality. Our conjecture is that real-world TS usually do not exhibit perfect seasonality, as indicated by the ADF test results, so that SoftCLT can take advantage of the non-seasonal portions. Meanwhile, previous works have tried to decompose trend and seasonality in TS for representation learning <cit.>. However, this may not be realistic for TS that are neither simultaneously auto-regressive nor stationary <cit.>. In summary, we do not consider seasonality in TS directly, because it is not only challenging to extract but also unnecessary in practice: we can still achieve good performance without considering it.

Instance-wise relationships. To see whether instance-wise relationships are preserved in the encoder, we visualize the pairwise instance-wise distance matrices of representations on the InsectEPGRegularTrain dataset from the UCR archive <cit.>, extracted from each layer, where brighter colors indicate lower distances between instances.
The top and bottom panels of Figure <ref> show the changes in the pairwise distance matrices of representations as depth progresses when adopting hard and soft CL, respectively. The results indicate that SoftCLT preserves the relationships between TS instances throughout encoding, while standard hard CL fails to preserve them.

Temporal relationships. To assess the quality of the temporal relationships captured by SoftCLT, we apply t-SNE <cit.> to visualize the temporal representations, i.e., the representations of each timestamp in a single TS. Figure <ref> compares t-SNE plots of the representations learned with hard and soft CL over different training epochs, with the points getting darker as time progresses. While hard CL finds only coarse-grained neighborhood relationships and fails to distinguish late timestamps shown in dark red, soft CL finds more fine-grained relationships.

§ CONCLUSION

In this paper, we present a soft contrastive learning framework for time series. In contrast to previous methods that give hard assignments to sample pairs, our approach gives soft assignments based on instance-wise and temporal relationships in the data space. We demonstrate the effectiveness of our method in a range of tasks, leading to significant improvements in performance. We hope our work highlights the effectiveness of self-supervision from the data space and motivates future work on contrastive representation learning in various domains to take it into account.

§ ETHICS STATEMENT

The proposed soft contrastive learning algorithm for time series has the potential to make a significant impact on the field of representation learning for time series data. The ability to apply this algorithm to various tasks and address the general problem of time series representation learning is promising. In particular, the algorithm can be applied to transfer learning, which may be useful in scenarios with small datasets for downstream tasks. Furthermore, we expect that the idea of utilizing self-supervision from the data space for contrastive representation learning will motivate future works in various domains. However, as with any algorithm, there are ethical concerns to be considered. One potential concern is that the algorithm may perpetuate biases present in the datasets used for pretraining. For example, if the pretraining dataset is imbalanced with respect to certain demographic attributes, this bias may be transferred to fine-tuning, potentially leading to biased predictions. It is essential to evaluate and address potential biases in the pretraining dataset before using the algorithm in real-world scenarios. To ensure responsible use of the algorithm, we will make the datasets and code publicly available. Public availability of datasets and code allows for transparency and reproducibility, enabling other researchers to evaluate and address potential biases and misuse.

§ DATASET DESCRIPTION

§.§ Classification

For time series classification, we use the UCR archive <cit.> and the UEA archive <cit.>. The UCR archive contains 128 univariate datasets, while the UEA archive contains 30 multivariate datasets. Among them, some datasets cannot be handled by T-Loss <cit.>, TS-TCC <cit.>, and TNC <cit.> due to missing observations, namely DodgerLoopDay, DodgerLoopGame, and DodgerLoopWeekend. Additionally, there is no reported result for DTW-D <cit.> on the InsectWingbeat dataset in the UEA archive.
Hence, the comparison in the main paper is conducted using the remaining 125 UCR datasets and 29 UEA datasets. However, TS2Vec works well on all UCR and UEA datasets, so we experiment with all 128 UCR datasets and 30 UEA datasets in the ablation studies of our method on top of TS2Vec.

§.§ Semi-supervised Classification

Table <ref> summarizes the statistical information of the eight datasets <cit.> used in semi-supervised classification, including the number of training and testing samples, the data length, the number of sensor channels, and the number of classes.

§.§ Transfer Learning

We evaluate our approach on various datasets covering a wide range of application scenarios, including neurological healthcare, human activity recognition, mechanical fault detection, and physical status monitoring. Table <ref> describes the datasets for in-domain and cross-domain transfer learning. The Fault Diagnosis (FD) datasets were used for transfer learning under self- and semi-supervised settings. The data statistics are described below.

(1) The SleepEEG <cit.> dataset contains EEG recordings of 153 whole-night sleep sessions from 82 healthy individuals. We segmented the EEG signals using a non-overlapping approach, following the same preprocessing method as (Zhang et al., 2022), to obtain 371,055 univariate brainwaves, each sampled at 100 Hz and categorized into one of five sleep stages: Wake, Non-Rapid Eye Movement (3 sub-states), and Rapid Eye Movement. When using the SleepEEG dataset as a source dataset in the transfer learning task, we use cosine similarity instead of DTW due to the properties of EEG datasets <cit.>.

(2) The Epilepsy <cit.> dataset monitors brain activity using a single-channel EEG sensor on 500 subjects, with each subject recorded for 23.6 seconds. The dataset is sampled at 178 Hz and contains 11,500 samples. We followed the same preprocessing method as (Zhang et al., 2022) and classified the first four classes (eyes open, eyes closed, EEG measured in the healthy brain region, and EEG measured in the tumor region) of each sample as positive, while the remaining class (whether the subject has a seizure episode) was classified as negative.

(3) The FD-B <cit.> dataset is collected from electromechanical drive systems and monitors the condition of rolling bearings to detect failures based on monitored conditions such as speed, load torque, and radial force. It consists of 13,640 samples, each recorded at 64 kHz and categorized into three classes: undamaged, inner damaged, and outer damaged.

(4) The Gesture <cit.> dataset includes data on eight hand gestures based on hand movement paths recorded by an accelerometer. The eight gestures are hand swiping left, right, up, and down, hand waving in a counterclockwise or clockwise circle, hand waving in a square, and waving a right arrow. The dataset contains 440 samples with balanced classification labels across the eight gesture categories.

(5) The EMG <cit.> dataset consists of 163 single-channel EMG recordings from the tibialis anterior muscle of three volunteers: one healthy, one suffering from neuropathy, and one suffering from myopathy. Each sample is associated with one of three classes, with each class representing a different patient. The dataset is sampled at 4 kHz.

(6) The FD <cit.> dataset was obtained by monitoring the sensor readings of a bearing machine operated under four distinct working conditions. Each working condition can be regarded as a separate domain, since each exhibits unique characteristics, such as variations in rotational speed and load torque.
Within each domain, there are three classes: two fault classes (inner fault and outer fault) and one healthy class. The FD dataset has 8,184 training samples, 2,728 test samples, a data length of 5,120, one channel, and three classes. Our main goal is to use this dataset for transferability experiments under both self- and semi-supervised settings and to demonstrate the efficiency of our approach in transfer learning situations.

§.§ Anomaly Detection

We employed Yahoo <cit.> and KPI <cit.> for the anomaly detection task. Yahoo is a benchmark dataset that contains 367 hourly sampled time series with annotated anomaly points. The dataset covers a wide range of anomaly types, including outliers and change points. KPI is a competition dataset released by the AIOPS Challenge in 2019. It contains minutely sampled real KPI curves from diverse internet companies.

§ BASELINE METHODS

Classification: The results of all baseline methods for the classification task (DTW-D <cit.>, TNC <cit.>, TST <cit.>, TS-TCC <cit.>, T-Loss <cit.>, and TS2Vec <cit.>) are reported in <cit.>.

* DTW-D <cit.>: DTW-D (Dynamic Time Warping-Delta) is a variant of DTW for semi-supervised learning settings.
* TNC <cit.>: TNC (Temporal Neighborhood Coding) defines the temporal neighborhood of a window using a normal distribution, and treats samples within and outside the neighborhood as positives and negatives, respectively.
* TST <cit.>: TST (Time Series Transformer) adapts the masked modeling paradigm to the time series domain, where the goal is to reconstruct the masked timestamps.
* TS-TCC <cit.>: TS-TCC (Time-Series representation learning framework via Temporal and Contextual Contrasting) proposes a new temporal contrastive loss by making the augmentations predict each other's future.
* T-Loss <cit.>: T-Loss is a triplet loss designed for time series. It samples a random subseries from a time series and treats subseries of it as positives and subseries of other time series as negatives.
* TS2Vec <cit.>: TS2Vec splits time series into several subseries and defines a hierarchical contrastive loss in both the instance-wise and temporal dimensions.

Semi-supervised classification: The results of all baseline methods for semi-supervised classification using self-supervised methods (SSL-ECG <cit.>, CPC <cit.>, SimCLR <cit.>, TS-TCC <cit.>) and semi-supervised methods (Mean-Teacher <cit.>, DivideMix <cit.>, SemiTime <cit.>, FixMatch <cit.>, CA-TCC <cit.>) are reported in <cit.>.

* SSL-ECG <cit.>: SSL-ECG (Self-supervised ECG Representation Learning for Emotion Recognition) proposes ECG-based emotion recognition using multi-task self-supervised learning.
* CPC <cit.>: CPC (Contrastive Predictive Coding) combines predicting future observations (predictive coding) with a probabilistic contrastive loss.
* SimCLR <cit.>: SimCLR proposes a simple framework for contrastive learning of visual representations, without requiring specialized architectures or a memory bank.
* Mean-Teacher <cit.>: Mean-Teacher is a semi-supervised learning algorithm that averages model weights instead of predictions.
* DivideMix <cit.>: DivideMix uses a mixture model to divide the training data into labeled clean samples and unlabeled noisy samples, and trains a model on both sets in a semi-supervised way.
* SemiTime <cit.>: SemiTime conducts supervised classification on labeled time series data and self-supervised prediction of temporal relations on unlabeled time series data.
It achieves this by sampling segments of past-future pairs from the same or different candidates and training the model to distinguish between positive and negative temporal relations between those segments.

* FixMatch <cit.>: FixMatch generates pseudo-labels using the model's predictions on weakly augmented unlabeled images and retains the pseudo-labels with high-confidence predictions. The model is then trained to predict the pseudo-label when fed a strongly augmented version of the same image.
* CA-TCC <cit.>: CA-TCC (Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification) is the extension of TS-TCC to semi-supervised settings and adopts the same contrastive loss as TS-TCC.

Transfer learning: The results of the baseline methods for transfer learning in both the in-domain and cross-domain settings (TS-SD <cit.>, TS2Vec <cit.>, Mixing-Up <cit.>, TF-C <cit.>, TS-TCC <cit.>, TST <cit.>, SimMTM <cit.>), using the SleepEEG dataset as the pre-training dataset, are reported in <cit.>, except for the results of TS-SD, which are reported in <cit.>. The results of the baseline methods for transfer learning in both self-supervised and semi-supervised settings (Supervised, TS-TCC <cit.>, CA-TCC <cit.>), using the FD dataset as the pre-training dataset, are reported in <cit.>.

* TS-SD <cit.>: TS-SD trains a model with a triplet similarity discrimination task. The objective is to determine which of two TS is more similar to a given TS, with DTW employed to define the similarity.
* Mixing-Up <cit.>: Mixing-Up generates new time series by mixing two time series and predicts the mixing weights.
* TF-C <cit.>: TF-C generates both time-based and frequency-based representations of time series and proposes a novel time-frequency consistency architecture.
* SimMTM <cit.>: SimMTM adapts the masked modeling paradigm to the time series domain, where the goal is to reconstruct the original time series from multiple masked series.

Anomaly detection: The results of all baseline methods for the anomaly detection task (SPOT <cit.>, DSPOT <cit.>, DONUT <cit.>, SR <cit.>, FFT <cit.>, Twitter-AD <cit.>, Luminol <cit.>, TS2Vec <cit.>) are reported in <cit.>.

* SPOT <cit.>: SPOT is a novel outlier detection approach for streaming univariate time series based on Extreme Value Theory; it does not rely on pre-set thresholds, assumes no distribution, and only requires a single parameter to control the number of false positives.
* DONUT <cit.>: DONUT is an unsupervised anomaly detection algorithm based on a variational autoencoder.
* SR <cit.>: SR is a time-series anomaly detection algorithm based on the Spectral Residual (SR) model and a Convolutional Neural Network (CNN), where the SR model is borrowed from visual saliency detection and combined with the CNN to improve performance.
* FFT <cit.>: FFT uses the fast Fourier transform to detect areas with high-frequency change.
* Twitter-AD <cit.>: Twitter-AD automatically detects long-term anomalies in cloud data by identifying anomalies in application and system metrics.
* Luminol <cit.>: Luminol is a Python library for time series data analysis that provides two main functionalities, anomaly detection and correlation, and can be utilized to investigate the potential causes of anomalies.

§ IMPLEMENTATION DETAILS

The hyperparameter settings we utilized can be found in Table <ref>. We made use of five hyperparameters: τ_I, τ_T, λ, batch size (bs), and learning rate (lr).
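For illustration, such a setting can be organized as a small configuration object; the values below are placeholders rather than the tuned settings reported in Table <ref>.

config = {
    "tau_I": 0.5,   # sharpness of the instance-wise soft assignments
    "tau_T": 1.0,   # base sharpness of the temporal soft assignments
    "lambda": 0.5,  # weight between instance-wise and temporal losses
    "bs": 8,        # batch size
    "lr": 1e-3,     # learning rate
}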
For semi-supervised classification and transfer learning, we set the weight decay to 3e-4, β_1 = 0.9, and β_2 = 0. The number of optimization iterations for the classification and anomaly detection tasks is set to 200 for datasets with a size less than 100,000, and to 600 otherwise. Additionally, the number of training epochs is set to 80 for semi-supervised classification and to 40 for transfer learning. Since we utilize the soft contrastive loss as an auxiliary loss for TS-TCC and CA-TCC, the methods used for the semi-supervised classification and transfer learning tasks, we introduce an additional hyperparameter λ_aux to control the contribution of the auxiliary loss to the final loss.

§ PROBABILISTIC INTERPRETATION OF SOFT CONTRASTIVE LOSSES

Inspired by the fact that the contrastive loss can be interpreted as the cross-entropy loss with virtual labels defined per batch, or equivalently, the KL divergence of the predicted softmax probability from the virtual label or hard assignment <cit.>, we define a softmax probability of the relative similarity out of all similarities considered when computing the loss, and interpret our soft contrastive losses as a weighted sum of cross-entropy losses. In this section, we show that the proposed contrastive losses can also be seen as the scaled KL divergence of the predicted softmax probabilities from the normalized soft assignments, where the scale is the sum of the soft assignments. When hard assignments are applied, the loss becomes the standard contrastive loss, often called InfoNCE <cit.>.

§.§ Probabilistic Interpretation of the Soft Instance-Wise Contrastive Loss

To simplify indexing, we extend the soft assignments to incorporate the positive sample and the anchor itself:

w^'_I(i,i^') = 0 if i = i^'; 1 if i ≠ i^' and i ≡ i^' (mod N); w_I(i, i^' mod N) otherwise;

and let q_I(i,i^') = w^'_I(i,i^')/Z_I be its normalization, where Z_I = ∑_j=1^2N w^'_I(i,j) is the partition function. Then, we can rewrite the proposed soft instance-wise contrastive loss as follows:

ℓ_I^(i, t) = - log p_I((i,i+N),t) - ∑_j=1, j∉{i, i+N}^2N w_I(i, j mod N) ·log p_I((i,j),t)
= - ∑_j=1^2N w^'_I(i,j) ·log p_I((i,j),t)
= - Z_I·∑_j=1^2N ( w^'_I(i,j)/Z_I ) ·log p_I((i,j),t)
= Z_I·∑_j=1^2N q_I(i,j) ·log ( q_I(i,j)/p_I((i,j),t) ) - Z_I·∑_j=1^2N q_I(i,j) ·log q_I(i,j),

where the last term is constant with respect to the model. Let Q_I and P_I be the probability distributions of q_I(i,j) and p_I((i,j),t), respectively. Then, we can rewrite the above loss as:

ℓ_I^(i, t) = Z_I· KL ( Q_I || P_I ) + const,

which is the scaled KL divergence of the predicted softmax probability from the normalized soft assignments.

§.§ Probabilistic Interpretation of the Soft Temporal Contrastive Loss

To simplify indexing, we extend the soft assignments to incorporate the positive sample and the anchor itself:

w^'_T(t,t^') = 0 if t = t^'; 1 if t ≠ t^' and t ≡ t^' (mod T); w_T(t, t^' mod T) otherwise;

and let q_T(t,t^') = w^'_T(t,t^')/Z_T be its normalization, where Z_T = ∑_s=1^2T w^'_T(t,s) is the partition function. Then, we can rewrite the proposed soft temporal contrastive loss as follows:

ℓ_T^(i, t) = - log p_T(i,(t,t+T)) - ∑_s=1, s∉{t, t+T}^2T w_T(t, s mod T) ·log p_T(i,(t,s))
= - ∑_s=1^2T w^'_T(t,s) ·log p_T(i,(t,s))
= - Z_T·∑_s=1^2T ( w^'_T(t,s)/Z_T ) ·log p_T(i,(t,s))
= Z_T·∑_s=1^2T q_T(t,s) ·log ( q_T(t,s)/p_T(i,(t,s)) ) - Z_T·∑_s=1^2T q_T(t,s) ·log q_T(t,s),

where the last term is constant with respect to the model. Let Q_T and P_T be the probability distributions of q_T(t,s) and p_T(i,(t,s)), respectively.
Then, we can rewrite the above loss as:

ℓ_T^(i, t) = Z_T· KL ( Q_T || P_T ) + const,

which is the scaled KL divergence of the predicted softmax probability from the normalized soft assignments. This addresses a possible concern that the targets are fixed while the predicted softmax probabilities are relative to the samples in the batch: the formulation with fixed targets is proportional to the formulation with relative targets, and they differ only in optimization speed through the scales Z_I and Z_T.

§ HIERARCHICAL SOFT TEMPORAL CONTRASTIVE LOSS

[Table: Effect of hierarchical τ_T]
Sharpness | Avg. Acc. (%)
τ_T       | 83.3
m^k·τ_T   | 83.7

In our approach, we adopt the hierarchical contrastive loss proposed in TS2Vec <cit.>, where we apply max-pooling to the representations along the temporal axis and contrastive learning is performed at each level. However, as max-pooling proceeds, semantic similarities between adjacent time steps decrease, so the sharpness needs to be adjusted based on the hierarchy depth and kernel size. To address this, we increase the sharpness of the soft temporal contrastive loss as the depth of the network increases, thereby reflecting the hierarchy of the time series. That is, we use m^k·τ_T instead of τ_T as the sharpness value in the soft temporal contrastive loss, where m is the kernel size of the pooling layers and k is the depth. For all datasets, we set m to 2, while the value of k depends on the length of each specific dataset. We conducted an ablation study to assess the effect of hierarchical sharpness, comparing the performance of hierarchical sharpness (m^k·τ_T) against constant sharpness (τ_T) on the 128 datasets in the UCR archive <cit.>. To solely observe the effect of the hierarchical temporal contrastive loss, we employ the original hard instance-wise contrastive loss for this experiment. The results presented in Table <ref> demonstrate that increasing τ_T as the depth of the network increases leads to improved performance.

§ DESIGN FOR INSTANCE-WISE CONTRASTIVE LOSS

[Table: Design for instance-wise CL]
Method     | Avg. Acc. (%)
w/o kernel | 79.1
Gaussian   | 82.6
Laplacian  | 83.1
Sigmoid    | 83.9

In this study, we explore different options for the soft assignments used in the soft instance-wise contrastive loss: without a kernel (w/o kernel), the Laplacian kernel, and the Gaussian kernel. For w/o kernel, we use w_I(i,j) = 1 - D(x_i,x_j), where D is a min-max normalized distance metric. For the Laplacian kernel, we use w_I(i,j) = exp(-D(x_i,x_j)/σ), and for the Gaussian kernel, we use w_I(i,j) = exp(-(D(x_i,x_j))^2 / 2σ^2), where σ is a hyperparameter. For all kernels, sample pairs with a lower distance tend to have soft assignments closer to one. We conduct an ablation study comparing the performance of the above kernels for modeling the soft assignments on the UCR archive datasets, and the results are presented in Table <ref>. For this experiment, we employ the original hard temporal contrastive loss to solely observe the effect of the functions used for the instance-wise contrastive loss.

§ CONTRASTIVE LEARNING FOR THE ANOMALY DETECTION TASK

Table <ref> indicates that employing only the temporal contrastive loss, while excluding the instance-wise contrastive loss, yields better performance in the majority of hard CL and soft CL settings for anomaly detection. This can be attributed to the nature of the anomaly detection task, which involves detecting anomalies within a time series and is less concerned with other time series.
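Returning to the instance-wise design study above, the four candidate soft-assignment functions can be summarized in a short sketch; `dist` is assumed to be a min-max-normalized pairwise distance matrix, and the hyperparameter defaults are illustrative rather than the tuned values.

import torch

def instance_soft_assignment(dist, kind="sigmoid", tau_I=0.5, alpha=0.5, sigma=1.0):
    if kind == "wo_kernel":    # w_I(i, j) = 1 - D(x_i, x_j)
        return 1.0 - dist
    if kind == "laplacian":    # exp(-D / sigma)
        return torch.exp(-dist / sigma)
    if kind == "gaussian":     # exp(-D^2 / (2 * sigma^2))
        return torch.exp(-dist.pow(2) / (2.0 * sigma ** 2))
    if kind == "sigmoid":      # 2 * alpha * sigmoid(-tau_I * D), as in the main paper
        return 2.0 * alpha * torch.sigmoid(-tau_I * dist)
    raise ValueError(f"unknown kernel: {kind}")

All four are monotonically decreasing in the distance, so pairs that are close in the data space receive assignments near one, which matches the intuition stated above.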
§ TIME SERIES FORECASTING

The tasks mentioned in the main paper, except for anomaly detection, can be classified as high-level tasks, which require capturing instance-wise representations. High-level tasks generally perform better with CL methods than with masked modeling methods <cit.>. However, we can also perform low-level tasks such as time series forecasting when using encoder architectures that provide representations of each timestamp. For TS forecasting, we apply SoftCLT to both TS2Vec and CoST <cit.>. Capturing temporal information within time series is crucial for forecasting, so we use soft CL in two ways for TS2Vec: by adopting only the temporal contrastive loss and by using both the temporal and instance-wise contrastive losses. For the experiments, we use four datasets for TS2Vec (ETTh1, ETTh2, ETTm1, and Electricity <cit.>) and four datasets for CoST (ETTh1, ETTh2, ETTm1, and Weather <cit.>), under both univariate and multivariate settings. Table <ref> summarizes the statistical information of these five datasets. As demonstrated in Table <ref> and Table <ref>, our method yields performance gains over hard CL in both univariate and multivariate TS forecasting for both TS2Vec and CoST.

§ INSTANCE-WISE VISUALIZATIONS

Hard CL vs. soft CL. To assess the quality of the instance-wise relationships captured by SoftCLT, we apply t-SNE <cit.> to visualize the instance-wise representations, i.e., representations of whole time series obtained by max-pooling the representations of all timestamps, for both hard and soft CL. For this experiment, we apply our method to TS2Vec <cit.> on the UWaveGestureLibraryZ dataset from the UCR archive <cit.>. The results shown in Figure <ref> demonstrate that soft CL finds more fine-grained neighborhood relationships and distinguishes them better than hard CL.

Embedding space vs. input space. To assess the relationship between the shape of time series and their positions in the embedding space, we employ t-SNE <cit.> to embed instance-wise representations of time series from the InsectEPGRegularTrain dataset in the UCR archive <cit.>. Figure <ref> illustrates the results, with the left panel displaying the points in the embedding space and the right panel presenting line plots of the original TS. The colors of the points and lines are assigned based on their distances to their neighbors in the embedding space. From this figure, we observe that TS with the same color exhibit similar shapes, and as the points in the embedding space move towards the upper right, the line plots of the original TS shift towards the upper left. This demonstrates that our method effectively captures detailed neighborhood relationships while maintaining alignment between distances in the embedding space and in the original input space.

§ TRANSFER LEARNING UNDER SEMI-SUPERVISED SETTINGS

[Figure: TL results]

In this study, we perform transfer learning in semi-supervised settings using the SleepEEG <cit.> and EMG <cit.> datasets as the source and target datasets, respectively. Specifically, we apply our SoftCLT to TS-TCC under semi-supervised settings, where we perform fine-tuning using partially labeled datasets. Figure <ref> presents the results, which indicate that by using only 10% of the labeled data with the soft CL framework (red line), we achieve an accuracy of 92.69%, approximately 15% higher than the accuracy obtained with the hard CL framework (blue line) under fully supervised settings.
Furthermore, using only 50% of the labeled dataset allowed us to achieve 100% accuracy, whereas the state-of-the-art performance using fully labeled datasets is 97.56%.

§ EFFECT OF DISTANCE METRICS ON TIME SERIES WITH VARYING LENGTHS

We compare the average accuracy on the 128 UCR datasets, where 11 datasets contain TS of varying lengths and the other 117 datasets contain TS of the same length. As shown in Table <ref>, DTW and TAM, both of which can compare time series of different lengths via time warping, demonstrate better performance.

§ DESIGN CHOICES FOR SOFT TEMPORAL CONTRASTIVE LEARNING

[Figure: Design for soft temporal CL]

Various design choices can be considered for assigning soft labels in soft temporal contrastive learning. In this paper, we explore four different choices, all of which assign high values to adjacent timestamps. Figure <ref> illustrates these four choices. For Neighbor, Gaussian, and Sigmoid, we search for the optimal hyperparameter within the following ranges:

* Neighbor: a window around the reference point covering 10%, 30%, or 50% of the sequence length.
* Gaussian: standard deviation values of [0.5, 1.0, 1.5, 2.0, 2.5].
* Sigmoid: τ_T values of [0.5, 1.0, 1.5, 2.0, 2.5].

§ SOFT CONTRASTIVE LEARNING WITH NON-STATIONARY TIME SERIES

As our proposed soft instance-wise CL method generates labels by considering the distances between the original TS, global information from the entire TS is encapsulated within the representation of a single time step of that TS, which might enable the model to take account of non-stationarity, such as seasonality or distribution shifts, in the TS.

Time series with seasonality. Figure <ref> displays a single TS from the Adiac dataset in the UCR archive <cit.> and visualizations of its temporal representations obtained from TS2Vec (hard CL) and TS2Vec with our method applied (soft CL). Note that obvious seasonal patterns are observed in the left panel of the figure. Each point in the right panel represents the representation of a single timestamp. The figure indicates that while hard CL fails to capture the seasonal patterns in the TS (similar values are located closely, regardless of which phase they are in), our proposed soft CL grasps the global pattern, enabling it to capture the seasonal patterns (similar values with different seasonal phases are located differently).

Time series with distribution shift. Figure <ref> displays a single TS from the EMD dataset in the UCR archive <cit.> and visualizations of its temporal representations obtained from TS2Vec (hard CL) and TS2Vec with our method applied (soft CL). Note that distribution shifts exist in this data, and six different phases can be observed in the left panel of the figure. Each point in the right panel represents the representation of a single timestamp. The figure indicates that while hard CL fails to capture the sudden changes in the TS (all points are placed gradually, regardless of the sudden changes), our proposed soft CL can detect such distribution shifts (points before and after a change are clustered into groups).

§ APPLYING SOFT CONTRASTIVE LEARNING TO TNC

In this section, we apply our SoftCLT to temporal neighborhood coding (TNC) <cit.>, which employs temporal CL by leveraging the local smoothness of a signal's generative process to define neighborhoods, i.e., positive pairs, in time with stationary properties.
We apply our method to TNC by computing soft temporal assignments based on the difference between the centroids of two time windows. Note that TNC does not include instance-wise CL, so we only apply soft temporal CL. To evaluate the effectiveness of this application, we conduct experiments using two datasets: the simulation dataset constructed in TNC and HAR [<https://archive.ics.uci.edu/dataset/240/human+activity+recognition+using+smartphones>]. The results are shown in Table <ref>, demonstrating that SoftCLT applied to TNC improves performance on both datasets in terms of accuracy (Acc.) and AUPRC.
http://arxiv.org/abs/2312.16424v1
{ "authors": [ "Seunghan Lee", "Taeyoung Park", "Kibok Lee" ], "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "primary_category": "cs.LG", "published": "20231227061500", "title": "Soft Contrastive Learning for Time Series" }
From Text to Multimodal: A Comprehensive Survey of Adversarial Example Generation in Question Answering Systems

Gulsum Yigit^1,2 ([email protected]; these authors contributed equally to this work.)
Mehmet Fatih Amasyali^2 ([email protected]; these authors contributed equally to this work.)

^1 Department of Computer Engineering, Kadir Has University, Istanbul, Turkey
^2 Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey

Integrating adversarial machine learning with Question Answering (QA) systems has emerged as a critical area for understanding the vulnerabilities and robustness of these systems. This article aims to comprehensively review adversarial example-generation techniques in the QA field, including textual and multimodal contexts. We examine the techniques employed through systematic categorization, providing a comprehensive, structured review. Beginning with an overview of traditional QA models, we traverse adversarial example generation by exploring rule-based perturbations and advanced generative models. We then extend our research to include multimodal QA systems, analyze them across various methods, and examine generative models, seq2seq architectures, and hybrid methodologies. Our review also extends to defense strategies, adversarial datasets, and evaluation metrics, illustrating the comprehensive literature on adversarial QA. Finally, the paper considers the future landscape of adversarial question generation, highlighting potential research directions that can advance textual and multimodal QA systems in the context of adversarial challenges.

§ INTRODUCTION

In recent years, natural language processing (NLP) has experienced a remarkable transformation through revolutions in deep learning architectures and the availability of vast amounts of textual data. These advancements have led to the development of sophisticated question-answering (QA) systems that aim to bridge the gap between human language and machine understanding. State-of-the-art QA models, such as transformer-based architectures like BERT (Bidirectional Encoder Representations from Transformers) and its variants, have demonstrated remarkable performance improvements. These models utilize pre-trained language models to capture complex linguistic patterns and contextual dependencies, enabling them to generate accurate and relevant answers to user questions. Moreover, the integration of multimodal data, where textual, visual, or audio information is combined, has also expanded the capabilities of QA systems. Such multimodal QA models, leveraging textual, visual, or audio signals, have shown promising results in analyzing and generating questions. However, alongside these impressive improvements, a critical vulnerability has emerged in the form of adversarial examples. Initially presented in computer vision, adversarial examples are meticulously formulated inputs created to fool machine learning models <cit.>. These inputs are maliciously modified in ways that are almost imperceptible to humans but are effective in causing AI systems to produce inaccurate outcomes. Recent research has brought to light the utilization of adversarial examples in various scenarios. To illustrate, adversaries can create physical adversarial examples, thereby confusing autonomous vehicles by manipulating the appearance of a stop sign within a traffic sign recognition system <cit.>.
Additionally, malicious actors can generate adversarial commands aimed at sabotaging automatic speech recognition and voice-controllable systems <cit.>, including widely used platforms such as Apple Siri and Amazon Alexa. The influence of adversarial examples quickly attracted researchers' attention; defense mechanisms began to be developed, and the underlying causes of these vulnerabilities were investigated. With the increased interest in QA systems, research has shifted to innovative approaches, such as adversarial question generation models, to enhance the quality, diversity, and effectiveness of question generation across textual and multimodal contexts.

Adversarial examples in QA systems are maliciously formulated inputs created to fool the model into generating inaccurate answers. These adversarially designed inputs are constructed by applying subtle transformations to the original question, usually in a form that is invisible to humans but drastically modifies the model's generated answer. These adversarial perturbations can range from word substitutions to more complicated linguistic structures aimed at producing inaccurate or biased texts, images, or videos, challenging the robustness of textual and multimodal QA systems. The impact of adversarial examples on QA system performance can be critical, posing severe challenges to the reliability and security of these systems. Adversarial examples in QA systems can have several damaging effects:

* Decreased Accuracy: Adversarial examples can reduce the accuracy of QA systems. By exploiting vulnerabilities in the model's interpretation of language and reasoning, these malicious inputs can cause the model to generate inaccurate answers, leading to possible misinformation and a degraded user experience.
* Inconsistent Answers: Adversarial examples can also result in inconsistency. A QA system might provide different answers to the same question depending on the malicious perturbations applied.

Table <ref> compares our review with key existing works on question generation in QA systems. The studies span different years and topics, from question generation in the education domain and multiple-choice question generation to visual question generation. In this paper, by contrast, we aim to thoroughly investigate and review adversarial question generation, focusing on textual and multimodal QA systems, and to categorize the diverse techniques utilized in this field. By classifying the various approaches, we provide an explicit and structured interpretation of how adversarial examples manipulate these systems. Furthermore, we examine defense mechanisms that enhance robustness against adversarial attacks, adversarial datasets, and evaluation metrics. Lastly, we discuss potential future research directions in adversarial question generation for textual and multimodal QA systems. To our knowledge, this is the first work that examines adversarial example generation in both textual and multimodal QA.

The paper is organized as follows. Section <ref> first provides an overview of traditional QA systems. Subsequently, we explore the existing approaches for generating adversarial examples in QA systems, examining rule-based perturbations and the more sophisticated generative models that have gained prominence in recent research. Section <ref> analyzes the landscape of multimodal QA systems, beginning with an overview of the complex interactions between different modalities.
We then navigate the existing multimodal question generation approaches, including generative models, encoder-decoder (seq2seq) architectures, and hybrid methodologies. Section <ref> delves into the diverse defense mechanisms designed for QA systems. We review adversarial datasets for evaluating QA systems in Section <ref>. Section <ref> directs our attention toward evaluating adversarial attacks on QA systems. We discuss evaluation metrics that quantify the impact of adversarial perturbations on QA performance, addressing both textual and multimodal aspects. Finally, in Section <ref>, we discuss future research directions in adversarial QA.

§ ADVERSARIAL EXAMPLE GENERATION IN QUESTION-ANSWERING SYSTEMS

§.§ Overview of Question Answering Systems

QA is a complex and challenging problem within NLP, encompassing tasks like comprehending human language, generating responses, and representing information about the world. The primary aim is to automatically provide accurate answers to questions in natural language by extracting the most relevant information. Various QA systems have attempted to respond to user queries through different approaches. These implementations have covered a wide range of domains, databases, question formats, and answer structures <cit.>. Deep learning architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are broadly used in QA systems to capture complicated patterns within textual data and comprehend complex relationships between words and phrases <cit.>.

Seq2seq models, a popular architecture in NLP, enable the transformation of input sequences (questions) into output sequences (answers). These models have significantly improved the generation of coherent, contextually appropriate responses <cit.>. Recent QA systems capitalize on advanced techniques that extend beyond those mentioned. Attention mechanisms, a core component of modern neural networks, enable systems to focus on specific parts of the input data, enhancing their ability to identify crucial information in questions and contexts. Attention mechanisms allow QA systems to weigh the importance of different words in a question or context, leading to more accurate and contextually relevant answers <cit.>.

Ensemble learning further contributes to QA system robustness by combining predictions from multiple models. This technique enhances performance and reduces the risk of relying on a single model's biases or errors <cit.>. Transfer learning, on the other hand, involves training models on large datasets and then fine-tuning them on specific QA tasks, enabling them to learn from broad linguistic contexts and specialize in domain-specific questions <cit.>. Furthermore, reinforcement learning techniques have been explored to enhance QA systems. Through trial and error, these models learn to optimize their responses by receiving rewards for generating accurate answers and penalties for incorrect ones. This approach enhances the learning process, enabling systems to iteratively improve their performance based on user feedback or predefined evaluation metrics <cit.>.

Recently, Large Language Models (LLMs) have gained significant interest. These models, pre-trained on vast textual corpora, acquire an understanding of linguistic structures and context. They are fine-tuned or prompted for QA tasks to effectively analyze and interpret questions before generating relevant responses <cit.>.
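As a brief illustration of how such pre-trained models are applied to extractive QA in practice, the following sketch uses the Hugging Face transformers library; the checkpoint name is a common public model chosen purely for illustration, not a system evaluated in this survey.

from transformers import pipeline

# Extractive QA: the model selects an answer span from the given context.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What does BERT stand for?",
    context="BERT (Bidirectional Encoder Representations from Transformers) "
            "is a transformer-based language model pre-trained on large text corpora.",
)
print(result["answer"], round(result["score"], 3))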
From attention mechanisms and seq2seq models to ensemble learning, transfer learning, LLMs, and reinforcement learning, these approaches collectively contribute to the performance of QA systems in comprehending and responding to user questions across diverse contexts.

§.§ Existing Approaches to Adversarial Example Generation in Question Answering Systems

This section comprehensively examines the various strategies used to craft adversarial examples for textual QA systems. By exploring these approaches, we aim to shed light on the difficulties of generating adversarial examples that challenge the ability of QA systems to understand and respond to text-based queries.

§.§.§ Rule-based Perturbations

Rule-based perturbations involve systematically manipulating the structure, grammar, or wording of questions using predefined rules to generate adversarial examples. By applying syntactic or semantic modifications to the original question texts, deceptive variations can be constructed that expose potential vulnerabilities in QA systems. These perturbations are vital for assessing the robustness and reliability of QA systems. Figure <ref> visually demonstrates how rule-based perturbations are applied to an original question to create an adversarial input. This technique applies predefined rules or modifications to the original question. The resulting perturbations might not be immediately apparent to humans but can mislead or confuse the model. Once the adversarial input is crafted, it is given to the model, which processes the adversarial question just like any other input. However, the generated output is incorrect due to the strategically introduced perturbations that exploit the model's weaknesses.

In <cit.>, two methods for generating adversarial examples for Math Word Problem (MWP) solvers are introduced. The first technique, "Question Reordering", moves the question part of the problem to the beginning of the problem text. The second technique, "Sentence Paraphrasing", rephrases each sentence in the problem while preserving both the semantic meaning and the numeric data.

In <cit.>, Wang et al. proposed a new adversarial example generation algorithm called "AddSentDiverse" that significantly increases the variance within adversarial training data. The "AddSentDiverse" algorithm is a modified version of "AddSent" <cit.>. The "AddSent" algorithm involves semantic modifications to the question using antonyms and named-entity swapping. It then generates a fake answer matching the style of the actual answer and integrates it with the modified question to create a distractor. On top of this, the "AddSentDiverse" algorithm includes two modifications: randomizing the distractor placement and diversifying the set of fake answers.

The study in <cit.> involves training models on artificially generated adversarial examples. The misleading texts in these adversarial examples are created using different methods, including AddSent and AddAny. AddAnyExtend, which extends AddAny, uses an extended vocabulary containing question words, high-frequency words, passage words, and random common words. AddAnsCtx generates misleading text by removing the answer words from answer sentences.

Rosenthal et al. <cit.> extended the AddSentDiverse algorithm <cit.>, which creates adversarial distractors by transforming a question into a statement using predefined rules. The proposed attack generates statements that are semantically similar to the question but clearly wrong to humans.
The constructed sentences are intended to be grammatically correct, although this is not strictly enforced. The authors developed four attack strategies that generate adversarial statements to confuse the QA system. Additionally, they investigated generating adversarial statements by translating them into the six other languages in the dataset, which the authors note is suitable for a comprehensive assessment of model robustness across different languages.

Cao et al. proposed the Twin Answer Sentences Attack (TASA), an automatic black-box adversarial attack that changes the context without affecting its fluency or the correct answer <cit.>. TASA identifies the relevant answer sentence in the context and creates two modified sentences that exploit biases in QA models. One sentence preserves the meaning of the correct answer but substitutes shared keywords with synonyms, forming a perturbed answer sentence that reduces the model's focus on the correct answer. The other sentence maintains the keywords and syntax but changes the associated entities, creating a distracting answer sentence that misleads the model towards an incorrect answer with irrelevant entities.

Xue et al. introduced an approach called Dependency Parse-based Adversarial Examples Generation (DPAEG) <cit.>. DPAEG first uses a new dependency-parsing-based algorithm to extract keywords from a given question. Then, based on the extracted keywords, the algorithm generates adversarial words, including misspellings and words similar to the keywords. These adversarial words are used to form a series of adversarial questions. The new questions are similar to the original questions and do not affect human understanding.

The authors in <cit.> used adversarial examples to identify deficiencies in language models and understand how they work. They kept the question the same but modified the context information. Various attacks were implemented, such as sentence reordering, where the sentences of the paragraph are reordered while preserving its semantic meaning; splitting key sentence attacks, where the critical sentence that is important to the answer is split into two or more sentences; and garbage concatenation, where "xxxxx" (a garbage value) is added to the critical sentence of the paragraph.

The study in <cit.> targeted neural reasoning architectures. It determines metamorphic relations, which are logical relationships between a system's inputs and outputs. In QA systems, these relations are established between the original questions and their corresponding answers in a neural reasoning system. The perturbations are crafted to exploit vulnerabilities in the system's architecture, causing it to produce incorrect or inconsistent answers. The perturbed questions and their original answers form the adversarial examples.

The study in <cit.> introduced an approach for assessing statistical biases in Machine Reading Comprehension (MRC) models. This was done by expanding the answer options for each question with irrelevant options, randomly selected from the RACE dataset, that are unrelated to the current passage and differ from the original answer choices. Each option is then independently scored using a specific procedure: the model is prompted to assign likelihood scores to these irrelevant options as potential correct answers. If the model is biased, it might consistently give higher scores to specific irrelevant options, even when all options are irrelevant.
These frequently chosen irrelevant options are named "magnet options".

Sun et al. examined the vulnerability of BERT (Bidirectional Encoder Representations from Transformers) to misspellings and presented a strategy called "Adv-BERT" <cit.>. These perturbations are designed to cause BERT to generate incorrect or inconsistent results.

The "Input Reduction" technique is introduced by Feng et al. in <cit.>, which involves a step-by-step process of eliminating less significant words from a question. In other words, the method removes unimportant words according to their saliency scores while ensuring the model's output remains unchanged. Removing tokens from the question yields a less specific, unanswerable question, yet the model keeps its original answer due to a spurious correlation.

§.§.§ Generative models

Generative models have become useful tools for adversarial question generation in QA systems. This line of work leverages various techniques, including deep learning architectures, seq2seq models, Generative Adversarial Networks (GANs), and autoencoders. Seq2seq models support the generation of contextually relevant texts by mapping input sequences to output sequences, which makes them suitable for crafting adversarial questions. Deep learning architectures generate questions using complex neural networks that capture intricate patterns and semantics. GANs contain two components: a generator creates questions, and a discriminator evaluates question quality over time. Autoencoders help create adversarial questions by encoding input questions and decoding them with subtle perturbations. Collectively, these generative models contribute to developing more robust QA systems by progressively generating adversarial texts.

Zhu et al. presented SAGE (Semantically valid Adversarial GEnerator), a white-box attack model, meaning it has access to the gradients of the target systems, that generates adversarial questions at the sequence level <cit.>. The model is built upon the stochastic Wasserstein seq2seq model to generate fluent questions.

The work in <cit.> introduced a framework called T3 to create controlled adversarial attacks on sentiment analysis and QA tasks. The framework uses a tree-based autoencoder to transform the discrete text into a continuous representation. In addition, a novel tree-based decoder ensures that the generated text is syntactically correct and allows manipulation at both the sentence (T3(SENT)) and word (T3(WORD)) levels.

In the study by Gan et al., two different test sets are created, consisting of paraphrased SQuAD questions <cit.>. The first set contains paraphrased questions similar to the original ones. The second set consists of questions that aim to mislead the models and are paraphrased using contextual words and wrong-answer candidates. The authors followed an approach in which a transformer-based paraphrase model creates multiple paraphrased versions of a source question using a series of paraphrasing suggestions.

The study in <cit.> applies an adaptable adversarial framework named DQAA, using an efficient GAN-inspired cue finder to identify vulnerable words. DQAA consists of three essential components. First, a cue finder is responsible for choosing tokens to serve as vulnerable cues. Second, a text transformer adjusts sentences using the previously identified cues. Finally, a grammar fixer employs a masked language model to correct possible grammatical errors.

In <cit.>, Blohm et al.
In <cit.>, Blohm et al. prepared two white-box attacks to compare the robustness of CNN- and RNN-based models. They used the model's internal attention to identify the key sentence to which the model gave the most weight when choosing the correct answer. In the first attack, they replaced the most salient words with randomly selected words from a known vocabulary. In the second attack, they deleted the sentence that received the highest attention.

Table <ref> examines various adversarial question generation studies, covering the chosen dataset, the applied approach, original and adversarially generated examples, and the extent of multilingual support. For example, the work in <cit.> supports multilingual generation, while the others do not. The adversarial generation techniques are likewise varied: question reordering, paraphrasing, typos, random distractor placement, and so on.

Comparative Analysis of Adversarial Question Generation Systems – Dataset, Approach, Original and Adversarial Examples, and Multilingual Support

* <cit.> – Datasets: MaWPS, ASDIV-A. Approach: question reordering. Original: "Tim has 5 books. Mike has 7 books. How many books do they have together?" (Equation: X = 5+7). Adversarial: "How many books do they have together given that Tim has 5 books and Mike has 7 books?" (Equation: X = 5*7). Multilingual: no.
* <cit.> – Datasets: MaWPS, ASDIV-A. Approach: sentence paraphrasing. Original: "Tim has 5 books. Mike has 7 books. How many books do they have together?" (Equation: X = 5+7). Adversarial: "Tim has got 5 books. There are 7 books in Mike's possession. How many books do they have?" (Equation: X = 5*5). Multilingual: no.
* <cit.> – Datasets: SQuAD 1.1, NewsQA, NaturalQuestions, HotpotQA, TriviaQA. Approach: twin answer sentences. Original context: "... Tesla moved to Colorado Springs ... very thick and noisy. He investigated atmospheric electricity, observing lightning signals via his receivers. He stated that ..." Adversarial context: "... very thick and noisy. Tesla looked into atmospheric electrical energy, observing lightning signals via his receivers. He stated that ... Charlie investigated static electricity, observing noticeable phenomenon via his receivers." Multilingual: no.
* <cit.> – Datasets: WebQuestionsSP, CuratedTREC, WikiMovies. Approach: typo. Original: "Who painted Olympia?" Adversarial: "Who printed Olympia?" Multilingual: no.
* <cit.> – Dataset: SQuAD. Approach: paraphrasing. Context: "... televised since 1963 ... Starting with the 2009 special 'Planet of the Dead', the series was filmed in 1080i for HDTV ..." Original question: "In what year did Doctor Who begin being shown in HDTV?" Adversarial question: "Since what year has Doctor Who been televised in HDTV?" Multilingual: no.
* <cit.> – Dataset: SQuAD. Approach: random distractor placement. Original question: "Considering the passage above, what is the main cause of climate change?" Adversarial question: "Considering the passage above, which of the following options best explains the main cause of climate change?" A second example inserts a distractor sentence into the paragraph: for the question "Who ended the series in 1989?", the adversarial paragraph reads "... the series would return. Donald Trump ends a program on 1988." while the question remains unchanged. Multilingual: no.
* <cit.> – Dataset: MLQA. Approach: adding adversarial sentences. Original context: "The application is highly customizable and can be extended with macros written in BeanShell, Jython, ..." with the question "What is an example of a programming language used to write macros?" Adversarial context: "Homeostasis is an example of a programming language used to write Aeronautics. The application is highly ..." with the question unchanged. Multilingual: yes.

Table <ref> compares key aspects of the surveyed studies: the technique, the baseline model, the dataset used for experimentation, the chosen evaluation metric, and the baseline versus adversarial performance. The table clearly demonstrates that adversarial examples notably degrade the performance of baseline models across diverse datasets and evaluation metrics.

Comparative Analysis of Study Aspects and Performance Impact in the Presence of Adversarial Text Examples

* <cit.> (2021) – Rule-based perturbations; baseline GTS; Accuracy on MaWPS: 82.6 → 32.3; on ASDIV-A: 71.4 → 30.5.
* <cit.> (2019) – Generative models (autoencoders); baseline BERT on SQuAD: EM 81.2 → 29.03, F1 88.6 → 33.2; baseline BiDAF on SQuAD: EM 60.0 → 15.0, F1 70.6 → 17.06.
* <cit.> (2019) – Generative models (GAN); baseline BERT on SQuAD: EM 82.14 → 57.14, F1 89.31 → 63.18; DrQA: EM 71.43 → 39.29, F1 39.29 → 48.94; BiDAF: EM 75.0 → 30.36, F1 30.36 → 38.30.
* <cit.> (2018) – Rule-based perturbations; baseline BiDAF + Self-Attn + ELMo (BSAE) on SQuAD: Accuracy 84.65 → 42.45.
* <cit.> (2021) – Rule-based perturbations; baseline BERT; Accuracy on MLQA: 69.2 → 29.2; on SQuAD: 86 → 39.3.
* <cit.> (2022) – Rule-based perturbations; baseline BERT-base; EM on SQuAD v1.1: 80.91 → 40.06; NewsQA: 51.57 → 39.54; NaturalQuestions: 67.39 → 43.23; HotpotQA: 56.89 → 27.01; TriviaQA: 58.61 → 51.50.
* <cit.> (2020) – Rule-based perturbations; success rate against DrQA on WebQuestionsSP: 83.73; CuratedTREC: 78.49; WikiMovies: 77.26; against Google Assistant on WebQuestionsSP: 52.47; CuratedTREC: 45.34; WikiMovies: 46.33.
* <cit.> (2018) – Generative models; Accuracy on MovieQA: CNN 79.62 → 13.61; RNN-LSTM 83.14 → 23.22.
* <cit.> (2021) – Rule-based perturbations; Accuracy on RACE: BERT-base 61.4 → 38.1; BERT-large 68.1 → 52.4.

§ MULTIMODAL QUESTION-ANSWERING SYSTEMS

§.§ Overview Of Multimodal Question Answering Systems

Multimodal QA systems cover multiple modalities, such as text, images, videos, and audio. Each modality has its own challenges and requires specific techniques for effective integration. NLP techniques are used to understand text-based modalities and generate questions from them. Computer vision techniques process visual information and generate appropriate questions for image-based modalities. Other modalities, such as video and audio, pose additional challenges due to their temporal nature.
They may require procedures such as video summarization and audio transcription to compose the questions. Integrating different modalities introduces several challenges:

* Alignment of data across modalities: text is inherently sequential, while image, video, and audio data have spatial and temporal structure.
* Data heterogeneity, where each modality may differ in availability and quality.
* Handling large volumes of data from different sources.
* Assuring privacy and security while processing sensitive data from multiple sources, which is critical for personal, financial, and medical data.
* Addressing the semantic gap between modalities, i.e., effectively linking linguistic and perceptual aspects.

Together, these challenges make the integration of multiple modalities difficult. Figure <ref> illustrates an example of perturbations and subsequent noise addition on image and audio: through these alterations, the visual and auditory elements diverge markedly from their original representations.

§.§ Existing Approaches of Multimodal Question Generation

§.§.§ Generative Models

In <cit.>, Kai et al. presented a new approach for Video Question Generation (VQG) using double hints, realized in the DoubleHints Generative Adversarial Network (DH-GAN) model. The model comprises a question generator based on double hints and a discriminator that is aware of question-answer pairs. The generator and discriminator are trained jointly to improve the generation of high-quality questions by identifying important visual regions; the goal is to make the generated questions indistinguishable from the original questions according to the discriminator.

Su et al. proposed the Video Question-Answer Generation (VQAG) task in Video QA, which aims to generate question-answer pairs from videos <cit.>. They introduced the Generator-Pretester Network, which consists of two main parts: the Joint Question-Answer Generator (JQAG) and the Pretester (PT). JQAG generates a question and its corresponding answer, while PT verifies the generated question by trying to answer it; the pretested answers are then compared with the model's proposed answers and the ground truth.

In <cit.>, a variational autoencoder is used instead of adversarial networks because of its more stable training. The technique builds compact representations of a given question and the features of the corresponding image in a lower-dimensional latent space. At prediction time, given a new image, questions are generated by sampling from this latent space; the sampled representation is merged with the image's feature embedding to create diverse questions.

Akbar et al. proposed an approach to the QG task involving cross-modal reasoning over text and images <cit.>. They presented a Multimodal Adaptation Gate (MAG) attached to a pre-trained BERT model; the input sequence is formatted to match BERT's input format, and the model is then trained on a multimodal QA dataset.

Huang et al. introduced Evolutionary Adversarial Attention Networks (EAANs) for robust multimodal representation learning <cit.>. EAANs use a dual visual-textual attention model to connect image and text effectively, and employ Siamese similarity learning to improve attention-weight learning and representation. Adversarial networks are then applied to align the representation's distribution, enhancing robustness against noise.
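The adversarial alignment step in EAAN-style training can be pictured as a small two-player loop. The following PyTorch sketch is schematic only: the encoder and critic shapes, the noise model, and the hyperparameters are illustrative assumptions, not the architecture of the cited work.

    import torch
    import torch.nn as nn

    # Stand-ins for a visual-textual attention encoder and an adversarial critic.
    encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
    critic = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    enc_opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    cri_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(100):
        fused = torch.randn(32, 512)                  # stand-in multimodal features
        noisy = fused + 0.1 * torch.randn_like(fused)

        # 1) The critic learns to separate clean from noisy embeddings.
        cri_opt.zero_grad()
        d_loss = bce(critic(encoder(fused).detach()), torch.ones(32, 1)) + \
                 bce(critic(encoder(noisy).detach()), torch.zeros(32, 1))
        d_loss.backward()
        cri_opt.step()

        # 2) The encoder is updated so noisy embeddings become indistinguishable.
        enc_opt.zero_grad()
        g_loss = bce(critic(encoder(noisy)), torch.ones(32, 1))
        g_loss.backward()
        enc_opt.step()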
§.§.§ Encoder-decoder models (seq2seq)

In <cit.>, VX2TEXT is proposed to generate text from multimodal inputs such as video, text, speech, or audio. VX2TEXT leverages transformer networks, converting each modality into language embeddings with a learnable tokenizer, and a differentiable tokenization scheme facilitates end-to-end training. The framework performs multimodal fusion in the language domain through an encoder-decoder architecture and produces open-ended text tailored to the given task.

Xie et al. focused on generating questions conditioned on a given image and a target answer, called conditional VQG <cit.>. Their model, MOAG, consists of four modules. The Feature Extractor obtains object features from the image together with answer features. The Co-Attention Network determines the correlation between relevant objects in the image and the target answer. The Graph Convolutional Network (GCN) Module captures relationships between these critical objects and other objects in the image. Finally, the Decoder Module uses a standard LSTM decoder to generate a meaningful question relevant to both the image and the answer.

In <cit.>, Li et al. treated VQG as a dual task of VQA to strengthen the relationships between questions and answers in images. They propose a dual training scheme called iQAN, which allows a single model to be trained simultaneously on the VQA and VQG tasks. It uses parameter sharing and a duality regularizer to explicitly exploit dependencies between questions and answers during training.

Mostafazadeh et al. introduced a novel approach to QG using images as context <cit.>. They pass image features through a layer of gated recurrent units (GRUs), creating a single question per image, and focus on generating questions that are not only grammatically correct but also natural and engaging.

Wang et al. introduced Video QG <cit.>, which generates answerable questions from a video clip and its simultaneous dialogs. The task requires several capabilities, including understanding dialogs, temporal relationships, and the interplay of vision and language, as well as the ability to compose meaningful questions. They presented a semantic-rich cross-modal self-attention (SRCMSA) network that integrates diverse features from multiple modalities. The network consists of two transformers as encoders and an LSTM as the decoder, and introduces Semantic-Rich Embedding (SRE), which injects object-level semantic meaning into visual information, together with a Cross-Modal Self-Attention (CMSA) encoder based on self-attention mechanisms.

§.§.§ Fast Gradient Sign Method (FGSM)

FGSM is a technique for generating adversarial examples by perturbing input data so that a neural network or model misclassifies it. It computes the gradient of the model's loss function with respect to the input and then perturbs the input in the direction of the gradient's sign, which maximizes the loss: x_adv = x + ε · sign(∇_x J(θ, x, y)). The perturbation is usually kept small to guarantee that the altered input stays visually or semantically similar to the original.
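A minimal FGSM sketch, under the assumption of a differentiable model and loss function (both placeholders here), is:

    import torch

    def fgsm(model, loss_fn, x, y, eps=0.03):
        """One-step FGSM: move x in the direction that increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

Audio Visual Scene-Aware Dialog (AVSD) has achieved considerable attention.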
Models for AVSD need to comprehend dynamic video scenes and contextual dialog in order to converse with humans and produce responses to the questions posed. Liu et al. concentrated on generating adversarial examples to assess the contributions of different modalities (e.g., video, caption, dialog) to a model's predictions on AVSD <cit.>. The authors utilized FGSM <cit.> to generate adversarial examples, applying perturbations ranging from 0.015 to 0.3, measured in the ℓ_1 norm, to the different modalities. The model is then tested on these perturbed features to understand the relative contribution of each modality to its predictions.

Kang et al. introduced a semi-supervised learning technique called Generative Self-Training (GST) for visually grounded dialog tasks <cit.>. GST leverages unlabeled web images to enhance performance. The study also examines the model's robustness under three adversarial attack strategies: an FGSM attack that alters visual inputs; a coreference attack that replaces words in the dialog history with synonyms using neural coreference resolution; and a random token attack that replaces dialog-history tokens with [MASK] tokens and then recovers them with masked language modeling.

§.§.§ Hybrid Approaches

A hybrid approach in adversarial question generation combines several techniques and strategies to create questions that trick or challenge QA systems, drawing on the strengths of different methods to generate more effective and powerful adversarial questions.

In <cit.>, Patel et al. presented a model architecture similar to image captioning, with an additional input representing image metadata. The model generates questions from a combination of image and text inputs, addressing the multimodal QG task. The image metadata includes text such as captions, search tags, and the image location. Popular pre-trained CNN models such as VGGNet, ResNet, MobileNet, and DenseNet encode the image into a vector representation. Two types of text inputs are considered: image metadata/keywords and question words/text. Text can be represented using word embeddings such as GloVe, sentence embeddings from pre-trained networks such as ELMo, or transformer networks such as BERT; an LSTM then encodes the text data. For decoding, the model uses a greedy decoding scheme to generate questions.

Vedd et al. presented an approach to visual question generation (VQG) that focuses on specific aspects of the input image <cit.>. Three variants are introduced: explicit, implicit, and variational implicit. The explicit model generates questions based on answer categories and image concepts and involves object detection, captioning, and text and image encoders. The implicit model employs object detection, a non-linear MLP, Gumbel-Softmax, and text and image encoders. The variational implicit model adds variational and generative encoders to the previous components.

Krishna et al. introduced a Visual Question Builder that maximizes the shared information between a generated question, an input image, and a particular category of answers <cit.>. The model is trained by placing the image and answer in a latent space called "z", optimizing the reconstruction to maximize mutual information; this latent space is also used in question generation and is trained with maximum-likelihood estimation (MLE).
A second latent space, called "t", is also introduced and trained via KL-divergence with "z". This second latent space enables conditioning the question on the answer category while generating answer-independent questions.

In <cit.>, Guo et al. presented single-turn and multi-turn (M-VQG) Video QG frameworks that integrate attention mechanisms to manage dialog history. The framework consists of four main modules: the Attention Module, the Cross-Trans Module, the Generation Module, and the Multi-Choice Module. In the Cross-Trans Module, the model captures question-aware video details and video-aware question details in each turn. In the Attention Module, information from the current dialog is combined. The question is produced by the Generation Module, while the answers are generated simultaneously through a reinforcement learning mechanism; the most appropriate question is then picked from the generated questions by the Multi-Choice Module.

Table <ref> analyzes multimodal QG systems, covering their supported input modalities, employed techniques, evaluation metrics, dataset sources, advantages, and challenges.

Comprehensive Analysis of Multimodal Question Generation Systems and Their Key Attributes

* <cit.> (2021) – Modalities: text, image. Technique: GAN. Metric: accuracy. Datasets: VQA2.0, COCO-QA. Advantage: high quality of questions. Challenge: applicability to diverse data.
* <cit.> (2021) – Modalities: video, text. Technique: cross-modal attention. Metric: accuracy. Datasets: ActivityNet-QA, TVQA. Advantage: enhanced video understanding. Challenge: data limitations.
* <cit.> (2017) – Modalities: text, image. Technique: variational autoencoders with LSTMs. Metrics: METEOR, BLEU, diversity. Datasets: MS COCO, VQA2.0. Advantages: diverse question generation, applicability to various domains. Challenge: structured reasoning.
* <cit.> (2023) – Modalities: text, image. Technique: BERT. Metrics: BLEU, ROUGE. Dataset: MMQA (MultiModalQA). Advantage: fine-tuning with diverse datasets. Challenge: modality integration.
* <cit.> (2021) – Modalities: text, image. Technique: adversarial attention networks. Metrics: Micro-F1, Macro-F1, mAP. Datasets: PASCAL, MIR, CLEF, NUS-WIDE. Advantage: improved handling of noise. Challenges: incorporating contextual information from various modalities, generalization to other modalities.
* <cit.> (2021) – Modalities: text, image. Technique: transformer networks. Metrics: BLEU-1/2/3/4, ROUGE-L, METEOR. Datasets: TVQA, AVSD, TVC. Advantage: transformer network utilization. Challenges: multimodal fusion complexity, generalization to other modalities.
* <cit.> (2018) – Modalities: text, image. Techniques: LSTM, transformers, attention. Metrics: Acc@1, Acc@5, BLEU. Datasets: CLEVR, VQA v2.0. Advantage: parameter sharing and regularization. Challenge: contains questions that are too hard to answer.
* <cit.> (2021) – Modalities: text, image. Techniques: graph convolutional network (GCN), co-attention network. Metrics: BLEU-1, BLEU-4, METEOR, ROUGE-L, CIDEr. Dataset: VQA v2.0. Advantage: multi-object awareness. Challenge: scalability.
* <cit.> (2016) – Modalities: text, image. Techniques: generative models, gated recurrent neural network. Metrics: METEOR, BLEU, δBLEU. Datasets: MS COCO, Flickr, VQA. Advantage: diverse datasets. Challenge: generalization to unseen concepts.
* <cit.> (2020) – Modalities: video, text. Technique: cross-modal self-attention network. Metric: BLEU-4. Dataset: TVQA. Advantage: enhanced video understanding. Challenge: unanswerable questions.
* <cit.> (2021) – Modalities: text, image. Technique: encoder-decoder. Metrics: BLEU, METEOR, ROUGE, CIDEr. Datasets: KB-VQA, FVQA, OK-VQA. Advantages: new dataset creation, integration of multimodal transformers. Challenge: computational resources and time.
* <cit.> (2021) – Modalities: text, image. Technique: variational and generative encoder-decoders. Metrics: BLEU, CIDEr, METEOR, ROUGE, MSJ. Dataset: VQA v2.0. Advantage: realistic and grammatically valid questions. Challenge: model robustness.
§.§ Existing Approaches of Adversarial Example Generation in Multimodal Question Answering

§.§.§ Generative models

Tang et al. focused on the problem of answering questions about images <cit.>. For a given triple of image, question, and answer, the authors use back-translation to create paraphrases of the questions and generate visual adversarial examples on the fly, producing semantically equivalent additional training triples. They use the Bottom-Up and Top-Down Attention (BUTD) model, which relies on region-specific image features extracted by a fine-tuned Faster R-CNN. These adversarial examples cover both visual and textual data and aim to drive the VQA model towards incorrect answers.

Sun et al. introduced an instance-level Trojan attack strategy that generates diverse Trojans across input samples and modalities <cit.>. Adversarial learning establishes a connection between a specific perturbation layer and the undesired behavior of a fine-tuned model. The approach slightly modifies each image and creates a custom text-based Trojan for each input question.

Liu et al. introduced DiffProtect, a technique that utilizes a diffusion autoencoder to craft perturbations with semantic meaning <cit.>. DiffProtect incorporates a semantic encoder and a conditional DDIM (Denoising Diffusion Implicit Model) acting as both a stochastic encoder and an image decoder. An input face image is encoded into a high-level semantic code z and a stochastic code x_T that captures subtle variations. The goal is to optimize an adversarial semantic code z_adv which, when used in the conditional DDIM decoding process, produces a protected image capable of misleading a facial recognition model, thereby preserving facial privacy.

§.§.§ Hybrid Approaches

Chaturvedi et al. introduced "Mimic and Fool", a task-agnostic adversarial attack technique <cit.>. The attack generates an adversarial image that imitates the features of the original image, causing the model to produce the same or a similar output. The underlying idea is that if two images are indistinguishable to the feature extractor, they will also be indistinguishable to any model built on those features; attacking the feature extractor by finding such indistinguishable images is therefore equivalent to misleading the model. Note that, unlike most attacks, "Mimic and Fool" aims to make the model predict the same outcome as for the original image. A modified version called "One Image Many Outputs" (OIMO) adds limited noise to a fixed natural image to generate more natural-looking adversarial examples.

In <cit.>, a targeted adversarial attack for Visual Question Answering (VQA) is introduced. The approach manipulates background pixels while keeping the rest of the image unchanged: objects are first recognized with a Faster R-CNN detector, and pixels not contained within any detected object box are treated as background for the adversarial attack.

In <cit.>, the Transferable Attentive Attack (TAA) adds perturbations to clean images based on highlighted regions and features. The approach extracts features from these significant regions and iteratively pushes the features of an anchor image away from the source class while pulling them closer to the target class using a triplet loss.
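The push-pull objective of TAA can be written as a standard triplet loss over features. The sketch below is schematic: the feature extractor and the precomputed source/target feature vectors are assumed components, not the cited implementation.

    import torch
    import torch.nn.functional as F

    def triplet_feature_attack(extract, image, src_feat, tgt_feat,
                               steps=50, lr=0.01, margin=1.0):
        """Push perturbed-image features toward the target class and away
        from the source class (TAA-style sketch)."""
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            feat = extract((image + delta).clamp(0, 1))
            # anchor = perturbed features, positive = target, negative = source
            loss = F.triplet_margin_loss(feat, tgt_feat, src_feat, margin=margin)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (image + delta.detach()).clamp(0, 1)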
Moreover, Xu et al. introduced an attack technique called "Fooling VQA" for VQA models <cit.>. The method uses the Adam optimizer to minimize a cross-entropy loss while adding perturbations to the entire image as noise.

Sharma et al. proposed "Attend and Attack", an approach for generating adversarial examples for VQA tasks <cit.>. It produces adversarial images that can mislead the VQA model while being nearly indistinguishable from the original image. The model comprises an "Attend" part and an "Attack" part: in the "Attend" part, the VQA model generates attention maps, which are then fed into the "Attack" part to produce perturbation maps. The key innovation is that attention maps are used as the input from which perturbation maps are generated; the perturbation is added to the initial image to induce misclassification.

Table <ref> explores various studies in the context of adversarial image analysis, showing the original and adversarial images used in each study; the adversarial images carry subtle modifications that make their representations distinct from the original content.

Table <ref> provides an overview of adversarial multimodal question generation studies, examining baseline models, supported modalities, adversarial objectives, attack methods, evaluation metrics, datasets, and limitations. As the table shows, adversarial attacks generally either alter the image while leaving the question unchanged or alter both the image and the question.

Comparative Analysis of Adversarial Multimodal Question Generation – Baseline Models, Modalities, Objectives, Attack Methods, Evaluation Metrics, Datasets, and Limitations

* <cit.> – Baseline: adversarial training. Modalities: image, text. Objective: alter image and question. Attack: Iterative Fast Gradient Sign Method (IFGSM, gradient-based) and a neural-network-based paraphrasing model. Metric: accuracy. Dataset: VQA v2.0. Limitation: limited comparative analysis.
* <cit.> – Baseline: End-to-End Neural Module Network (N2NMN). Modalities: image, text. Objective: alter image without changing the question. Attack: a task-agnostic adversarial attack. Metrics: PSNR, SSIM. Dataset: VQA v2.0. Limitations: need for exploration of alternative strategies; dependency on specific feature extractors.
* <cit.> – Baselines: N2NMN and the Memory, Attention and Composition (MAC) network. Modalities: image, text. Objective: alter image without changing the question. Attack: noise addition. Metric: success rate. Datasets: SHAPES, CLEVR, VQA v2.0. Limitation: scalability to large datasets.
* <cit.> – Baseline: Show, Ask, Attend and Tell. Modalities: image, text. Objective: alter image without changing the question. Attacks: MI-FGSM (MI), DI2-FGSM (DI), Activation Attack (AA), Transferable Attentive Attack (TAA). Metrics: ERROR and TSUC, TTR@N. Dataset: ImageNet. Limitations: single-model evaluation; computational efficiency.
* <cit.> – Baselines: MCB and N2NMN models. Modalities: image, text. Objective: alter image without changing the question. Attack: adversarial attention maps. Metric: success rate. Dataset: VQA dataset. Limitation: defensive strategies.
* <cit.> – Baseline: Show, Ask, Attend and Tell. Modalities: image, text. Objective: alter image without changing the question. Attack: reuse of attention maps from the VQA model. Metrics: Attack Effectiveness to Noise Ratio (ENR), ASUCR. Dataset: VQA dataset. Limitations: single-model evaluation; ENR metric validation.

§ DEFENSE MECHANISMS AGAINST ADVERSARIAL EXAMPLES
IN QUESTION ANSWERING SYSTEMS

This section investigates ways to strengthen QA systems against adversarial attacks. These defenses act as shields, ensuring that QA systems still give reliable answers even when faced with deceptive questions. We describe methods that help QA systems withstand such inputs, including using more diverse data, training against adversarial questions, and employing dedicated techniques to detect and filter them. Together, these methods help make QA systems more reliable and robust.

Creating adversarial examples strengthens the robustness of a model by exposing it to challenging and deceptive inputs. By integrating adversarial examples into the training phase, the model learns to identify and respond appropriately to such perturbations, improving its ability to handle unpredictable malicious data. This process, known as adversarial training, also allows the model to capture the data distribution more thoroughly.

Adversarial training integrates adversarial examples into the training process of QA models, aiming to improve robustness against potential attacks. By exposing the model to manipulated input data during training, the model learns to recognize and mitigate the impact of adversarial manipulation, which enables QA systems to better handle challenging and deceptive inputs.

Rao et al. employed GANs to produce clarification questions by evaluating the utility of augmenting a context with an answer <cit.>, aiming to improve the quality of generated questions by incorporating the potential value of the provided answers. Li et al. proposed an adversarial-learning-based model called ALMA for effective joint representation learning in VQA <cit.>. The model employs multimodal attention with Siamese similarity learning to create two types of embedding generators, one for question-image and another for question-answer pairs, and applies adversarial learning between these generators and an embedding discriminator.

Adversarial regularization is another widely used method. It adds a network designed to serve as an adversary, with the objective of letting the model learn an unbiased representation of the data (in this case, questions). Ramakrishnan et al. used a question-only model as an adversary to mitigate language biases in the primary model's question encoding <cit.>. In <cit.>, Grand et al. proposed a method for VQA systems that employs an adversary sub-network to predict answers solely from questions, countering bias by constraining the model.

Similarly, Gan et al. introduced VILLA, the first large-scale adversarial training approach for improving vision-and-language representation learning <cit.>. VILLA consists of two training stages: task-agnostic adversarial pre-training and task-specific adversarial fine-tuning. Instead of adding adversarial perturbations to image pixels and text tokens, VILLA performs adversarial training in the embedding space of each modality.
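A minimal rendering of the embedding-space idea, with all module names as assumptions (any encoder/head pair and task loss would do), is:

    import torch

    def embedding_space_adv_loss(encoder, head, loss_fn, embeds, labels, eps=1e-3):
        """Clean loss plus an FGSM-style adversarial loss computed on embeddings."""
        embeds = embeds.detach().requires_grad_(True)
        clean_loss = loss_fn(head(encoder(embeds)), labels)
        grad, = torch.autograd.grad(clean_loss, embeds)
        adv_embeds = embeds + eps * grad.sign()   # perturb embeddings, not tokens or pixels
        adv_loss = loss_fn(head(encoder(adv_embeds)), labels)
        return clean_loss + adv_loss              # train on both terms

This sketch keeps only the core perturbation step; the full VILLA method additionally includes regularization between clean and perturbed predictions.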
Moreover, data augmentation and diversity are essential for improving the robustness and reliability of QA models <cit.>. Augmentation can be applied to the training dataset of QA models to generate additional examples resembling possible adversarial inputs. By being exposed to these adversarial inputs during training, the model learns to handle the variations that malicious attackers may employ. Diversity in the training dataset provides the QA model with many question types and subjects, preparing it for adversarial inputs that attempt to exploit distinctive patterns and making it more proficient at capturing the meaning of questions and answers. Together, data augmentation and diversity strengthen QA models against a variety of adversarial inputs, contributing to a more robust defense against adversarial question generation.

In <cit.>, adversarially generated examples for both images and questions are utilized as additional data rather than as direct modifications of the originals. These augmented instances preserve the visual features of the image and the semantic meaning of the question, maintaining the integrity of the relationship between image, question, and answer.

Detecting and filtering adversarial examples is another critical factor in improving the robustness and reliability of QA systems; techniques that identify and reject adversarial inputs in real time are vital for preserving their integrity. Such a defense works like a filter that inspects incoming questions to decide whether they are benign or attempts to trick the system. If a question appears suspicious, the filter flags it as potentially harmful, and the QA system can recheck it before responding, ensuring that answers remain correct and reliable.

In addition, defensive distillation is a mechanism used to improve the robustness of models against adversarial attacks. In this training method, the QA system learns from a "teacher" model that carries refined insights into adversarial tactics; through this mentorship, the QA system gains a better understanding of how to recognize and deflect deceptive inputs. The technique enhances the model's robustness and minimizes the impact of adversarial manipulation.

§ ADVERSARIAL DATASETS FOR QUESTION ANSWERING SYSTEMS

Several datasets have been proposed for the adversarial evaluation of QA systems.

* Adversarial SQuAD: an extension of the Stanford Question Answering Dataset (SQuAD) <cit.> containing altered versions of the original SQuAD examples that challenge the robustness of QA models.
* QA Testbed (Quizbowl): a dataset used in the Quizbowl competition <cit.>, providing questions on a wide range of topics that require in-depth knowledge, including factual information, literature, science, history, and more.
* Adversarial VQA (AdVQA): created to evaluate the robustness of VQA models <cit.>; it contains adversarial examples where minor modifications to the question or image can lead to incorrect answers.
* HellaSwag: introduces questions that require commonsense reasoning beyond statistical patterns <cit.>, challenging models to predict the ending of a sentence based on commonsense understanding.
* DROP: a dataset emphasizing complex reasoning and multi-step comprehension <cit.>; its questions span multiple passages and require models to merge information to reach the correct answer.
* SWAG: focuses on grounded, commonsense reasoning <cit.>.
It presents situations with multiple alternatives, requiring models to select the most plausible continuation based on commonsense understanding.
* AmbigQA (Answering Ambiguous Open-domain Questions): provides questions with ambiguous phrasing that admit multiple interpretations and answers <cit.>, evaluating models' ability to handle ambiguity.
* GSM8K: intended for tasks involving multi-step mathematical reasoning, providing a diverse set of math problems that challenge comprehension and problem-solving skills <cit.>.
* SVAMP: a challenge set focused on elementary-level Math Word Problems (MWPs) <cit.>.
* AddSent (AS): a set of grammatical adversarial tests in which misleading texts are generated from questions using a combination of rules and crowdsourcing <cit.>.
* AddAny (AA): an ungrammatical adversarial test collection in which misleading texts are produced automatically from question words and common words <cit.>.
* AddAnyExtend (AAE): an expansion of AddAny with an extended vocabulary encompassing question words, high-frequency words, passage words, and random common words <cit.>.
* AddAnsCtx (AAC): a test set that employs answer context to create misleading texts <cit.>; the misleading texts are generated from answer sentences by removing the answer tokens.
* AddSentDiverse (ASD): inspired by AddSent, this approach enriches the SQuAD training dataset with specifically crafted adversarial examples <cit.>.
* Qanta: a test set of approximately 1,000 adversarially written questions <cit.>.

These datasets provide varied challenges for QA, including text-based and multimodal QA, multi-step reasoning, commonsense understanding, and robustness to adversarial examples, and they support the creation and evaluation of more capable and robust QA models. Table <ref> compares the datasets by supported modalities, number of examples, focus, characteristics, and data collection method. The AddSentDiverse and Adversarial SQuAD datasets enrich SQuAD training data with adversarial examples created through challenging modifications of the original examples. AdVQA is an adversarial VQA dataset collected with a human-and-model-in-the-loop procedure. Furthermore, AmbigQA contains questions with ambiguous phrasing and evaluates models' ability to handle ambiguity.
Comparison of Adversarial Datasets for Question Answering – Modalities, Examples, Focus, Characteristics, and Collection Methods

* Adversarial SQuAD – Modality: text. Examples: AddSent 3,560; AddOneSent 1,787. Focus: text QA. Characteristics: challenging modifications of original SQuAD examples. Collection: modification of SQuAD examples.
* Quizbowl (QA Testbed) – Modality: text. Examples: 110,000 Quizbowl questions. Focus: general knowledge. Characteristics: wide range of topics; in-depth factual knowledge required. Collection: trivia enthusiasts craft adversarial questions.
* AdVQA – Modalities: text, image. Examples: 41,807 COCO images (val/test only), 46,807 questions, 468,070 human-written answers. Focus: visual QA. Characteristics: adversarial examples for Visual Question Answering models. Collection: human-and-model-in-the-loop.
* HellaSwag – Modality: text. Examples: 70K. Focus: commonsense reasoning. Characteristics: requires reasoning to predict sentence endings. Collection: synthetic + crowd workers.
* DROP – Modality: text. Examples: 55K-question benchmark. Focus: reading comprehension. Characteristics: complex reasoning and multi-step comprehension over multiple passages. Collection: adversarially created via crowdsourcing.
* SWAG – Modality: text. Examples: 113K. Focus: commonsense reasoning. Characteristics: selecting plausible continuations for given situations and alternatives. Collection: synthetic + crowd workers.
* AmbigQA – Modality: text. Examples: 14,042. Focus: ambiguity handling. Characteristics: questions with ambiguous phrasing; evaluates models' ability to handle ambiguity. Collection: crowdsourcing.
* GSM8K – Modality: text. Examples: 7.5K training and 1K test problems. Focus: mathematical reasoning. Characteristics: multi-step mathematical reasoning. Collection: created by human problem writers.
* SVAMP – Modality: text. Examples: 3,138 train, 1,000 test. Focus: mathematical reasoning. Characteristics: created by applying carefully chosen variations to examples sampled from existing datasets.
* AddAny – Modality: text. Examples: 1,000. Focus: ungrammatical adversarial tests. Characteristics: misleading texts auto-generated from question words and common words. Collection: automated generation.
* AddAnyExtend – Modality: text. Examples: 2,600. Focus: extended-vocabulary adversarial tests. Characteristics: like AddAny but with a broader vocabulary including high-frequency and passage words. Collection: automated generation.
* AddAnsCtx – Modality: text. Examples: 10,000. Focus: context-based adversarial tests. Characteristics: misleading texts formed from answer sentences with answer tokens removed. Collection: context manipulation.
* AddNegAns – Modality: text. Examples: 5,000. Focus: negative-expression adversarial tests. Characteristics: misleading texts created by negating answer sentences. Collection: text transformation.
* AddSentDiverse – Modality: text. Examples: 109.4K. Focus: enriched SQuAD training data with adversarial examples. Characteristics: expansion of AddSent with diverse adversarial instances. Collection: incorporation into training data.
* Qanta – Modality: text. Examples: ~1,000. Focus: human and computer challenge. Characteristics: adversarial writing process with human-in-the-loop generation. Collection: adversarial data creation.

§ EVALUATING ADVERSARIAL ATTACKS ON QUESTION ANSWERING SYSTEMS

It is vital to evaluate the quality of adversarial questions. The various assessment criteria have different objectives, advantages, and limitations; commonly used criteria are listed below.

Perplexity: Perplexity is often used in language modeling to evaluate how well a probability model predicts a text sample <cit.>. Lower perplexity indicates better model performance. However, perplexity alone may not be sufficient for assessing adversarial questions, because it focuses on the probability of the generated text and does not necessarily capture semantic or contextual correctness.
It can be calculated as:

Perplexity(W) = 2^{-\frac{1}{N}\sum_{i=1}^{N} \log_2 P(w_i \mid w_{i-1}, \ldots, w_{i-N+1})}

where W is a sequence of words, N is the context window size, and w_i is the i-th word in the sequence.

Accuracy: Accuracy is a standard evaluation measure for many problems. It is the ratio of correctly classified samples to the total number of samples. In the context of adversarial questions, accuracy can be used to evaluate whether a generated question triggers a particular behavior or response from a model. However, accuracy alone may not capture the variety and quality of adversarial questions.

Accuracy = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}}

BLEU (Bilingual Evaluation Understudy): BLEU is a machine translation metric <cit.> that measures the n-gram overlap between generated text and reference texts. It offers a rough similarity measure, but may not fully capture question quality or semantic consistency. It can be computed as

BLEU = BP \cdot \exp\left( \sum_{n=1}^{N} w_n \log \text{precision}_n \right)

where BP is the brevity penalty term, N is the maximum n-gram order considered, w_n are the weights for the different n-gram orders, and precision_n is the precision for n-grams.

ROUGE (Recall-Oriented Understudy for Gisting Evaluation): ROUGE considers the overlap of n-grams, word sequences, and subsequences between generated and reference texts <cit.>. Like BLEU, ROUGE focuses on surface overlap, which may not reflect deeper semantics. It can be computed as

ROUGE = \frac{\text{Number of overlapping units in reference and candidate}}{\text{Number of units in reference}}

where "units" can be words, n-grams, or other text elements.

Adversarial Success Rate (ASUCR, also abbreviated ASR): This metric measures the percentage of adversarial texts that successfully manipulate the model <cit.>, i.e., the percentage of adversarial examples that cause the model to generate incorrect or undesirable outputs. In the context of QA systems it can be computed as

ASR = \frac{\text{Number of Adversarial Examples Producing Incorrect Answers}}{\text{Total Number of Adversarial Examples}} \times 100\%

Attack Effectiveness to Noise Ratio (ENR): The ENR is computed by dividing the ASUCR by the average per-pixel noise introduced to alter the image <cit.>:

ENR = \frac{ASUCR}{N}

where N is the average pixel-wise squared difference between the noise-distorted image I_0 and the original image I, computed across all pixels and channels. The metric is designed to reward adversarial samples with lower noise levels over those with higher ones.

Perturbation Rate: The perturbation rate measures the proportion of modified tokens (words, characters) in the adversarial example relative to the original example:

Perturbation Rate = \frac{\text{Number of Modified Tokens}}{\text{Total Number of Tokens in Original Text}}

A higher perturbation rate indicates a more aggressive modification of the original input, potentially leading to more successful adversarial attacks; however, it is essential to balance the amount of perturbation against preserving the question's semantic meaning.
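As these definitions vary slightly across papers, the following Python sketch fixes one reasonable operationalization (e.g., it divides the success rate by the originally correct examples, which not all studies do):

    def attack_success_rate(orig_preds, adv_preds, gold):
        """Percentage of originally correct answers flipped by the attack."""
        flipped = sum(1 for o, a, g in zip(orig_preds, adv_preds, gold)
                      if o == g and a != g)
        originally_correct = sum(1 for o, g in zip(orig_preds, gold) if o == g)
        return 100.0 * flipped / max(originally_correct, 1)

    def perturbation_rate(original, adversarial):
        """Fraction of whitespace tokens changed between the two texts."""
        orig_tokens, adv_tokens = original.split(), adversarial.split()
        changed = sum(1 for o, a in zip(orig_tokens, adv_tokens) if o != a)
        changed += abs(len(orig_tokens) - len(adv_tokens))
        return changed / max(len(orig_tokens), 1)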
Attack Distance (L_p): Attack distance, often denoted L_p, measures the magnitude of the change required to transform an original input into an adversarial example, and is commonly used to quantify how "far" the adversarial example lies from the original input. The L_p distance between an original input vector x and an adversarial input vector x' is

L_p(x, x') = \| x - x' \|_p

where \| \cdot \|_p denotes the L_p norm; the value of p determines the specific norm used for measuring the distance. Lower L_p distances indicate that the attack is more subtle and potentially harder for humans to detect.

Perceptual Adversarial Similarity Score (PASS): PASS is a metric for assessing the visual similarity between an original image and its adversarially perturbed version <cit.>. It quantifies how much an image has been altered by an adversarial attack while remaining perceptually similar to the human visual system.

The Structural Similarity Index (SSIM): SSIM quantifies the perceptual difference between an original, clean image and an image perturbed by an adversarial attack <cit.>, evaluating how much the attack has degraded the image in terms of perceptual quality. It combines the structural information, luminance, and contrast of the images. The SSIM between two images X and Y is given by

SSIM(X, Y) = \frac{2\mu_X\mu_Y + c_1}{\mu_X^2 + \mu_Y^2 + c_1} \cdot \frac{2\sigma_{XY} + c_2}{\sigma_X^2 + \sigma_Y^2 + c_2} \cdot \frac{\sigma_{XY} + c_3}{\sigma_X \sigma_Y + c_3}

where μ_X and μ_Y are the average pixel intensities of images X and Y, σ_X and σ_Y are the standard deviations of the pixel intensities, σ_XY is the covariance between the pixel intensities of X and Y, and c_1, c_2, and c_3 are constants that stabilize the division when the denominators are very small.

The Peak Signal-to-Noise Ratio (PSNR): PSNR measures the quality of the adversarial example in terms of visual similarity to the original image, quantifying the extent of distortion or noise introduced by the attack:

PSNR(X, Y) = 10 \cdot \log_{10}\left(\frac{L^2}{MSE(X, Y)}\right)

where L is the maximum possible pixel value (e.g., 255 for 8-bit images) and MSE(X, Y) is the mean squared error between images X and Y.

Human evaluation: Human evaluation provides a comprehensive assessment of text quality. It involves rating adversarial questions along various dimensions, including fluency, relevance, and contextuality.
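The image-level metrics are straightforward to compute; a small NumPy sketch (with L as in the definition above) is given below. SSIM is more involved and is typically taken from a library such as scikit-image rather than re-implemented.

    import numpy as np

    def lp_distance(x, x_adv, p=2):
        """L_p distance between the original and adversarial inputs."""
        return np.linalg.norm((x - x_adv).ravel(), ord=p)

    def psnr(x, x_adv, max_val=255.0):
        """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
        mse = np.mean((x.astype(np.float64) - x_adv.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(max_val ** 2 / mse)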
Table <ref> compares evaluation metrics commonly used in the context of adversarial attacks. The metrics are assessed by their supported modalities, advantages, disadvantages, sensitivity to attacks, interpretability, and robustness to noise. Sensitivity to attacks examines how responsive each metric is to adversarial attacks; interpretability reflects how easily the metric's results can be understood and analyzed; robustness to noise assesses how well the metric performs when the data contain noise. Note that these assessments are generalized and may differ across specific use cases. According to the table, the Adversarial Success Rate is highly sensitive to attacks, highly interpretable, and highly robust to noise, while ROUGE is limited to measuring specific linguistic aspects.

Comparison of Evaluation Metrics for Adversarial Attacks

* Perplexity – Modalities: text-based. Advantage: measures model uncertainty. Disadvantage: sensitive to text length. Sensitivity to attacks: high; Interpretability: high; Robustness to noise: high.
* Accuracy – Modalities: various. Advantage: simple interpretation. Disadvantage: may not capture fine-grained differences. Sensitivity: medium; Interpretability: high; Robustness: high.
* ROUGE – Modalities: text-based. Advantage: focuses on recall. Disadvantage: limited to measuring specific linguistic aspects. Sensitivity: medium; Interpretability: medium; Robustness: medium.
* BLEU – Modalities: text-based. Advantage: measures n-gram overlap. Disadvantage: ignores word order and context. Sensitivity: medium; Interpretability: medium; Robustness: medium.
* Adversarial Success Rate – Modalities: various. Advantage: indicates attack effectiveness. Disadvantage: may not account for perceptual changes. Sensitivity: high; Interpretability: high; Robustness: high.
* Perturbation Rate – Modalities: image-based. Advantage: quantifies perturbation amount. Disadvantage: ignores perceptual differences. Sensitivity: medium; Interpretability: medium; Robustness: high.
* Attack Distance – Modalities: image-based. Advantage: measures image distortion. Disadvantage: lacks human interpretability. Sensitivity: high; Interpretability: medium; Robustness: medium.
* Perceptual Adversarial Similarity Score (PASS) – Modalities: image-based. Advantage: incorporates human perception. Disadvantage: may require subjective assessment. Sensitivity: high; Interpretability: medium; Robustness: medium.
* Structural Similarity Index (SSIM) – Modalities: image-based. Advantage: perceptually relevant. Disadvantage: sensitive to compression artifacts. Sensitivity: medium; Interpretability: medium; Robustness: medium.
* Peak Signal-to-Noise Ratio (PSNR) – Modalities: image-based. Advantage: simple interpretation. Disadvantage: ignores perceptual differences. Sensitivity: medium; Interpretability: medium; Robustness: medium.

§ DISCUSSION OF POTENTIAL FUTURE RESEARCH DIRECTIONS

This section highlights some of the most promising directions researchers could pursue to improve the robustness, reliability, and privacy of QA systems in the face of adversarial challenges.

Robustness and Defense Mechanisms: Future research may focus on building more robust QA systems that resist adversarial attacks, including defense mechanisms that detect and mitigate the impact of adversarial examples. One strategy could be to train question generation models with adversarial examples during the training phase, allowing the model to learn from these challenging inputs and improve its robustness. Further work may explore evaluation methods that measure the model's confidence, enabling the model to abstain or defer when uncertainty is high and thereby avoid false or deceptive answers.

Adversarial training: Adversarial training offers a promising approach to maintaining the robustness of QA systems against malicious attacks. Future research may examine adversarial training techniques that go beyond data augmentation. By training models on a combination of clean and carefully crafted adversarial examples, the system is exposed to a variety of challenging inputs and learns from them, becoming more capable of handling adversarial attacks and more resistant to manipulated or corrupted input data. Exploring training approaches such as curriculum learning can facilitate gradual exposure to increasingly complex adversarial examples during training: by systematically increasing the difficulty of adversarial examples, the model learns to adapt to a wide range of them, improving its overall robustness and ability to generalize.
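As a purely schematic illustration of the curriculum idea, adversarial examples could be ranked by an assumed difficulty proxy (perturbation rate, attack strength, etc.) and admitted gradually:

    def curriculum_pools(adv_examples, difficulty, epochs=5):
        """Yield per-epoch training pools of increasing adversarial difficulty.

        'difficulty' is a list of scores parallel to 'adv_examples'; any
        proxy such as the perturbation rate would do.
        """
        ranked = [x for _, x in sorted(zip(difficulty, adv_examples),
                                       key=lambda t: t[0])]
        for epoch in range(1, epochs + 1):
            cutoff = int(len(ranked) * epoch / epochs)  # admit harder examples over time
            yield ranked[:cutoff]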
Multimodal Adversarial Attacks: Multimodal adversarial attacks aim to manipulate the textual and visual components of QA systems that utilize multimodal inputs, such as text, images, and videos. These attacks may modify the content of the text or the image to mislead the QA system into generating incorrect or biased responses. Adversarial examples in the multimodal setting may require more complex manipulations, since the attack needs to affect multiple modalities while maintaining contextual meaning.

Human Defense Mechanisms in the Loop: Leveraging human evaluators as an integral component of defense mechanisms can enhance the robustness of QA systems. Future research may investigate ways of integrating human evaluators into the loop to analyze and validate ambiguous or adversarial queries. By incorporating human reasoning, QA systems can better identify and react to adversarial examples.

Privacy Protection Defenses: Adversarial attacks can compromise user privacy by exploiting vulnerabilities in QA systems. Future research may examine such adversarial threats alongside defense mechanisms that reduce privacy risks. Techniques such as differential privacy <cit.> or secure computation <cit.> can be instrumental in developing QA systems that preserve the privacy of sensitive information.

Acknowledgments

G. Yigit is supported by the TUBITAK - BIDEB 2211/A national fellowship program for Ph.D. studies.

*Declarations

The authors declare that they have no conflict of interest.

*Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

[Szegedy et al.2013]szegedy2013intriguing Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)

[Kurakin et al.2018]kurakin2018adversarial Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: Artificial Intelligence Safety and Security, pp. 99–112. Chapman and Hall/CRC (2018)

[Eykholt et al.2018]eykholt2018robust Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625–1634 (2018)

[Carlini et al.2016]carlini2016hidden Carlini, N., Mishra, P., Vaidya, T., Zhang, Y., Sherr, M., Shields, C., Wagner, D., Zhou, W.: Hidden voice commands. In: 25th USENIX Security Symposium (USENIX Security 16), pp. 513–530 (2016)

[Zhang et al.2017]zhang2017dolphinattack Zhang, G., Yan, C., Ji, X., Zhang, T., Zhang, T., Xu, W.: Dolphinattack: Inaudible voice commands. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 103–117 (2017)

[Le et al.2014]le2014automatic Le, N.-T., Kojiri, T., Pinkwart, N.: Automatic question generation for educational applications–the state of art. In: Advanced Computational Methods for Knowledge Engineering: Proceedings of the 2nd International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2014), pp. 325–338 (2014).
http://arxiv.org/abs/2312.16156v1
{ "authors": [ "Gulsum Yigit", "Mehmet Fatih Amasyali" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231226183029", "title": "From Text to Multimodal: A Comprehensive Survey of Adversarial Example Generation in Question Answering Systems" }
Invariable generation of finite simple groups and rational homology of coset posets

Robert M. Guralnick, Department of Mathematics, University of Southern California, 3620 S. Vermont Ave, Los Angeles, CA 90089-2532 USA, [email protected]

John Shareshian, Department of Mathematics, Washington University, One Brookings Drive, St Louis, MO 63124 USA, [email protected]

Russ Woodroofe, Univerza na Primorskem, Glagoljaška 8, 6000 Koper, Slovenia, [email protected], <https://osebje.famnit.upr.si/~russ.woodroofe/>

We show that every finite simple group is generated invariably by a Sylow subgroup and a cyclic group. It follows that the order complex of the coset poset of an arbitrary finite group has nontrivial reduced rational homology.

Work of the first author was partially supported by NSF Grant DMS-1901595 and Simons Foundation Fellowship 609771. Work of the second author was supported in part by NSF Grant DMS 1518389. Work of the third author is supported in part by the Slovenian Research Agency research program P1-0285 and research projects J1-9108, N1-0160, J1-2451, J1-3003, and J1-50000.

§ INTRODUCTION Given a group G and subsets S,T of G, we say that S and T generate G invariably if ⟨ S^g,T^h ⟩=G for all g,h ∈ G. Often, we identify a set of size one with the element that it contains. We are interested in invariable generation of finite simple groups. Herein we prove the following result, using the Classification. Every finite simple group is generated invariably by a Sylow subgroup and a cyclic subgroup. We apply Theorem <ref> to give a streamlined proof of a stronger result than one proved by two of the present authors in <cit.>. Given a finite poset 𝒫, the order complex Δ𝒫 is the abstract simplicial complex whose d-dimensional faces are the chains of length d (cardinality d+1) from 𝒫. Given a finite group G, we write 𝒞(G) for the set of all cosets of all proper subgroups of G (including the trivial subgroup if |G|>1), ordered by inclusion. The main result in <cit.> is that if G is a finite group, then Δ𝒞(G) has nontrivial reduced homology in characteristic 2, and is therefore not contractible. This settles a question raised by Brown in <cit.>. In Section <ref>, we use Theorem <ref> and Smith Theory to prove the following result. If G is a finite group, then Δ𝒞(G) has nontrivial reduced rational homology.
The above-mentioned characteristic two result follows from Theorem <ref> and the Universal Coefficient Theorem. Given the results in <cit.> and <cit.>, it is natural to ask whether the reduced Euler characteristic χ̃(Δ𝒞(G)) is nonzero for all finite groups G. This question remains open. In <cit.>, Damian and Lucchini showed that "most" finite simple groups are generated invariably by a Sylow 2-subgroup and a cyclic subgroup of prime order. Since there is no constraint on the prime dividing the order of the Sylow subgroup in Theorem <ref>, we may take an easier route than that taken in <cit.>. It is clear that Theorem <ref> holds for groups of prime order. In Sections <ref> and <ref>, we confirm results stronger than Theorem <ref> for alternating groups and sporadic groups. For each integer n ≥ 5, the alternating group A_n is generated invariably by two elements, at least one of which has prime order. All sporadic simple groups except the Mathieu groups M_11, M_12, and M_24 are generated invariably by two elements of prime order. Moreover, * M_11 is generated invariably by an element of order eleven and an element of order eight; * M_24 is generated invariably by an element of order 23 and an element of order four; and * M_12 is generated invariably by an element of order eleven and an element of order ten, but not by two elements of prime power order. For simple groups of Lie type, we have a result slightly stronger than Theorem <ref>. If K is a simple group of Lie type in characteristic p, and not isomorphic with one of L_6(2), U_4(2), Sp_6(2), or Ω_8^+(2), then K is generated invariably by a Sylow p-subgroup and an element of prime order. We will show in Lemma <ref> below that L_6(2) is generated invariably by an element of order seven and an element of order 31. Consulting <cit.>, we see that Sp_6(2) is generated invariably by a Sylow 3-subgroup and an element of order seven and that Ω_8^+(2) is generated invariably by a Sylow 3-subgroup and an element of order five. The group U_4(2) is isomorphic with PSp_4(3) and, as we shall prove, is generated invariably by a Sylow 3-subgroup and an element of order five. Hence, Proposition <ref> suffices to complete the proof of Theorem <ref>. We prove Proposition <ref> in Section <ref>. The key point is that in almost all cases there is a Zsigmondy prime for some q^e-1 that divides |K| but does not divide the order of any parabolic subgroup. In subsequent work we will show that all but a negligible portion of the finite simple groups of Lie type are generated invariably by an element of prime order and an element of prime power order, and "most" of these are generated invariably by two elements of prime order. After proving our results in Sections <ref>, <ref>, <ref>, and <ref>, we make some final comments in Section <ref>. § ALTERNATING GROUPS We prove first a more specific version of Proposition <ref> that holds for all n>7. Bertrand's Postulate, which was proved by Chebyshev in <cit.>, says that the prime p appearing in Theorem <ref> exists. Assume that n>7. Let p be a prime satisfying n/2<p<n-2 and let y ∈ A_n have order p. If n is odd let x ∈ A_n be an n-cycle. If n is even let x ∈ A_n be the product of two disjoint n/2-cycles. Then x and y generate A_n invariably.
As p>n/2, each ⟨ x ⟩-orbit intersects the support of y nontrivially, hence ⟨ x,y ⟩ is transitive. No imprimitive subgroup contains y, since every prime divisor of the order of such a subgroup is at most some proper divisor of n, and every proper divisor of n is at most n/2 < p. So ⟨ x,y ⟩ is primitive. Finally, a result of Jordan (see <cit.> or <cit.>) shows that no primitive proper subgroup of A_n contains y, since y fixes more than two points. Therefore ⟨ x,y ⟩=A_n. As every conjugate of x has the same cycle type as x and the same goes for y, we can apply the same argument to conclude that ⟨ x^g,y^h ⟩=A_n for all g,h ∈ A_n. Inspection shows that A_5 is generated invariably by a 5-cycle and a 3-cycle, A_6 is generated invariably by a 5-cycle and an element of cycle type (4,2), and A_7 is generated invariably by a 7-cycle and a 5-cycle. (One can use <cit.>.) This completes the proof of Proposition <ref>. § SPORADIC GROUPS We prove Proposition <ref> using facts in <cit.> along with the updates given in <cit.>. For each sporadic group S listed in Table <ref>, we give element orders p,r so that pr divides |S|, but does not divide |M| for M<S. Thus, these sporadic groups are generated invariably by elements of these orders. The orders p,r are prime, except when S=M_11, where we take r=8; it can be seen in <cit.> that M_11 indeed has an element of order eight. It remains to show that M_11 is not generated invariably by two elements of prime order, and to examine M_12, M_24, HS, Co_2, Co_3, and McL. We use facts and notation appearing in <cit.>. The primes dividing |M_11| are 2,3,5,11. Given any such prime, M_11 contains one conjugacy class of elements of the given order. As L_2(11) embeds in M_11 and has order divisible by all of 2,3,5,11, we see that our claim about M_11 holds. The conjugacy classes of nontrivial elements of prime power order in M_12 are 2A, 2B, 3A, 3B, 4A, 4B, 5A, 8A, 8B, 11A and 11B. Each element of order four is the square of an element of order eight. So, classes 4A and 4B can be ignored. Elements in class 2B are squares of elements of order four. So, we can also ignore this class. Elements in class 11A are powers of elements of class 11B, and vice versa. So, we may consider these two classes as one, which we call class 11AB. Classes 8A and 8B fuse in Aut(M_12). As no two classes consisting of elements of odd order fuse in Aut(M_12), we may consider these two classes as one, which we call class 8AB. By Sylow's Theorem, we may ignore the pairs of classes (2A,8AB) and (3A,3B). A point stabilizer in the natural action of M_12 on twelve points is isomorphic to M_11, and thus contains elements of orders 2,3,5,8,11. The group L_2(11) embeds in M_12 through its action on the set of twelve 1-spaces from 𝔽_11^2. The image of this embedding contains elements of orders 2,3,5, and 11; those of orders two and three act without fixed points. An element of M_12 having order three and fixing a point fixes three points and thus stabilizes a 2-set in the natural action, as does every element of order two. The stabilizer of a 3-set in the natural action is isomorphic with AGL_2(3) and thus contains an element of order eight, along with elements of classes 3A and 3B. It is now straightforward to confirm that M_12 is not generated invariably by two elements of prime power order. On the other hand, a maximal subgroup of M_12 having order divisible by eleven is isomorphic to one of M_11 or L_2(11), and therefore contains no element of order ten. At the same time, M_12 contains elements of order ten. Any such element generates M_12 invariably with any element of order eleven. If g ∈
M_24 has order 23 and H<M_24 is a maximal subgroup containing g, then H belongs to either the unique conjugacy class of subgroups isomorphic to M_23 or the unique conjugacy class of maximal subgroups isomorphic to L_2(23). Each of M_23 and L_2(23) contains a unique conjugacy class of elements of order four, while M_24 contains three classes of elements of order four. So, some element of order four and any element of order 23 generate M_24 invariably. (One can show that such an element of order four lies in class 4A.) Using arguments similar to those employed when examining M_12, one can show that if C_1,C_2 are conjugacy classes in M_24 consisting of elements of prime order, then there exists (c_1,c_2) ∈ C_1 × C_2 such that either ⟨ c_1,c_2 ⟩ stabilizes a point, a 2-set or a 3-set in the natural action of M_24 on twenty-four points, or ⟨ c_1,c_2 ⟩ is contained in a maximal subgroup isomorphic with L_2(23). Assume now that the group S is one of HS or McL. Then |S| is divisible by eleven. If H is a maximal subgroup of S with |H| divisible by eleven, then H is isomorphic with one of the Mathieu groups M_11 or M_22. It follows that H has a unique conjugacy class of elements of order five, and every element of this class normalizes a subgroup of order eleven. An element of order eleven in S generates its own centralizer. So, if K<S has order eleven, then |N_S(K)| is not divisible by 25. As 121 does not divide |S|, we know that all subgroups of order eleven in S are conjugate. It follows now that there is a unique conjugacy class C of elements of order five in S such that an element of C and any element of order eleven do not generate S invariably. Indeed, C consists of those elements of order five in S that normalize a subgroup of order eleven. As each of HS and McL has two conjugacy classes of elements of order five, our claim about these groups holds. Finally, assume that S is one of Co_2 or Co_3. Every maximal subgroup of S containing an element of order 23 lies in one conjugacy class of groups isomorphic to M_23. Now M_23 contains a unique conjugacy class of elements of order two, while Co_2 contains three such classes and Co_3 contains two such classes. Our claim about these groups follows. § GROUPS OF LIE TYPE We call a finite simple group K a group of Lie type in characteristic p if there exist a simple algebraic group 𝐆 over the algebraically closed field 𝔽̄_p, and an endomorphism σ of 𝐆, such that K is generated by the elements of order p in the fixed-point group 𝐆^σ. (Almost always K=𝐆^σ.) A parabolic subgroup of K is any subgroup containing the normalizer of a Sylow p-subgroup of K. A result of Tits says that every maximal subgroup of K containing a Sylow p-subgroup of K is parabolic. (See <cit.>.) We seek a prime r dividing |K| but not dividing the order of any parabolic subgroup of K, as in this case K is generated invariably by a Sylow p-subgroup and an element of order r. §.§ An example for the uninitiated - GL_n(q) and PSL_n(q) The reader unfamiliar with finite groups of Lie type might benefit from considering the groups GL_n(q), in which parabolic subgroups are stabilizers of nontrivial proper subspaces of the natural vector space. If one identifies 𝔽_q^n with the field 𝔽_{q^n} and considers the (linear) action of a multiplicative generator α of 𝔽_{q^n}^∗, one obtains a cyclic subgroup C ≤ GL_n(q) having order q^n-1. On the other hand, the stabilizer P_k of a k-dimensional subspace of 𝔽_q^n satisfies |P_k| = q^{n(n-1)/2} ∏_{j=1}^{k}(q^j-1) ∏_{j=1}^{n-k}(q^j-1).
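As a quick consistency check on the exponent of q in this formula (an elementary count included here for the reader; it is not part of the source argument), the contributions of the two general linear factors and the unipotent radical of P_k add up as

\[
\binom{k}{2} + \binom{n-k}{2} + k(n-k) = \frac{n^2-n}{2} = \binom{n}{2} = \frac{n(n-1)}{2},
\]

which is exactly the power of q dividing |GL_n(q)| = q^{n(n-1)/2} ∏_{j=1}^{n}(q^j-1). In particular, P_k contains a full Sylow p-subgroup of GL_n(q), as a parabolic subgroup must.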
Hence, C acts irreducibly on 𝔽_q^n, as does any subgroup Q ≤ C whose order is a prime dividing q^n-1 but not dividing q^j-1 for j<n. (Such primes are the Zsigmondy primes discussed in Section <ref> below.) It turns out that Q ≤ SL_n(q) and the image of Q in K=PSL_n(q) will generate K invariably along with any Sylow p-subgroup, where p divides q. The subgroup C is often called a Coxeter torus, and is equal to T^σ for some maximal torus T ≤ 𝐊 := GL_n(𝔽̄_p), where σ acts on 𝐊 by raising every matrix entry to its q^th power. In the general case where K = 𝐊^σ, almost always there is some maximal torus T ≤ 𝐊 such that T^σ has order divisible by some prime r that does not divide the order of any parabolic subgroup of K. It turns out that in this general case we can take r to be a well-chosen Zsigmondy prime. §.§ Zsigmondy's Theorem We recall now a version of a theorem of Zsigmondy that suits our purposes. Given positive integers q and e, a Zsigmondy prime for the pair (q,e) is a prime r dividing q^e-1 but not dividing q^f-1 if f<e is a positive integer. A Zsigmondy prime for (q,e) exists unless one of * e=6 and q=2, or * e=2 and q=2^k-1 for some integer k holds. §.§ An example for the initiated - E_8(q) We use notation and terminology from <cit.>. Given a power q of a prime p, the simple group K=E_8(q) is untwisted of universal type. It follows from <cit.> that a parabolic subgroup of K is a semidirect product P=UL, where U is a p-group and L is a product HM. Here M ⊴ HM is isomorphic to one of D_7(q), A_1(q) × A_6(q), A_4(q) × A_2(q) × A_1(q), A_4(q) × A_3(q), D_5(q) × A_2(q), E_6(q) × A_1(q), or E_7(q), all components in these products being of universal type. As K is untwisted, the Cartan subgroup H is isomorphic with (ℤ_{q-1})^8, by <cit.>. Inspection shows that a Zsigmondy prime for (q,30) divides |K| but does not divide |P|. The reader familiar with the theory of finite groups of Lie type will have observed that a similar argument shows that whenever K is an untwisted finite simple group of Lie type, one can take e to be the highest exponent of the Weyl group. This is done in Table <ref>. The twisted case is more complicated, as one sees upon considering ^2A_n(q) with n odd. §.§ The general case Using the notation for finite groups of Lie type found in <cit.>, we list in Table <ref> each family {^dΣ^ϵ(q)} of such groups. For each such family, we give an exponent e such that a Zsigmondy prime for (q,e) divides the order of K=^dΣ^ϵ(q) but does not divide the order of any parabolic subgroup of K. The order of K is given in <cit.>. In the last column of our table one finds for each family a reference from which the orders of parabolic subgroups can be determined. We are left with the cases where there is no Zsigmondy prime for (q,e). By Theorem <ref> and inspection, this occurs only if K=L_2(p) for some Mersenne prime p, or the type of K is one of G_2(2), A_5^+(2), A_2^-(2), A_3^-(2), B_3(2), C_3(2), or D_4^+(2). (Here we use Catalan's Conjecture, which says that if a and a+1 are both perfect powers then a=8, and which was proved by Mihăilescu in <cit.>.) The only parabolic subgroups of K=L_2(p) are the Borel subgroups, isomorphic with the semidirect product ℤ_p:ℤ_{(p-1)/2}. If p is a Mersenne prime then (p-1)/2 is odd, hence a nontrivial involution in K is contained in no parabolic subgroup. The simple group of type G_2(2) is generated invariably by a Sylow 2-subgroup and an element of order seven, as can be seen in <cit.> or <cit.>.
We remark that this group is isomorphic with that of type A_2^-(3) and has been handled already. There is no simple group of type A_2^-(2), by an order argument. The remaining groups are the classical groups L_6(2), U_4(2), Ω_7(2), Sp_6(2), and Ω_8^+(2). We observe that L_6(2), U_4(2), Sp_6(2) and Ω_8^+(2) are the exceptions listed in Proposition <ref>. Moreover, Ω_7(2) ≅ Sp_6(2), as noted in <cit.>. The next lemma, along with the discussion immediately following Proposition <ref>, completes our proof of Theorem <ref>. The group L_6(2) is generated invariably by an element of order 31 and an element of order seven. A complete list of maximal subgroups of SL_6(q) appears in <cit.>. Consulting this list, and observing that L_6(2) ≅ SL_6(2), we see that the maximal subgroups of L_6(2) having order divisible by 31 are the stabilizers of 1-dimensional subspaces and the stabilizers of 5-dimensional subspaces in the natural action on 𝔽_2^6. Since seven is a Zsigmondy prime for (2,3), the group ℤ_7 has an irreducible representation of degree three over 𝔽_2. Therefore, there is an element of order seven in L_6(2) that stabilizes some 3-dimensional subspaces but no other subspaces of 𝔽_2^6. Such an element must generate L_6(2) invariably with an element of order 31. § PROOF OF THEOREM <REF> Our proof of Theorem <ref> proceeds along the same lines as that of the analogous characteristic two result given in <cit.>. Much of what we write here will be less detailed than what appears in <cit.>, which the interested reader may consult for extended arguments. §.§ Smith Theory We review here results from Smith (or Smith-Oliver) Theory that are stronger than what we will need. This theory describes constraints on the fixed point sets of certain groups acting on simplicial complexes. Claim (1) from the next theorem is Theorem 2 of <cit.>, and Claim (2) is Lemma 1 of <cit.>. Claim (3) follows from the fact that under the given conditions every face in Δ∖Δ^G lies in a G-orbit whose size is divisible by p. A good description of the key ideas behind the theorem appears in <cit.>. Assume that the group G acts on the finite simplicial complex Δ so that the set of fixed faces Δ^H of any subgroup H ≤ G forms a subcomplex of Δ. * If Δ is 𝔽_p-acyclic and G is a p-group, then Δ^G is 𝔽_p-acyclic. * If Δ is ℚ-acyclic and G is a cyclic group, then χ̃(Δ^G)=0. * If G is a p-group, then χ̃(Δ) ≡ χ̃(Δ^G) (mod p). Assume that the group G acts on the finite simplicial complex Δ so that the set of fixed faces Δ^H of any subgroup H ≤ G forms a subcomplex of Δ. (A) Assume further that 1 ⊴ P ⊴ N ⊴ G with P a p-group, N/P cyclic, and G/N of prime power order. If Δ is 𝔽_p-acyclic then Δ^G ≠ ∅. (B) Assume now that 1 ⊴ C ⊴ G with C cyclic and G/C of prime power order. If Δ is ℚ-acyclic then Δ^G ≠ ∅. Corollary <ref> has been used before in combinatorics, although such use seems to be rare. We will use Claim (B) of the corollary below. Claim (A) was used (with N=G) in <cit.>.
Kahn, Saks, and Sturtevant earlier had used Claim (A) in <cit.> to attack Karp's Evasiveness Conjecture. Claim (A) is proved by applying Claims (1)-(3) of Theorem <ref> in order, and observing that an 𝔽_p-acyclic complex is ℚ-acyclic. Claim (B) follows from Claims (2) and (3) of the theorem. Finally, we state a special case of Claim (B) of Corollary <ref> that suffices for our purposes here. We observe that if a group H acts in an order preserving manner on a poset 𝒫, then (Δ𝒫)^H = Δ(𝒫^H), hence Theorem <ref> applies. Assume that the group G acts in an order preserving manner on the finite poset 𝒫. If 𝒫 has trivial reduced rational homology and G is the direct product of a cyclic group and a group of prime power order, then the fixed point set 𝒫^G is nonempty. §.§ Proving Theorem <ref> The coset poset 𝒞(G) consists of all cosets Hx such that H is a proper subgroup of G. Assume that N ⊴ G. Following Brown (see <cit.>) we define the subposet of 𝒞(G), 𝒞(G,N) := {Hx ∈ 𝒞(G) : HN=G}. Brown shows in <cit.> that 𝒞(G) is homotopy equivalent to the simplicial join of 𝒞(G/N) and 𝒞(G,N). Using the Künneth formula for joins (see for example <cit.>) and induction on the length of a chief series for G, we see that Theorem <ref> follows directly from the next result. If N is a minimal normal subgroup of the finite group G, then 𝒞(G,N) has nontrivial reduced rational homology. The remainder of this section is devoted to proving Proposition <ref>. Proposition <ref> is true under the assumption that N is abelian. If N is an abelian minimal normal subgroup of G and HN=G, then H ∩ N=1, as H ∩ N is normalized by both N and H. It follows that |H|=[G:N]. Therefore, 𝒞(G,N) is an antichain of size divisible by |N|. If 𝒞(G,N)=∅, then Δ𝒞(G,N)={∅} and H̃_{-1}(Δ𝒞(G,N);ℚ) ≠ 0. Otherwise, Δ𝒞(G,N) is not connected and H̃_0(Δ𝒞(G,N);ℚ) ≠ 0. We assume from now on that the minimal normal subgroup N of G is nonabelian. There is an action of G × G on 𝒞(G,N), defined by Hx^(g,k) := g^-1Hxk = H^g g^-1xk. As N ⊴ G, 𝒞(G,N) is invariant under the action defined in (<ref>), and the resulting action of G × G on 𝒞(G,N) restricts to an action of C × P on 𝒞(G,N) whenever C,P ≤ G. With Lemma <ref> in mind, we are interested in the case where C is cyclic and P has prime power order. In order to apply the lemma, we must understand fixed points in the given action. These are described in <cit.>, which is not hard to prove. If C × P ≤ G × G acts on 𝒞(G,N) as in (<ref>), then 𝒞(G,N)^C × P consists of those Hx such that ⟨ C,P^x^-1 ⟩ ≤ H. If C × P ≤ N × N acts on 𝒞(G,N) as in (<ref>) and ⟨ C,P^g ⟩=N for all g ∈ G, then 𝒞(G,N)^C × P = ∅. Assume for contradiction that Hx ∈ 𝒞(G,N)^C × P. We know that ⟨ C,P^x^-1 ⟩ ≤ H by Lemma <ref>. Therefore, N ≤ H. This is impossible, as H ≠ G and HN=G. Proposition <ref> will follow from Lemma <ref> and Corollary <ref> if we can show that there exist cyclic C ≤ N and P ≤ N of prime power order satisfying ⟨ C,P^g ⟩=N for all g ∈ G whenever G has a nonabelian minimal normal subgroup N. Given such N, there exist a nonabelian simple group L and subgroups L_i (1 ≤ i ≤ t) of N such that each L_i is isomorphic with L and N is the internal direct product of the L_i. Moreover, the L_i are all the minimal normal subgroups of N and are therefore permuted by the conjugation action of any g ∈ G. (All of this can be found for example in <cit.>.) We write π_i for the projection of N onto L_i and observe that for J ≤ N, J ∩ L_i is a normal subgroup of π_i(J). In what follows, we abuse notation by identifying each L_i with L. By Theorem <ref>, there exist D,Q ≤ L such that D is cyclic, Q is a Sylow p-subgroup of L for some prime p, and D and Q together generate L invariably.
Pick P ≤ N such that π_i(P)=P ∩ L_i=Q for all i ∈ [t]. So, P is a t-fold direct product of copies of Q and therefore a Sylow p-subgroup of N. Pick C ≤ N such that π_i(C)=D for all i ∈ [t] and C ∩ ∏_{i ∈ I} L_i=1 for all proper subsets I of [t]. Then C ≅ D, hence C is cyclic. Assume M ≤ N contains C and a G-conjugate of P, the latter being a Sylow p-subgroup of N and therefore a direct product of Sylow p-subgroups of L. Since Q and D generate L invariably, π_i(M)=L_i for all i ∈ [t]. On the other hand, since P ∩ L_i ≠ 1 and L_i ⊴ N, we see that L_i ∩ M ≠ 1 for all i. Since L is simple and M ∩ L_i ⊴ π_i(M), we conclude that M ∩ L_i=L_i for all i, that is, M=N. Therefore ⟨ C,P^g ⟩=N for all g ∈ G. Applying Corollary <ref> and Lemma <ref>, we conclude that Proposition <ref> holds. Theorem <ref> follows. § FINAL COMMENTS Given Proposition <ref>, one might ask whether every finite simple group is generated invariably by a Sylow subgroup and an element of prime order. One can confirm using <cit.> that M_12 is generated invariably by a Sylow 3-subgroup and an element of order eleven. Combined with Proposition <ref>, this provides a positive answer for sporadic groups. The question remains open for alternating groups. In fact, we do not know whether every alternating group is generated invariably by two Sylow subgroups. Some positive results appear in <cit.> and <cit.>. The strongest result in this direction is due to Teräväinen, who showed in <cit.> that the set of n for which A_n is generated invariably by two elements of prime order has asymptotic density one in the set of positive integers. Several of our claims about sporadic groups and small groups of Lie type that we proved using ad hoc arguments were confirmed using GAP. Our GAP code and its output will be available as arXiv ancillary files.
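As an illustration of the sort of check involved, the following minimal sketch (our own, in Python with SymPy rather than GAP; it verifies only the smallest case from Section 2) confirms by brute force that a 5-cycle and a 3-cycle generate A_5 invariably. Since ⟨x^g, y^h⟩ is conjugate to ⟨x, y^(hg^-1)⟩, it suffices to conjugate the second generator only.

from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(5)
x = Permutation([1, 2, 3, 4, 0])    # the 5-cycle (0 1 2 3 4)
y = Permutation([1, 2, 0], size=5)  # the 3-cycle (0 1 2)

# x and y generate G invariably iff <x, y^h> = G for every h in G.
assert all(
    PermutationGroup([x, h**-1 * y * h]).order() == G.order()
    for h in G.generate()
)
print("A_5 is generated invariably by a 5-cycle and a 3-cycle")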
http://arxiv.org/abs/2312.16319v1
{ "authors": [ "Robert M. Guralnick", "John Shareshian", "Russ Woodroofe" ], "categories": [ "math.GR", "math.AT", "math.CO" ], "primary_category": "math.GR", "published": "20231226195925", "title": "Invariable generation of finite simple groups and rational homology of coset posets" }
WWW: What, When, Where to Compute-in-Memory

Tanvi Sharma, Mustafa Ali, Indranil Chakraborty, and Kaushik Roy, Fellow, IEEE

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Compute-in-memory (CiM) has emerged as a compelling solution to alleviate high data movement costs in von Neumann machines. CiM can perform massively parallel general matrix multiplication (GEMM) operations in memory, the dominant computation in Machine Learning (ML) inference. However, re-purposing memory for compute poses key questions on 1) What type of CiM to use: Given a multitude of analog and digital CiMs, determining their suitability from a systems perspective is needed. 2) When to use CiM: ML inference includes workloads with a variety of memory and compute requirements, making it difficult to identify when CiM is more beneficial than standard processing cores. 3) Where to integrate CiM: Each memory level has different bandwidth and capacity, which affects the data movement and locality benefits of CiM integration. In this paper, we explore answers to these questions regarding CiM integration for ML inference acceleration. We use Timeloop-Accelergy <cit.> for early system-level evaluation of CiM prototypes, including both analog and digital primitives. We integrate CiM into different cache memory levels in an Nvidia A100-like baseline architecture and tailor the dataflow for various ML workloads. Our experiments show CiM architectures improve energy efficiency, achieving up to 0.12x lower energy than the established baseline with INT-8 precision, and up to 4x performance gains with weight interleaving and duplication. The proposed work provides insights into what type of CiM to use, and when and where to optimally integrate it in the cache hierarchy for GEMM acceleration.

Index Terms: compute-in-memory, SRAM, GEMMs, memory hierarchy, machine learning inference, hybrid architectures

§ INTRODUCTION Machine learning (ML) applications have become ubiquitous across various domains such as automotive, health care, finance, and technology. This has led to an increase in demand for high-performance and energy-efficient ML hardware solutions. Matrix-vector multiplications and general matrix-matrix multiplications, known as GEMMs, are at the core of ML workloads such as convolutional networks and transformers <cit.>. Due to the data intensive nature of such computations, they could incur high energy costs, particularly in von Neumann architectures such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Such high energy costs are attributed, among other factors, to the separation of processing units from memory in such architectures, resulting in costly memory accesses and data movements between the processing unit and memory, commonly known as the "memory wall" or "von Neumann bottleneck" <cit.>. To address this, Compute-in-Memory (CiM) paradigms have been proposed to reduce the expensive data movement costs and provide energy-efficient solutions by performing computations in the memory <cit.>. There are numerous ways of integrating CiM across the memory hierarchy: from CMOS on-chip cache memory to DRAM or Flash <cit.>.
In this work, we focus on adding CiM to the on-chip memory subsystem, as it does not require radical technological changes. While integrating CiM in caches has been explored <cit.>, a comprehensive evaluation of the effectiveness of different types of CiM primitives (or designs) at the system level, particularly for ML inference, is yet to be studied. Our work explores the benefits of integrating CiM into different cache levels, Register File (RF) and Shared Memory (SMem), in a Streaming Multiprocessor (SM) of a GPU (Fig. <ref>). GPUs consist of hundreds of SMs which are connected through a large crossbar interconnect to DRAM via L2 memory <cit.>. To efficiently utilize CiM in a memory subsystem, there is a need to determine the optimum type of CiM, when to use it, and where to use it, for ML inference. What type of CiM: CiM can be broadly classified as analog or digital based on its nature of computing <cit.>. Analog CiM performs multiply and accumulate (MAC) operations in the analog/mixed-signal domain inside a memory array. For communications among different CiM blocks, peripheral circuits such as Digital-to-Analog (DAC) and Analog-to-Digital (ADC) converters are required to reduce the impact of analog noise on computation. ADCs usually have high area, latency and energy costs, increasing the overhead of analog CiM. In contrast, digital CiM performs all the computations in the digital domain by performing bit-wise AND/XOR or multiply operations. To calculate the final MAC output, multiple bit-wise operations are performed, which can increase the compute latency for digital CiM. Also, design choices such as the type of memory cell (SRAM-6T/8T), the number of wordlines or bitlines enabled at a time, and the mapping scheme for weights in the memory array have made it increasingly challenging to identify the most effective CiM primitive in a system. When to use CiM: ML models consist of a variety of GEMM shapes and sizes. A GEMM (M × N × K) computation can be thought of as multiplying an input matrix of size M × K with a weight matrix of size K × N to get an output matrix of size M × N <cit.>. Arithmetic intensity, or data reuse, gives an idea of the dependence of GEMM computations on memory, by calculating the ratio of arithmetic operations (Floating Point Operations or FLOPs) to memory accesses (bytes). A roofline representation of performance versus arithmetic intensity for GEMMs is shown in Fig. <ref>. The graph demonstrates that 1) not all GEMMs require the full capabilities of the GPUs, resulting in under-utilization of the SM. CiM has the potential to maintain performance comparable to standard computing paradigms when adopted for GEMM computations. 2) GEMMs have a wide range of compute and memory requirements. Hence, it is not clear when the energy and performance benefits of CiM exceed those of the baseline. Where to integrate CiM: Since GEMMs have regular data access patterns and offer high temporal and spatial locality, the matrices are fetched from main memory to caches in blocks or smaller tiles <cit.>. Typically, GPUs optimize their memory hierarchy to efficiently reuse the tile data and parallelize GEMM operations across hundreds of tensor cores present in sub-cores of SMs. CiM-based hardware designs are also capable of performing parallel matrix multiplications by enabling multiple columns and rows inside a memory array and leveraging parallelism from multiple memory arrays.
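The arithmetic-intensity criterion above can be made concrete with a short helper (our own illustration; the peak-throughput and bandwidth values below are placeholders, not numbers from this paper):

def gemm_arithmetic_intensity(M, N, K, bytes_per_elem=1):
    # FLOPs per byte, counting each matrix as moved once (INT-8: 1 byte/element).
    flops = 2 * M * N * K  # one multiply plus one add per MAC
    bytes_moved = bytes_per_elem * (M * K + K * N + M * N)
    return flops / bytes_moved

def roofline(M, N, K, peak_flops, mem_bw):
    # Attainable performance is the lower of the compute roof and the
    # bandwidth roof scaled by arithmetic intensity.
    ai = gemm_arithmetic_intensity(M, N, K)
    bound = "compute" if ai * mem_bw >= peak_flops else "memory"
    return min(peak_flops, ai * mem_bw), bound

# A square GEMM vs. a skewed, reduction-light one (hypothetical hardware numbers).
print(roofline(1024, 1024, 1024, peak_flops=3e12, mem_bw=150e9))  # compute-bound
print(roofline(1024, 8, 1024, peak_flops=3e12, mem_bw=150e9))     # memory-bound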
However, each memory level differs in terms of bandwidth and storage capacity (Table <ref>), which affects the data reuse opportunities and compute parallelism when re-purposed with CiM features. Hence, it is crucial to determine whether there is a memory level that can exploit the locality well and provide the highest CiM benefits. Our approach: To fully leverage and evaluate the benefits of CiM compared to general-purpose processors, we consider a range of workload specifications, memory levels and CiM characteristics. Subsequently, choosing an optimal dataflow for the given specifications is important to achieve the highest possible performance and energy efficiency. An optimal dataflow improves data reuse by reducing the number of memory accesses through efficient scheduling and allocation of GEMMs on the given hardware resources. While algorithmic data reuse for GEMMs can be calculated as the number of MAC operations divided by the total size of the matrices, the observed data reuse is determined by the dataflow, because it depends on the actual number of memory accesses. The main contributions of the work can be summarized as follows: * An analytical evaluation of SRAM-based analog and digital CiM primitives at RF and SMem levels in an Nvidia A100-like baseline architecture. * Optimizing the performance and energy efficiency gains from CiM by finding the optimal dataflow for a given CiM architecture and GEMM shape. * Detailed answers to the questions of what type of CiM to use, and when and where to use it, for various GEMM shapes from an energy/performance perspective. The rest of the paper is organized as follows: Section <ref> distinguishes our work from other studies done in the past. The next section (<ref>) provides relevant background for this work. Section <ref> describes details of the set of CiM primitives used for the experiments. The following section (<ref>) highlights the key takeaways along with results and discussions, followed by the conclusion in the last section. § RELATED WORKS While there has been work considering in-cache computing in CPUs, there have not been studies re-purposing GPU memory for compute. For example, the Duality Cache <cit.> architecture re-purposes the last-level cache in server-class Xeon processors to accelerate data-parallel applications. They also extend the system stack to develop a CUDA-like Single Instruction Multiple Thread (SIMT) programming model for executing floating point and integer arithmetic operations in cache. MLIMP <cit.> extends the concept of Duality Cache for graph neural networks, by developing a concurrent task scheduler for multi-layer in-memory processing systems. They propose job scheduling and memory allocation algorithms based on the memory type (bit-serial in-memory/in-ReRAM/in-DRAM). On the other hand, this work focuses on analyzing the benefits of re-purposing different levels in the memory hierarchy of a GPU for ML inference. We consider GPUs because of their widespread dominance in accelerating GEMMs, the core computation in inference tasks. In addition, GPUs are programmable accelerators and the same programming model could be potentially re-used for CiM integrated GPUs. Livia <cit.> also looked at modifying cache memory in CPUs to minimize overall data movement for irregular data access applications. It proposes a system architecture that dynamically schedules tasks and data at different locations in the memory hierarchy.
In contrast, we focus on highly regular workloads (GEMMs) and determining if the parallelism offered by CiM primitives can match the locality benefits offered by the cache hierarchy. To-Pim-or-Not <cit.> is the first work to raise the questions of how and when to use processing-in-memory (PIM) for different applications. It focuses on the development of a software framework to determine when and how to effectively offload computations to PIM while analyzing the trade-offs between performance benefits and offload costs. However, the scope of the work is limited to emerging general-purpose DDR memory systems, creating a gap in the understanding of SRAM-based CiM primitives. Our work fills this gap by considering CiM in the cache hierarchy for general-purpose GPUs. Another recent work <cit.> on benchmarking analog vs. digital compute-in-memory develops a quantitative energy model based on fixed analog CiM and digital CiM designs, referred to as templates. However, CiM macros vary significantly in terms of the peripheral circuits, and using a template limits the design options for CiM primitive macros. Moreover, it does not address CiM latency or performance estimation in a system with configurable dataflow options. We leverage the Timeloop model <cit.> methodology to perform an analytical evaluation of a system for different CiM primitives. Timeloop considers a generic architecture template with arithmetic units and a memory hierarchy. § BACKGROUND §.§ Importance of GEMMs in ML Workloads Machine learning workloads consist of a broad range of neural networks, from convolutional and fully connected to transformer and recommendation models. Matrix-vector multiplication and matrix-matrix multiplications are at the core of computation in these neural networks <cit.>. In this work, we refer to such multiplications under one umbrella term, known as general matrix-matrix multiplications or GEMMs (M × N × K). M, N and K are used to represent the dimensions of the matrices (Fig. <ref>), where K is the reduction dimension. Convolutional neural networks (CNNs) can be implemented as GEMMs by transforming the convolution operation of input and weight feature maps to matrix-matrix multiplication using im2col <cit.>. The im2col, or image-to-column, transformation converts the 3D convolution operation to a GEMM (M,N,K) such that K represents the reduction dimension for the MAC operation between inputs and weights, M represents the total number of such reductions or convolutions, and N is decided based on the number of output channels. The initial layers of a CNN generally have larger input feature maps compared to other layers, for larger datasets such as ImageNet. The last layer is a classifier, which is essentially a fully connected (FC) layer. It consists of matrix-vector multiplications, which can be thought of as a special case of GEMM. Similarly, transformer models perform computation of the Query (W_Q), Key (W_K), and Value (W_V) matrices from the input embedding in the initial layers, which can be visualized as same-shaped GEMMs. Additionally, transformer models comprise other GEMMs such as logit (QK^T), attention (QK^T V), and output (W_O) calculations, followed by FC layers. On the other hand, recommendation models incorporate multilayer perceptrons (MLPs) to predict items from a pool of dense features and user preferences <cit.>, basically consisting of FC layers. GEMM shapes, representing a range of shapes and sizes from various ML workloads, are listed in Table <ref>.
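To make the im2col mapping concrete, the following small helper (our own illustration; the example parameters correspond to ResNet-50's first convolution layer, assuming stride 2 and padding 3) computes the GEMM dimensions for a convolution layer:

def conv_to_gemm(batch, in_ch, in_h, in_w, out_ch, kh, kw, stride=1, pad=0):
    # Map a convolution layer to a GEMM (M, N, K) via im2col.
    out_h = (in_h + 2 * pad - kh) // stride + 1
    out_w = (in_w + 2 * pad - kw) // stride + 1
    M = batch * out_h * out_w  # number of output positions (one reduction each)
    N = out_ch                 # one weight-matrix column per output channel
    K = in_ch * kh * kw        # reduction dimension of each MAC
    return M, N, K

# ResNet-50 conv1 on a 224x224 ImageNet image, batch size 1:
print(conv_to_gemm(1, 3, 224, 224, 64, 7, 7, stride=2, pad=3))  # (12544, 64, 147)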
§.§ SRAM based Compute-in-Memory Primitives Given the high cost of memory accesses compared to logic operations <cit.>, many works have been proposed to perform computations in on-chip SRAM <cit.>. These CiM macros can be designed in various ways based on the computing domain – analog or digital. Another key factor is the type of SRAM cells used. These cells vary in their transistor counts, with common examples including 6T <cit.>, 8T <cit.>, and 10T <cit.> cells. Additionally, CiM macros vary in the way input data are stored or applied to CiM compute. For instance, input can be stored in the CiM macro itself or it can be applied to the CiM macro from an external buffer. In digital CiM, the multiply and accumulate operations are performed in the digital domain through bit-serial logic gates. A sequence of such logic operations can then be combined to perform arithmetic operations. Such logic is usually placed in the peripheral circuitry of the CiM macro <cit.>. The degree of compute parallelism in digital CiM macros generally depends on the amount of logic resources added in the macro. However, adding more logic circuits to digital CiM designs leads to significant area overhead <cit.>, resulting in a performance/energy versus area trade-off. On the other hand, analog CiM macros perform MAC operations by applying input bits through wordlines while storing the weights in the CiM macro <cit.>. The output is produced as an analog voltage or current at the bitlines, which needs to be converted to digital through an Analog-to-Digital Converter (ADC) for robust inter-macro communication. Notably, ADCs are the major area/latency/energy bottlenecks in analog CiM macros <cit.>. Prior art tried to amortize the cost of ADCs for better energy-efficiency/performance through narrower output precision or novel ADC circuit designs <cit.>. It is worth mentioning that digital CiM scales with the most advanced technology nodes <cit.>, unlike analog CiM, where ADCs suffer from significant noise at such advanced technology nodes <cit.>. As mentioned before, CiM macros comprise various SRAM cell types. 8T-cells are commonly adopted in CiM macros since they have decoupled read and write ports, leading to minimal read-disturb issues <cit.> and higher noise margin than 6T cells. 8T-based CiMs enable multiple wordlines simultaneously, leading to more parallel MAC operations and energy efficiency. On the other hand, 6T-cells are the de-facto standard for conventional SRAM designs due to their compact area. 6T-based CiMs have been proposed to reduce the area overhead of 8T cells. To avoid the read-disturb issue in 6T-based CiM macros, several circuit techniques have been proposed <cit.>. For example, to perform 6T-based analog CiM, <cit.> added a local computing block to a group of 6T cells that share the same bitline. There are multiple groups in a column, where two cells from different groups do not share the same bitline. Note that only one 6T cell is activated per local computing group during computation to avoid read disturb. In addition to 6T and 8T based CiM, some reported macros adopted other cell types (e.g. 10T <cit.>) that can perform more complex computations (such as in-memory addition and membrane potential updates for spiking neural networks) within the cell while leading to larger area overhead. CiM macros also vary in the way inputs are applied. Input data can be stored in the CiM macro itself prior to computation <cit.>, or streamed into the macro during the CiM operation from an external buffer <cit.>.
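As a rough functional model of this bit-serial digital flow (our own emulation for unsigned operands; real macros organize the AND gates, adder trees, and cycle counts differently), an 8b-8b dot product can be decomposed into bit-plane AND, popcount, and shift-accumulate steps:

def bitserial_dot(inputs, weights, bits=8):
    # Emulate a digital CiM dot product on unsigned integers: AND one input
    # bit-plane with one weight bit-plane, popcount the result (the adder-tree
    # reduction in hardware), then shift-accumulate into the output.
    acc = 0
    for i in range(bits):      # input bit position
        for j in range(bits):  # weight bit position
            partial = sum(((x >> i) & 1) & ((w >> j) & 1)
                          for x, w in zip(inputs, weights))
            acc += partial << (i + j)
    return acc

xs, ws = [3, 5, 7], [2, 4, 6]
assert bitserial_dot(xs, ws) == sum(x * w for x, w in zip(xs, ws))

The bits × bits inner loop is one reason digital CiM latency grows with operand precision, which is reflected in the per-MAC cycle counts of the digital primitives evaluated later.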
Input storing/streaming imposes different mapping/dataflow constraints on the corresponding CiM macro, leading to different optimal data orchestration. Additionally, various input/output precisions have been shown in such works, leading to comparison challenges. To have a fair comparison, we fix the input/output precision to 8-bit integer in this work. It is also worth mentioning that different CiM macros might impose certain dataflows at the macro level due to their unique compute nature <cit.>. §.§ Dataflow Optimization in Cache Hierarchy GEMMs exhibit high spatial and temporal locality owing to their regular data access patterns. To exploit this locality, GPUs implement GEMMs by tiling (or blocking) the output matrix, and performing tile computations in parallel <cit.>. For a given dataflow, the loop factor determines the size of such tiles, and the loop order (the order of M,N,K in the loop representation of the dataflow) decides the reuse of a tile at a given memory level. Algorithmically, arithmetic intensity or data reuse can be calculated as the number of operations divided by the total size of the matrices fetched from memory: Algorithmic data reuse = 2(M·N·K) / (BP·(M·N + N·K + M·K)), assuming each matrix is accessed once from the main memory, where BP is the bit-precision. The number of memory accesses is decided by how the matrices are divided into tiles and the order of fetching the different matrix dimensions, referred to as the dataflow. Hence, the observed data reuse can differ from the algorithmic data reuse. The performance of GEMMs is limited by their algorithmic arithmetic intensity and the hardware resources. Low arithmetic intensity GEMMs are limited by the memory bandwidth, while high arithmetic intensity GEMMs are limited by the peak performance. Libraries such as cuDNN and cuBLAS are used to decide the tile sizes for achieving the highest possible performance for a given GEMM shape. Larger tiles often lead to greater data reuse. This increase in data reuse can result in lower bandwidth requirements and improved efficiency. However, opting for larger tiles could reduce the number of tiles that can run in parallel. This reduction can potentially lead to lower performance. Given that the implementation of GEMMs is optimized on GPUs to get the best performance, it is important to realize the best dataflow for a CiM-integrated architecture as well. There are several studies for exploring the dataflow search space and choosing the optimal dataflow. SCNN <cit.> was one of the first works which introduced dataflow optimization for deep neural networks (DNNs). They proposed an input-stationary dataflow, where the input activation is held stationary, allowing it to be multiplied by all the filter weights necessary for each output channel. Further, Timeloop <cit.> presented a low-cost mapper and model to explore the dataflow search space for DNNs and GEMMs. It models the input problem size as a nested loop, allowing for the assessment of data reuse opportunities and efficient mapping across different architectures and workloads. Maestro <cit.> is another tool that proposes an analytical cost model to assess the cost-benefit tradeoffs in dataflow using its data-centric approach. ZigZag <cit.> also explores the DNN accelerator design space by expanding the search to uneven scheduling opportunities. § CIM ARCHITECTURE CONSTRUCTION To get an estimate of the energy and performance of different CiM primitives, all the evaluations are done using the Timeloop-Accelergy framework <cit.>. We choose the Timeloop/Accelergy infrastructure because 1.
§ CIM ARCHITECTURE CONSTRUCTION To get an estimate of the energy and performance of different CiM primitives, all the evaluations are done using the Timeloop-Accelergy framework <cit.>. We choose the Timeloop/Accelergy infrastructure because 1. it is a fast analytical model that is widely used in research projects for early design estimations, 2. it provides a mapper to choose the optimal dataflow for a given architecture, and 3. it is a flexible tool that has been used to model analog CiM in the past <cit.>. The framework takes architecture, constraint, and mapper configuration files along with energy tables as inputs. For our evaluation, separate architecture template files are created for integrating CiM at the RF and SMem levels (Fig. <ref>). As shown in Fig. <ref>, such an architecture template replaces the arithmetic block with a memory level re-purposed into CiM. Such a CiM-integrated memory level consists of multiple CiM arrays (MeshX, MeshY), depending on the memory size and capacity. Each CiM array is a network of CiM units, each of which can compute one MAC at a time. The number of CiM units is decided by the number of rows (R_p) and columns (C_p) turned on simultaneously in an array. All CiM units can thus perform MAC operations in parallel. Since all columns/rows are generally not turned on simultaneously in a CiM array, this sequential nature of the CiM array is incorporated in the form of a temporal loop factor (n) (refer to Figure <ref>). n decides the size of the storage component associated with each CiM unit to represent the sequential behavior in the dataflow representation. Such a parallel-out, sequential-in template approach (Fig. <ref>) captures both analog and digital CiM types. The detailed list of parameters used in the CiM architecture template is described in Table <ref>. A CiM energy table is provided to Accelergy based on the measured performance numbers from the original CiM silicon prototypes. Since the prototypes differ in terms of supply voltage and technology, the energy numbers are scaled to match 32nm technology with a 1V supply according to established work on scaling <cit.>. The difference in the operating frequency of the CiM primitives is captured through compute cycles in terms of latency, by assuming a 1GHz frequency in the Timeloop architecture template. The constraint configuration is adjusted to get the optimum dataflow for each input specification, as described in the next section.
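As a rough illustration of how the template parameters interact, the sketch below computes the number of parallel CiM units and the cycles needed for one full pass over the weights held in an array, under the 1GHz assumption above. It is a back-of-the-envelope model under our own assumptions; the parameter names simply mirror the Table parameters, and the example numbers are illustrative rather than the measured values.

def cim_template_stats(mesh_x, mesh_y, r_p, c_p, n, compute_cycles):
    units = mesh_x * mesh_y * r_p * c_p      # CiM units active in parallel
    peak_macs_per_cycle = units / compute_cycles
    full_pass_cycles = n * compute_cycles    # temporal factor n serializes the array
    return units, peak_macs_per_cycle, full_pass_cycles

# e.g., an Analog-1-like configuration: 4 banks x 64 columns = 256 units,
# 9-cycle MACs, temporal loop factor of 16 (values for illustration only)
print(cim_template_stats(mesh_x=4, mesh_y=1, r_p=1, c_p=64, n=16, compute_cycles=9))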
§ EVALUATION §.§ Experimental Setup §.§.§ CiM Primitives We have chosen two state-of-the-art SRAM-based primitives each for analog/mixed-signal and digital CiM, as shown in Fig. <ref>. These primitives cover a range of distinct and varying parameters, as listed in Table <ref> and explained below. The Analog-1 <cit.> CiM primitive, illustrated in Fig. <ref>(a), consists of 4 banks, each with 4 blocks of 128x64 SRAM 6T cells. It employs a transpose mapping technique, feeding inputs to multiple columns. This configuration results in 256 (4×64) CiM units, each with 128b (16×8b) of storage. Each unit can perform an 8b-8b MAC operation in 9 cycles, processing 2b inputs and activating 8 rows of weight bits simultaneously. However, due to the limited number of ADCs shared per bank, the temporal loop factor for this primitive is set at 16. The Analog-2 <cit.> primitive, depicted in Fig. <ref>(b), features a re-configurable ADC design with 8 arrays, each array (64×64) storing a different bit of the weight and producing 4 ADC outputs per compute cycle. This design results in 256 (64×4) CiM units, each capable of performing an 8b-8b MAC in 144 cycles, including bit-serial latency and scaling adjustments from 65nm to 32nm. Each CiM unit contains 8×(64÷4) weight bits, which are computed in sequence due to the ADC limitation. This primitive has lower energy per compute but suffers from higher area overhead due to its re-configurable design. The Digital-1 <cit.> primitive, shown in Fig. <ref>(c), employs a fully digital design, feeding inputs into each row and executing a MAC operation at every column using an adder tree. Here, each CiM unit computes one 8b-8b MAC by combining weight bits stored in 8 columns. The adder-tree reduction incurs an area overhead and results in a compute latency of 18 cycles. The Digital-2 <cit.> primitive, in Fig. <ref>(d), shows a design where both inputs and weights are mapped to the same column. This configuration allows each CiM unit, comprising a single column, to perform approximately 10 8b-8b MAC operations. However, each operation requires 233 cycles, attributed to the multiple additions involved in the process. Despite a small area overhead, compute parallelism is limited here due to the allocation of some array bits in a column for output reduction. §.§.§ GEMM shapes We characterized a variety of GEMM shapes from commonly used ML models such as ResNet50 <cit.> with ImageNet <cit.>, BERT-medium <cit.> with a sequence length of 1024, and DLRM <cit.>. We pruned the GEMM shapes based on their unique properties, as listed in Table <ref>, to cover different weight sizes, shapes, and compute natures. Compute-intensive GEMMs lie below the flat roof in the roofline representation, as shown in Fig. <ref>. Their data reuse is higher, implying that the number of computations done per memory access is high. On the other hand, memory-intensive GEMMs lie below the bandwidth-bound roof, dominated by memory accesses rather than computations. The less compute-intensive GEMMs in the table technically fall in the compute-intensive region; however, they have medium data reuse with skewed shapes. §.§.§ Baseline We assume a single-SM baseline architecture, consistent with the specifications of one of the latest GPUs (Nvidia A100), as detailed in Table <ref>. All experiments are conducted under INT-8 precision, a weight-stationary dataflow, and iso-area constraints using the Timeloop/Accelergy framework. INT-8 is chosen for its acceptable accuracy in ML inference tasks <cit.>. Iso-area assumes that the memory-level area remains the same after CiM integration by adjusting the capacity. Since the A100 consists of 108 SMs, we assume approximately 10% of the total HBM bandwidth for the 1-SM architecture.
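Given the baseline above, whether a GEMM falls below the flat (compute) roof or the bandwidth roof follows from a standard roofline check. The sketch below illustrates it with placeholder peak-throughput and bandwidth values, not the exact Table numbers; the shapes are examples.

def roofline_bound(M, N, K, peak_tops, dram_gb_per_s, bytes_per_element=1):
    reuse = (2 * M * N * K) / (bytes_per_element * (M * N + N * K + M * K))
    attainable = min(peak_tops, dram_gb_per_s * reuse / 1000.0)  # in TOPS
    kind = "compute-bound" if attainable >= peak_tops else "memory-bound"
    return attainable, kind

print(roofline_bound(3136, 64, 64, peak_tops=6.0, dram_gb_per_s=150))  # compute-bound
print(roofline_bound(512, 1, 1024, peak_tops=6.0, dram_gb_per_s=150))  # memory-bound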
§.§ Impact of Dataflow In this sub-section, we briefly discuss two GEMM shapes with CiM integrated at RF to highlight how the different primitives leverage data-reuse opportunities. The optimal dataflow is found using the Timeloop mapper by setting the constraint file for each primitive. A sample dataflow at the CiM level is shown in Fig. <ref>, where the highlighted parameters are set in the constraint file based on the CiM type. To maximize performance, we set the mapping constraints such that the maximum number of computations is done in parallel in the CiM units, i.e., using weight interleaving. Other priorities are to maximize input data reuse for the mapped weights and to allow weight duplication when beneficial. For the case when the weight (N × K) matrix is small and M significantly exceeds N and K (Gemm-3136×64×64 in Fig. <ref>(a)), the weights can fit in the CiM-integrated RF memory. Here, each CiM primitive consists of 4096 CiM units. The baseline already achieves the highest data reuse (≈63, see Table <ref>) at DRAM, and it is maintained across CiM primitives. However, CiM can better exploit input data reuse (≈3112) at the RF level by reusing the whole input tile (M×K) stored in SMem. This results in a lower number of memory accesses with CiM, thus reducing the total energy consumption to 0.50x-0.67x of the baseline. In terms of throughput, CiM suffers heavily, achieving only 1% to 22% of the baseline throughput. The high compute cycles of the CiM primitives, compared to 1 cycle in the baseline, directly translate to the final throughput even with the weight-interleaving mapping. Hence, with iso-capacity constraints, it is not possible to achieve the baseline throughput due to limited parallelism. Throughput losses in CiM can be partially offset under iso-area constraints, as demonstrated for Gemm-3136x64x64 in Fig. <ref>(b). These constraints allow having as many CiM units as the area overhead of each primitive permits, expanding beyond 4096 CiM units. In particular, the Digital-1 primitive is able to achieve 77% of the baseline throughput by duplicating weights. Weight duplication improves throughput at minimal energy cost, because writing to SRAM cells takes lower energy than DRAM accesses <cit.>. Note that the number of duplications is limited by the number of CiM units and by the memory capacity of the upper memory level, which broadcasts the inputs to the CiM units. For the case when the weight matrix is too large to fit in the memory (Gemm-512x1024x512 in Fig. <ref>(c)) and M ≈ K, the CiM primitives leverage higher data reuse at DRAM compared to the baseline. This results in a reduction in last-level accesses and higher energy savings (0.36x-0.57x of the baseline). For throughput, having a smaller input matrix (M×K) relaxes the constraint on shared-memory capacity. This implies that the entire M dimension can be stored in SMem for the mapped K dimension, which permits more weight duplication if enough CiM units are available. Digital-1 leverages this opportunity to achieve throughput higher than the baseline. Analog-2 and Digital-2 show the lowest throughput because their performance is limited by their high compute latency. In addition, area overhead and mapping constraints also restrict their throughput. Analog-2 has a high area overhead, which restricts the number of CiM units that can fit in the same area. Digital-2 has an inherent area overhead caused by the constraints of mapping inputs and partial-output bits in the same column, which further restricts the number of CiM units that can operate in parallel. Along the same lines, we ensure that all workloads are mapped optimally and discuss the results for different GEMM shapes, CiM types, and memory levels in the next sub-sections, to get insights on the what, when, and where questions.
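The weight-duplication head-room discussed above can be bounded with simple arithmetic: copies are limited both by how many CiM units the iso-area budget provides and by the capacity of the upper level that broadcasts the inputs. A sketch under our own assumptions, with every quantity illustrative:

def max_weight_duplicates(units_available, units_per_copy,
                          upper_level_bytes, input_tile_bytes):
    by_units = units_available // units_per_copy       # iso-area unit budget
    by_inputs = upper_level_bytes // input_tile_bytes  # input broadcast capacity
    return max(1, min(by_units, by_inputs))

# e.g., 9000 units under iso-area, 4096 units per weight copy,
# 164KB upper level holding 64KB input tiles (illustrative numbers)
print(max_weight_duplicates(9000, 4096, 164 * 1024, 64 * 1024))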
§.§ Results on performance Fig. <ref>(a) and Fig. <ref>(c) compare the performance observed when CiM is integrated at the RF and SMem levels, respectively. What: Comparing different CiM primitives, Analog-1, with the least compute latency (9 cycles), shows performance ranging from 22% to 100% of the baseline throughput. On the other hand, Digital-1 (18-cycle compute latency) can achieve as high as 450% of the baseline throughput and in general achieves close to the baseline throughput, except for certain GEMMs (discussed under when). This implies that the ability to exploit full row and column parallelism is more important for throughput in CiM design than achieving the lowest possible latency. However, latency cannot be ignored completely, as depicted by the lower performance of the Analog-2 and Digital-2 primitives, which have high compute latencies of 144 and 233 cycles, respectively. When: Compute-bound layers with a bigger weight matrix (Layer6, Layer18, Layer46, GemmV, FC1 and FC2) achieve the highest performance with CiM primitives (a minimum of 78% of the baseline when re-purposed at the RF level) compared to other GEMM shapes. A few compute-bound GEMMs, specifically Layer2 and QKTV, which have a smaller value of K, perform sub-optimally across all CiM primitives and achieve only up to 39% and 47% of the baseline throughput, respectively. This stems from the mapping constraints on the K dimension, which hurt Digital-1 in particular. CiM architectures have the weights (N×K) mapped to reduce multiple partial sums across the K dimension, limiting their parallelism for small K, whereas the baseline has no such constraint and hence achieves higher performance for such GEMMs. It is also important to note that the maximum performance from CiM primitives does not go beyond the baseline for memory-bound layers (Layer50, MLP2, MLP3) because of limited data reuse. Where: Considering the highest performance across all CiM primitives, the maximum observed throughput at the RF level (≈400%) is significantly greater than that observed at the SMem level (≈170%). This can be attributed to the smaller area of SMem (164KB) compared to the 4 instances of the RF (256KB) in a single core, which allows for fewer CiM units in the same area, thereby limiting the maximum achievable throughput. An exception to this behavior is the case when the mapper finds a better mapping with higher utilization at SMem instead of RF due to differences in memory width and height constraints. For example, MLP3 performs ≈50% better at SMem with Analog-1 than at RF, in spite of bandwidth throttling when computing at the SMem level. §.§ Results on energy consumption Based on the energy consumption depicted in Fig. <ref>(b) and Fig. <ref>(d), we can make the following observations: What: There is no clear winner among the CiM primitives when it comes to energy efficiency. While Analog-1 and Analog-2 show the best energy benefits, with as low as 0.16x and 0.12x of the baseline energy for the FC2 layer, respectively, they also shoot up to 1.7x and 4x of the baseline energy for the QKTV and QKT layers, respectively. On the other hand, the Digital-1 and Digital-2 primitives always show a reduction in energy consumption, ranging from 0.22x-0.86x and 0.23x-0.75x of the baseline energy, respectively. This suggests that the primitive with the highest TOPS/W (Analog-1 in this case) may not necessarily be the most energy-efficient when main memory is taken into perspective. Nonetheless, for every GEMM shape, at least one CiM primitive achieves higher energy efficiency than the baseline. For the cases when a primitive is not able to map all the weights (either high K or high N, depending on the primitive type and memory level), the total energy consumption becomes larger than the baseline due to an increase in the number of temporal reductions of the output values. As an example, Analog-2 performs poorly for large K, particularly at SMem, because the design parallelizes the N dimension and limits K to 64. The effect is pronounced when the weight matrix is small (e.g., QKTV for Analog-2-SMem, where N >> K) because the baseline can efficiently reduce the partial sums and does not require as many accesses to the main-memory level. Digital-2 at SMem also has fewer CiM units, but it allows mapping the K dimension to different columns in the same array, unlike its analog counterparts.
When: We observe the highest reduction in energy for Layer18, Layer23, GemmV, FC1 and FC2, with up to 0.24x, 0.33x, 0.24x, 0.27x and 0.12x of the baseline energy, respectively, particularly when re-purposed at the RF level. All of these layers have a high K value, which shows that the energy benefits from CiM primitives are maximized when there is a high number of partial-sum reductions. For memory-bound layers (Layer50, MLP2 and MLP3), all CiM primitives exhibit similar benefits, with a 30% reduction in energy, because the total energy is dominated by DRAM accesses. Where: The RF has a larger number of CiM units than SMem due to its larger area and more instances, which generally leads to fewer memory accesses and lower energy consumption. For example, Analog-2 benefits the most from computing at RF compared to SMem for the QKT layer: the total energy consumption is reduced from 4x to 0.6x of the baseline energy with CiM added to SMem and RF, respectively. Similarly, the Analog-1, Digital-1 and Digital-2 energy consumption is approximately 0.5x, 0.6x and 0.8x at RF compared to SMem for the GemmV, FC1 and FC2 layers, respectively. However, on average across different GEMM shapes, one CiM primitive may be more suitable for a given memory level depending on the primitive design and the GEMM shape. For example, the Analog-2 primitive, the one with high area overhead, always consumes less energy when integrated at RF than at SMem. For other primitives, the energy consumption at SMem can be lower than that at RF, such as for Layer2 (M>>N≈K). Another example is Digital-2, which achieves higher energy efficiency on average at SMem than at RF. §.§ Discussion and Future Work Takeaways: As ML models become larger and the memory hierarchy evolves, the absolute numbers for performance and energy-efficiency gains might change. However, we expect the takeaways (Table <ref>) to remain the same under iso-area constraints. The takeaways follow from the analytical evaluations, considering that 1) performance benefits are decided by the compute latency and compute parallelism, and 2) energy benefits depend on memory accesses and compute cost. For CiM primitives, parallelism, in turn, is decided by the mapping constraints of the CiM primitive, the memory capacity, and the area overhead. For example, the Digital-1 CiM primitive has approximately the same area overhead as Analog-1 and almost double the compute latency. Still, Digital-1 achieves higher parallelism due to more flexible mapping constraints, which allow full row/column parallelism. On the other hand, the energy benefits of CiM depend on the reduction in the number of memory accesses and on the inherently more energy-efficient CiM units. Dataflow plays an important role in reducing memory accesses and, hence, maximizing CiM benefits, which is taken into account in the analysis.
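The energy side of these takeaways reduces to a first-order accounting of per-level access counts times per-access energies, plus the MAC energy itself, in the spirit of Accelergy's energy tables. The sketch below uses placeholder counts and energy values, not the scaled numbers of this work:

def total_energy_uj(access_counts, access_energy_pj, mac_count, mac_energy_pj):
    mem_pj = sum(access_counts[lvl] * access_energy_pj[lvl] for lvl in access_counts)
    return (mem_pj + mac_count * mac_energy_pj) / 1e6   # pJ -> uJ

counts = {"DRAM": 1.2e6, "SMem": 4.0e6, "RF": 2.5e7}    # illustrative access counts
energy = {"DRAM": 640.0, "SMem": 8.0, "RF": 1.0}        # pJ/access, placeholders
print(total_energy_uj(counts, energy, mac_count=2.0e8, mac_energy_pj=0.5))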
Assumptions: Our evaluations are based on the analytical modelling used in Timeloop and assume a simplified architecture with a fully pipelined memory subsystem. In such an architecture, the latency of one memory-level access is hidden by accesses at another memory level if there is no bandwidth throttling. Bandwidth throttling depends on the total number of memory accesses and the bandwidth at that memory level. The model assumes that all accesses are coalesced and does not consider effects such as bank conflicts, limited miss-handling buffer capacity, and other architectural optimizations in memory. It still captures the approximate performance of different CiM primitives and helps in gaining an understanding of their impact at the system level. There can be several ways of integrating CiM into the memory subsystem. Integrating CiM under iso-capacity constraints could provide higher parallelism, but it comes at the cost of increased area. Integrating CiM under iso-memory-area constraints affects the cache memory capacity. This could further affect the baseline throughput of an SM. However, our work envisions an architecture with heterogeneous cores, having both CiM and non-CiM SMs in the system architecture. Such an architecture would have similar baseline SM performance. Additionally, we assumed a weight-stationary dataflow for all scenarios, as it is the most commonly used dataflow. Adding more flexibility in the CiM dataflow could lead to a bigger design space and remains an open search space to explore. Further, recent works <cit.> have demonstrated floating-point CiM accelerators, expanding the scope of computation in memory. However, all experiments in this work assumed INT-8 precision, including in the baseline, to make the comparison inclusive of a variety of CiM primitives. Further, we only evaluate individual GEMM operations present in ML workloads to estimate the maximum possible performance and energy-efficiency gains. For end-to-end analysis, one approach could be to extend Timeloop to include features such as layer fusion and the cost overhead of non-GEMM operations. Inter-layer evaluations also require Timeloop to consider where the inputs, weights and outputs from the previous layer reside in the memory hierarchy, which is not supported in the current version. Note that CiM-integrated architectures would also incur a programming cost overhead that should be considered when finalizing the design. Our work provides early-stage estimations for developing such CiM-integrated architecture designs. Future Possibilities: To overcome the area and latency bottleneck from the ADC blocks that affect compute parallelism in analog primitives, analog ADC-less primitives can be considered <cit.>. Analog ADC-less primitives employ only sense amplifiers as peripheral circuits to convert the partial sum from the array into a 1-bit output. Such a low-area-overhead primitive would result in a higher number of CiM units in the same memory area. This would further help in achieving higher performance from higher parallelism, along with massive improvements in energy consumption. One caveat with analog primitives is the loss in computation accuracy. An ADC-less design could lead to a large accuracy loss due to aggressive hardware quantization. However, studies <cit.> suggest that it is possible to achieve minimal accuracy degradation with quantization-aware training techniques. Another way of integrating CiM into a memory subsystem could be to include additional memory levels or to change the memory technology to include emerging devices. Such studies are outside the scope of this work; however, our methodology can be extended for such analyses and new CiM primitives. The analysis can also be extended to more than one SM architecture to include inter-SM communication or network costs. A complete A100-like GPU model would have 108 SMs and can be used to map bigger models. The performance correlation of our full-model version of the GPU in Timeloop, with specifications from Table <ref>, is shown in Fig. <ref>, where the measured GEMM kernels are run on an A100-80GB using CUTLASS 3.2 <cit.>. However, the design space explodes with the full GPU model and increases the dataflow search time manifold. Evaluating a full GPU-like model would require optimizations or new methodologies to efficiently find the best mapping in such a large design space.
§ CONCLUSION We integrated Compute-in-Memory (CiM) into the on-chip cache memory of a GPU architecture. Our experiments provide a comprehensive analysis of CiM benefits for accelerating general matrix multiplication (GEMM) workloads, based on machine learning (ML) inference tasks, on such a system. In particular, the iso-area-based analytical evaluations lead to the following conclusions: Regarding what, the Digital-1 CiM primitive achieves higher performance (GFLOPs) than the baseline for most GEMM shapes because of full row/column parallelism and low area overhead. In contrast, although the analog primitives do not reach the high performance levels of Digital-1, they excel in energy efficiency, consuming as little as 0.12x of the baseline energy. The digital primitives follow closely, with their best energy savings at 0.22x of the baseline. Such trade-offs in energy efficiency and performance with CiM could benefit GPUs, particularly when they operate at a reduced frequency to manage power <cit.>. The investigation of when indicates that compute-bound layers with high data reuse and large K (>M) values benefit the most from CiM in both performance and energy efficiency, for example, the fully connected (FC1, FC2) layers in the BERT model. Conversely, compute-bound GEMMs with low data reuse and smaller K (<<M) usually achieve higher throughput with the baseline but show energy advantages with CiM. The initial layers of ResNet50 with the ImageNet dataset, such as Layer2 and Layer11 in our analysis, account for such results. Similarly, skewed GEMMs exhibit energy reduction and comparable throughput in K>>N scenarios, but performance declines when K<<N. Memory-bound GEMMs, such as the fully connected layers in ResNet50 and DLRM, demonstrate only energy benefits with CiM, with no improvement in throughput. Furthermore, the findings for where suggest that the capacity of a memory level is more important than its level number in the memory hierarchy when integrating CiM. Higher memory capacity could lead to higher performance through weight duplication. Energy benefits are also slightly larger with higher memory capacity, by reducing the number of memory accesses. However, CiM characteristics such as mapping constraints and compute latency could still limit the performance and energy benefits even with high memory capacity. In summary, this work performs a comprehensive evaluation of the trade-offs between energy, area, and performance for CiM primitives across the on-chip memory hierarchy. We believe our work provides key insights for understanding the potential of SRAM-based CiM to alleviate the energy problem while achieving comparable performance. In turn, our approach facilitates the optimization of CiM-based architectures for ML inference. § ACKNOWLEDGMENTS The authors would like to acknowledge the North America Qualcomm Innovation Fellowship offered in 2021 for funding the project and inputs from Ramesh Chauhan in the initial stage of the project. Part of the research was also funded by CoCoSys, one of the 7 JUMP centers funded by DARPA and SRC. The authors would also like to thank Aayush Ankit for the brainstorming and discussion sessions. parashar2019timeloop Angshuman Parashar, Priyanka Raina, Yakun Sophia Shao, Yu-Hsin Chen, Victor A Ying, Anurag Mukkara, Rangharajan Venkatesan, Brucek Khailany, Stephen W Keckler, and Joel Emer. Timeloop: A systematic approach to dnn accelerator evaluation.
In 2019 IEEE international symposium on performance analysis of systems and software (ISPASS), pages 304–315. IEEE, 2019.iccad_2019_accelergy Yannan N. Wu, Joel S. Emer, and Vivienne Sze. Accelergy: An Architecture-Level Energy Estimation Methodology for Accelerator Designs. In IEEE/ACM International Conference On Computer Aided Design (ICCAD), 2019.welser2018future Jeffrey Welser, Jed W Pitera, and Cindy Goldberg. Future computing hardware for ai. In 2018 IEEE International Electron Devices Meeting (IEDM), pages 1–3. IEEE, 2018.kim2023full Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, Hasan Genc, Grace Dinh, Qijing Huang, Kurt Keutzer, Michael W Mahoney, et al. Full stack optimization of transformer inference: a survey. arXiv preprint arXiv:2302.14017, 2023.wulf1995hitting Wm A Wulf and Sally A McKee. Hitting the memory wall: Implications of the obvious. ACM SIGARCH computer architecture news, 23(1):20–24, 1995.verma2019memory Naveen Verma, Hongyang Jia, Hossein Valavi, Yinqi Tang, Murat Ozatay, Lung-Yen Chen, Bonan Zhang, and Peter Deaville. In-memory computing: Advances and prospects. IEEE Solid-State Circuits Magazine, 11(3):43–55, 2019.chakraborty2020resistive Indranil Chakraborty, Mustafa Ali, Aayush Ankit, Shubham Jain, Sourjya Roy, Shrihari Sridharan, Amogh Agrawal, Anand Raghunathan, and Kaushik Roy. Resistive crossbars as approximate hardware building blocks for machine learning: Opportunities and challenges. Proceedings of the IEEE, 108(12):2276–2310, 2020.seo2023advances Jae-Sun Seo. Advances and trends on on-chip compute-in-memory macros and accelerators. In 2023 60th ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE, 2023.mutlu2022modern Onur Mutlu, Saugata Ghose, Juan Gómez-Luna, and Rachata Ausavarungnirun. A modern primer on processing in memory. In Emerging Computing: From Devices to Systems: Looking Beyond Moore and Von Neumann, pages 171–243. Springer, 2022.9435945 Myeonggu Kang, Hyeonuk Kim, Hyein Shin, Jaehyeong Sim, Kyeonghan Kim, and Lee-Sup Kim. S-flash: A nand flash-based deep neural network accelerator exploiting bit-level sparsity. IEEE Transactions on Computers, 71(6):1291–1304, 2022.fujiki2019duality Daichi Fujiki, Scott Mahlke, and Reetuparna Das. Duality cache for data parallel acceleration. In Proceedings of the 46th International Symposium on Computer Architecture, pages 397–410, 2019.fujiki2022multi Daichi Fujiki, Alireza Khadem, Scott Mahlke, and Reetuparna Das. Multi-layer in-memory processing. In 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 920–936. IEEE, 2022.lockerman2020livia Elliot Lockerman, Axel Feldmann, Mohammad Bakhshalipour, Alexandru Stanescu, Shashwat Gupta, Daniel Sanchez, and Nathan Beckmann. Livia: Data-centric computing throughout the memory hierarchy. In ASPLOS, pages 417–433, 2020.ampere NVIDIA. Ampere Architecture. <https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth>.computesram Jingcheng Wang, Xiaowei Wang, Charles Eckert, Arun Subramaniyan, Reetuparna Das, David Blaauw, and Dennis Sylvester. A 28-nm compute sram with bit-serial logic/arithmetic operations for programmable in-memory vector computing. IEEE Journal of Solid-State Circuits, 55(1):76–86, 2020.analog1cim Xin Si and et al. A local computing cell and 6t sram-based computing-in-memory macro with 8-b mac operation for edge ai chips. 
IEEE Journal of Solid-State Circuits, 56(9):2817–2831, 2021.ali2023cicc Mustafa Ali, Indranil Chakraborty, Sakshi Choudhary, Muya Chang, Dong Eun Kim, Arijit Raychowdhury, and Kaushik Roy. A 65 nm 1.4-6.7 tops/w adaptive-snr sparsity-aware cim core with load balancing support for dl workloads. In 2023 IEEE Custom Integrated Circuits Conference (CICC), pages 1–2, 2023.digitaltsmc Yu-Der Chih et al. 16.4 an 89tops/w and 16.3tops/mm2 all-digital sram-based full-precision compute-in memory macro in 22nm for machine-learning edge applications. In 2021 IEEE International Solid-State Circuits Conference (ISSCC), volume 64, pages 252–254, 2021.tmsc2023digital Haruki Mori et al. A 4nm 6163-tops/w/b 4790-TOPS/mm^2/b sram based digital-computing-in-memory macro supporting bit-width flexibility and simultaneous mac and weight update. In 2023 IEEE International Solid-State Circuits Conference (ISSCC), pages 132–134, 2023.isscc2022mfchang Ping-Chun Wu et al. A 28nm 1mb time-domain computing-in-memory 6t-sram macro with a 6.6ns latency, 1241gops and 37.01tops/w for 8b-mac operations for edge-ai devices. In 2022 IEEE International Solid-State Circuits Conference (ISSCC), volume 65, pages 1–3, 2022.tsmc2022isscc Hidehiro Fujiwara et al. A 5-nm 254-tops/w 221-tops/mm2 fully-digital computing-in-memory macro supporting wide-range dynamic-voltage-frequency scaling and simultaneous mac and write operations. In 2022 IEEE International Solid-State Circuits Conference (ISSCC), volume 65, pages 1–3, 2022.chetlur2014cudnn Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.gemmnv Nvidia Docs. Matrix Multiplication Background User's Guide. https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html, 2020-23.williams2009roofline Samuel Williams, Andrew Waterman, and David Patterson. Roofline: an insightful visual performance model for multicore architectures. Communications of the ACM, 52(4):65–76, 2009.devic2022pim Alexandar Devic, Siddhartha Balakrishna Rai, Anand Sivasubramaniam, Ameen Akel, Sean Eilert, and Justin Eno. To pim or not for emerging general purpose processing in ddr memory systems. In Proceedings of the 49th Annual International Symposium on Computer Architecture, pages 231–244, 2022.houshmand2023benchmarking Pouya Houshmand, Jiacong Sun, and Marian Verhelst. Benchmarking and modeling of analog and digital sram in-memory computing architectures. arXiv preprint arXiv:2305.18335, 2023.naumov2019deep Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:1906.00091, 2019.horowitz20141 Mark Horowitz. 1.1 computing's energy problem (and what we can do about it). In 2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC), pages 10–14. IEEE, 2014.shanbhag2022benchmarking Naresh R Shanbhag and Saion K Roy. Benchmarking in-memory computing architectures. IEEE Open Journal of the Solid-State Circuits Society, 2:288–300, 2022.impulse Amogh Agrawal, Mustafa Ali, Minsuk Koo, Nitin Rathi, Akhilesh Jaiswal, and Kaushik Roy. Impulse: A 65-nm digital compute-in-memory macro with fused weights and membrane potential for spike-based sequential learning tasks.
IEEE Solid-State Circuits Letters, 4:137–140, 2021.ankit2019puma Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R Stanley Williams, Paolo Faraboschi, Wen-mei W Hwu, John Paul Strachan, and Kaushik Roy. Puma: A programmable ultra-efficient memristor-based accelerator for machine learning inference. In ASPLOS, 2019.tsmc7nmanalog Qing Dong, Mahmut E. Sinangil, Burak Erbagci, Dar Sun, Win-San Khwa, Hung-Jen Liao, Yih Wang, and Jonathan Chang. 15.3 a 351tops/w and 372.4gops compute-in-memory sram macro in 7nm finfet cmos for machine-learning applications. In 2020 IEEE International Solid-State Circuits Conference - (ISSCC), pages 242–244, 2020.parashar2017scnn Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli, Rangharajan Venkatesan, Brucek Khailany, Joel Emer, Stephen W Keckler, and William J Dally. Scnn: An accelerator for compressed-sparse convolutional neural networks. ACM SIGARCH computer architecture news, 45(2):27–40, 2017.kwon2020maestro Hyoukjun Kwon, Prasanth Chatarasi, Vivek Sarkar, Tushar Krishna, Michael Pellauer, and Angshuman Parashar. Maestro: A data-centric approach to understand reuse, performance, and hardware cost of dnn mappings. IEEE micro, 40(3):20–29, 2020.mei2020zigzag Linyan Mei, Pouya Houshmand, Vikram Jain, Sebastian Giraldo, and Marian Verhelst. Zigzag: A memory-centric rapid dnn accelerator design space exploration framework. arXiv preprint arXiv:2007.11360, 2020.accelergypim Yannan Nellie Wu, Vivienne Sze, and Joel S. Emer. Accelergy-Timeloop Processing-in-memory (PIM) Example. https://github.com/Accelergy-Project/processing-in-memory-design, 2020.stillmaker2017scaling Aaron Stillmaker and Bevan Baas. Scaling equations for the accurate prediction of cmos device performance from 180 nm to 7 nm. Integration, 58:74–81, 2017.he2015deep Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. corr abs/1512.03385 (2015), 2015.deng2009imagenet Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.devlin2018bert Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.nagel2019data Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1325–1334, 2019.dettmers2022llm Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339, 2022.mlwarkhairy Tim Rogers and Mahmoud Khairy. Machine Learning Accelerator War. https://www.sigarch.org/an-academics-attempt-to-clear-the-fog-of-the-machine-learning-accelerator-war/.wu202322nm Ping-Chun Wu, Jian-Wei Su, Li-Yang Hong, Jin-Sheng Ren, Chih-Han Chien, Ho-Yu Chen, Chao-En Ke, Hsu-Ming Hsiao, Sih-Han Li, Shyh-Shyuan Sheu, et al. A 22nm 832kb hybrid-domain floating-point sram in-memory-compute macro with 16.2-70.2 tflops/w for high-accuracy ai-edge devices. In 2023 IEEE International Solid-State Circuits Conference (ISSCC), pages 126–128. IEEE, 2023.yue202328nm Jinshan Yue, Chaojie He, Zi Wang, Zhaori Cong, Yifan He, Mufeng Zhou, Wenyu Sun, Xueqing Li, Chunmeng Dou, Feng Zhang, et al. 
A 28nm 16.9-300tops/w computing-in-memory processor supporting floating-point nn inference/training with intensive-cim sparse-digital architecture. In 2023 IEEE International Solid-State Circuits Conference (ISSCC), pages 1–3. IEEE, 2023.isscc2022adcless Bonan Yan et al. A 1.041-mb/mm2 27.38-tops/w signed-int8 dynamic-logic-based adc-less sram compute-in-memory macro in 28nm with reconfigurable bitwise operation for ai and embedded applications. In 2022 IEEE International Solid-State Circuits Conference (ISSCC), volume 65, pages 188–190, 2022.saxena2022towards Utkarsh Saxena, Indranil Chakraborty, and Kaushik Roy. Towards adc-less compute-in-memory accelerators for energy efficient deep learning. In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 624–627. IEEE, 2022.cutlass Vijay Thakkar, Pradeep Ramani, Cris Cecka, Aniket Shivam, Honghao Lu, Ethan Yan, Jack Kosaian, Mark Hoemmen, Haicheng Wu, Andrew Kerr, Matt Nicely, Duane Merrill, Dustyn Blasig, Fengqi Qiao, Piotr Majcher, Paul Springer, Markus Hohnerbach, Jin Wang, and Manish Gupta. CUTLASS. https://github.com/NVIDIA/cutlass, 2023.kandiah2021accelwattch Vijay Kandiah, Scott Peverelle, Mahmoud Khairy, Junrui Pan, Amogh Manjunath, Timothy G Rogers, Tor M Aamodt, and Nikos Hardavellas. Accelwattch: A power modeling framework for modern gpus. In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, pages 738–753, 2021.§ BIOGRAPHY SECTION Tanvi Sharma received her bachelor's degree from the Indian Institute of Technology, Roorkee, in 2018 and worked as a Digital Design Engineer at Texas Instruments before joining Purdue University in 2019. She is in the direct PhD program at Purdue, under the guidance of Professor Kaushik Roy. She has been a recipient of the Qualcomm Innovation Fellowship in 2021 and the MLSys Rising Stars Award in 2023. Her research interests lie at the intersection of machine learning and systems, with a focus on developing energy-efficient ML hardware accelerators using compute-in-memory solutions. Mustafa Ali received his B.Sc. and M.Sc. degrees in Electrical Engineering from MTC, Cairo, Egypt, in 2011 and 2016, respectively. He worked on flexible electronics applications using TFTs during his M.Sc. from 2014 to 2016. Additionally, he worked as a TA and RA at MTC from 2013 to 2017. Mustafa was also a hardware and embedded systems engineer at Integreight, Inc. from 2012 to 2017. He joined the Nano-electronics Research Lab (NRL), Purdue University, from Spring 2018 to Fall 2022, pursuing his Ph.D. under the guidance of Prof. Roy. His research interest lay in innovative hardware acceleration of ML workloads, including compute-in-memory. He is currently a hardware engineer at Microsoft Azure Hardware Systems and Infrastructure. Indranil Chakraborty is currently a Hardware Engineer at Google, Sunnyvale, California. He received the B.Engg. degree in Electronics and Telecommunication Engineering from Jadavpur University, Kolkata, India, in 2013, an M.Tech. degree in Electrical Engineering from the Indian Institute of Technology Bombay, Mumbai, India, in 2016, and a Ph.D. degree in 2021 from the Nanoelectronics Research Laboratory, Purdue University, West Lafayette, IN, USA. His primary research interests lie in the architecture and design of hardware accelerators for machine-learning workloads using CMOS and emerging technologies. Kaushik Roy is the Edward G.
Tiedemann, Jr., Distinguished Professor of Electrical and Computer Engineering at Purdue University. He received his BTech from the Indian Institute of Technology, Kharagpur, and his PhD from the University of Illinois at Urbana-Champaign in 1990, and then joined the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked for three years on FPGA architecture development and low-power circuit design. His current research focuses on cognitive algorithms, circuits and architecture for energy-efficient neuromorphic computing/machine learning, and neuro-mimetic devices. Kaushik has supervised 100 PhD dissertations, and his students are well placed in universities and industry. He is the co-author of two books on Low Power CMOS VLSI Design (John Wiley & McGraw Hill).
http://arxiv.org/abs/2312.15896v1
{ "authors": [ "Tanvi Sharma", "Mustafa Ali", "Indranil Chakraborty", "Kaushik Roy" ], "categories": [ "cs.AR", "cs.DC", "cs.LG" ], "primary_category": "cs.AR", "published": "20231226061612", "title": "WWW: What, When, Where to Compute-in-Memory" }
Preference learning is a key technology for aligning language models with human values. Reinforcement Learning from Human Feedback (RLHF) is a model-based algorithm for optimizing preference learning, which first fits a reward model to the preference scores and then optimizes the generation policy with the on-policy PPO algorithm to maximize the reward. The RLHF pipeline is complex, time-consuming and unstable. The Direct Preference Optimization (DPO) algorithm uses an off-policy algorithm to directly optimize the generation policy, eliminating the need for a reward model; it is data-efficient and stable. DPO uses the Bradley-Terry model and a log loss, which leads to over-fitting to the preference data, at the expense of ignoring the KL-regularization term, when preferences are deterministic. IPO uses a root-finding MSE loss to solve this KL-regularization ignorance problem. In this paper, we show that, although IPO fixes the problem when preferences are deterministic, both DPO and IPO still fail the KL-regularization term, because the support of the preference distribution is not equal to that of the reference distribution. Then, we design a simple and intuitive off-policy preference optimization algorithm from an importance-sampling view, which we call Maximum Preference Optimization (MPO), and add off-policy KL-regularization terms that make KL-regularization truly effective.
The objective of MPO bears resemblance to RLHF's objective, and, like IPO, MPO is off-policy; thus MPO attains the best of both worlds. To simplify the learning process and save memory usage, MPO eliminates the need for both the reward model and the reference policy.§ INTRODUCTION Large language models (LLMs) <cit.> <cit.> <cit.> <cit.> with massive-scale parameters, trained on large amounts of data using pretraining, supervised fine-tuning (SFT) <cit.>, and instruction fine-tuning (IFT) <cit.> algorithms, have led to surprising capabilities such as few-shot in-context learning. The training data come from a variety of domains and have varying quality, and the training algorithms (pretraining, SFT, IFT) are all based on maximum likelihood estimation (MLE), which learns to match the distribution of the data. LLMs trained on these data using the MLE algorithm generate content with a quality gap relative to human judgment or values. Preference learning <cit.> <cit.> <cit.> <cit.> algorithms significantly improve generation quality to align with human values. Such an algorithm first collects pairs of generations under the same context, together with a pairwise human preference indicating which generation is better. Then a preference learning algorithm is used to optimize the generation policy to generate the better candidate from the pair. Reinforcement learning from human feedback (RLHF) <cit.> uses reward-model-based reinforcement learning to learn the optimal policy. It first learns a reward model from the preference data, and then uses the on-policy PPO <cit.> algorithm to maximize the learned reward. The reward is learned using the Bradley-Terry model <cit.>, which assumes the preference score can be approximated by point-wise rewards. This assumption may lead to an approximation error when preferences are deterministic. The PPO algorithm is run on data sampled from the generation policy, which may have a different support or a distribution drift from the preference data; inference with the learned reward model on such out-of-distribution data may reduce accuracy. The RLHF pipeline needs to train a reward model and run the on-policy PPO algorithm, which is complex, time-consuming, and unstable. Direct preference optimization (DPO) <cit.> combines an off-policy algorithm with the Bradley-Terry model to directly learn the generation policy from preference data. The off-policy algorithm is based on KL-regularized reward maximization from the offline-RL community, which is data-efficient, stable, and eliminates the need for a reward model. When preferences are deterministic, which occurs in most cases, the reward of the Bradley-Terry model is undefined; this leads to ignoring the KL-regularization term and over-fitting the preference dataset. Identity-mapping preference optimization (IPO) <cit.> also uses an off-policy algorithm with KL-regularization to learn the generation policy from preference data. It learns to maximize the preference probability under KL-regularization, and uses a root-finding mean-square-error (MSE) loss to solve the maximization problem and fix the KL-regularization ignorance problem. But because the support of the preference data distribution is different from that of the reference policy distribution, the KL-regularization terms of both DPO and IPO fail. In this paper, we design a simple and intuitive off-policy maximum preference optimization (MPO) algorithm from an importance-sampling view, and add an off-policy KL-regularization term that makes KL-regularization truly effective.
The key contributions of this paper can be summarized as follows: * formalize preference learning as a preference/reward maximization problem, and design a simple and intuitive off-policy algorithm from an importance-sampling view * show that KL-regularization fails when optimizing on preference data, and design an off-policy sample loss that makes KL-regularization truly effective * eliminate the reward-substitution assumption and the out-of-distribution generalization assumption * eliminate the need for both the reward model and the reference policy, saving memory usage § PRELIMINARIES The main pipeline of preference learning usually consists of three phases: 1) pretraining and supervised fine-tuning (SFT), where SFT is optional; 2) preference data collection; 3) reinforcement-learning optimization. Pretraining and SFT phase Preference learning is typically started from an LLM pretrained or fine-tuned on high-quality data using maximum likelihood estimation. We define the final policy after this phase as π_ref, and the data used to train π_ref as 𝒟_ref, so π_ref≈max_π𝔼_x,y∼𝒟_ref[logπ(y|x)]. Preference data collection phase After the pretraining and SFT phase, π_ref is prompted with a context x and generates two responses y_w, y_l ∼π_ref(·|x). Then x, y_w, y_l is labeled by humans to judge which response is preferred; we denote y_w≻ y_l | x if y_w is preferred, and y_l≻ y_w|x if y_l is preferred. We define the indicator I=𝕀[y_w≻ y_l|x], and all tuples ⟨ x,y_w,y_l,I⟩ constitute the preference dataset 𝒟^p: ⟨ x,y_w,y_l,I⟩∼𝒟^p. We also define ρ as the context distribution of x, and μ as the preference-pair distribution given context x, in the preference data distribution: x∼ρ and y_w,y_l,I∼μ. Let p^*(y_w≻ y_l|x)=𝔼_y_w,y_l,I∼μ[𝕀{I=1}|x], which denotes the preference probability of y_w ≻ y_l given context x. Then the expected preference of y over μ, denoted p^*(y≻μ|x), is given by the following equation: p^*(y≻μ|x)=𝔼_y_l∼μ(·|x)[p^*(y≻ y_l|x)]. For any policy π, denote the total preference of policy π over μ as p^*(π≻μ) = 𝔼_x∼ρ, y∼π(·|x)[p^*(y≻μ|x)]. Reinforcement-learning optimization phase In the final phase, the prevailing method uses a reinforcement learning algorithm to learn an explicit or implicit reward from the preference data, and then uses an on-policy or off-policy policy-gradient algorithm to maximize the reward. Recently, some methods derive the optimal policy from reward maximization under KL-regularization, also derive a loss whose solution is that optimal policy, and then learn the optimal policy by minimizing the derived loss on an empirical dataset. § BACKGROUND §.§ Reinforcement Learning from Human Feedback (RLHF) In this paper, we define a new pair-wise preference reward r^p(y_w≻ y_l|x)=p^*(y_w≻ y_l|x), and design a new algorithm to directly optimize the preference (reward) maximization objective. RLHF uses standard two-phase reward-model-based reinforcement learning to maximize the reward. It contains two steps: 1) reward estimation from preference data; 2) reward maximization using the PPO algorithm. Reward estimation from preference data In previous research, the point-wise reward is learned using the Bradley-Terry model. Given context x, define r^*(y|x) as the reward of generating y. The Bradley-Terry model expresses the preference probability p^*(y_w≻ y_l|x) as: p^*(y_w≻ y_l|x) = exp(r^*(y_w|x))/(exp(r^*(y_w|x)) + exp(r^*(y_l|x))) = σ(r^*(y_w|x)-r^*(y_l|x)), where σ(·) is the sigmoid function. RLHF uses <ref> to model the point-wise reward and optimizes a log loss to estimate it. The estimated reward is parameterized as r_ϕ, and the loss function is defined as: ℒ(r_ϕ)=-𝔼_x,y_w,y_l,I∼𝒟^p[ I·logσ(r_ϕ(y_w|x) - r_ϕ(y_l|x)) + (1 - I)·logσ(r_ϕ(y_l|x)-r_ϕ(y_w|x))], where I=𝕀[y_w≻ y_l|x]. The loss <ref> performs maximum likelihood estimation, and the estimated reward r_ϕ is used to approximate the probability p(y_w≻ y_l|x) of the preference data distribution 𝒟^p.
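A minimal PyTorch sketch of this reward fit follows; `reward_model` is assumed to be any scalar-output network over (context, response), an assumption of this sketch rather than a prescription of the paper. Binary cross-entropy with logits reproduces the negative log-likelihood of the Bradley-Terry probability.

import torch
import torch.nn.functional as F

def bt_reward_loss(reward_model, x, y_w, y_l, label):
    """label = 1 when y_w was preferred (the indicator I)."""
    logits = reward_model(x, y_w) - reward_model(x, y_l)  # r_phi(y_w|x) - r_phi(y_l|x)
    # equals -[I log sigma(logits) + (1 - I) log sigma(-logits)]
    return F.binary_cross_entropy_with_logits(logits, label.float())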
Reward maximization using PPO algorithm The reward-maximization, or KL-regularized reward-maximization, objective is used for reinforcement-learning policy optimization: max_π_θ𝔼_y∼π_θ(·|x)[r_ϕ(y|x)]-β𝔻_KL[π_θ||π_ref], where 𝔻_KL is the KL-divergence and β is the regularization weight. This objective is optimized with the on-policy REINFORCE <cit.> or PPO algorithm. The second phase of RLHF optimizes objective <ref> using the r_ϕ learned from <ref>. PPO is an on-policy algorithm that continually collects data from the current policy π_θ and the estimated reward model; it then uses these data to estimate the gradient of <ref> and updates the current policy. Because π_θ is different from π_ref defined in <ref>, samples generated by π_θ have a distribution different from 𝒟^p. So, RLHF assumes r_ϕ can generalize to the out-of-distribution samples generated by π_θ. Following prior work <cit.>, it is straightforward to show that the optimal solution of <ref> takes the form: π_θ∝π_ref exp(r_ϕ(x,y)/β). RLHF also mixes a pretraining gradient into the PPO objective, in order to fix the performance regression on public NLP datasets, and calls the final objective 'PPO-ptx'. Define 𝒟_pretrain as the pretraining dataset; then the combined objective is defined as: max_π_θ𝔼_⟨ x,y⟩∼π_θ[ r_ϕ(x,y) -βlog(π_θ(y|x)/π_ref(y|x))] +γ𝔼_x∼𝒟_pretrain[logπ_θ(x)]. §.§ Direct Preference Optimization (DPO) An alternative RL algorithm for preference learning is direct preference optimization (DPO), which eliminates the training of a reward model. DPO derives a reward from Eq. <ref>: r(x,y)=βlog(π_θ(y|x)/π_ref(y|x))+βlog Z(x), where Z(x)=∑_yπ_ref(y|x)exp(r(x,y)/β) is the partition function. Substituting the re-parameterized reward in <ref> into the Bradley-Terry model <ref>: p_θ(y_w≻ y_l|x) =σ( β h_π_θ(x,y_w,y_l))=1/(1 + exp( -β h_π_θ(x, y_w, y_l) )), where h_π_θ(x, y_w, y_l) is defined as: log( π_θ(y_w|x)π_ref(y_l|x) / ( π_ref(y_w|x)π_θ(y_l|x) ) ). Substituting the probability <ref> into the log loss <ref>, DPO formulates a maximum likelihood objective for the parameterized policy π_θ given the empirical preference dataset 𝒟^p: ℒ_DPO(π_θ;π_ref)= -𝔼_x,y_w,y_l,I∼𝒟^p[ I logσ( β h_π_θ(x, y_w, y_l) ) + (1-I) logσ( β h_π_θ(x, y_l, y_w) ) ]. Although this pair-wise loss eliminates the need to calculate the partition function Z(x), it also leaves the optimal solution π^*_θ undefined when there are not enough constraints. For example, if we weight π_θ(y_w|x) and π_θ(y_l|x) by the same multiplier M, the logits of the sigmoid function σ remain the same: βlog(π_θ(y_w|x) · M/π_ref(y_w|x)) - βlog(π_θ(y_l|x) · M/π_ref(y_l|x)) = βlog(π_θ(y_w|x)/π_ref(y_w|x)) - βlog(π_θ(y_l|x)/π_ref(y_l|x)). This makes the final learned policy π_θ suboptimal, and it also defeats the KL-regularization term.
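For concreteness, the DPO loss above can be written in a few lines of PyTorch, assuming per-sequence log-probabilities have already been summed; all names are illustrative. Note the common-multiplier invariance discussed above: adding the same constant to logp_w and logp_l (i.e., scaling both probabilities by M) leaves the loss unchanged.

import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, label, beta=0.1):
    h = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # equals -[I log sigma(beta h(x,y_w,y_l)) + (1-I) log sigma(beta h(x,y_l,y_w))]
    return F.binary_cross_entropy_with_logits(h, label.float())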
§.§ Ψ-PO with identity mapping (IPO) IPO defines a new objective called the Ψ-preference optimization objective (ΨPO): max_π_θ𝔼_x∼ρ, y_w∼π_θ(·|x), y_l∼μ(·|x) [Ψ(p^*(y_w≻ y_l|x))] - τ𝔻_KL[π_θ||π_ref], where Ψ is a general non-decreasing function Ψ: [0, 1]→ℝ. Taking Ψ to be the identity mapping leads to direct regularized optimization of total preferences: max_π_θ p^*(π≻μ)-τ𝔻_KL[π_θ||π_ref]. To optimize this objective, IPO derives an off-policy loss on the empirical dataset: min_π_θ𝔼_x,y_w,y_l,I∼𝒟^p[ I·h_π_θ(x, y_w, y_l) + (1 - I)·h_π_θ(x, y_l, y_w) - τ^-1/2]^2. IPO claims that when preferences are deterministic or near-deterministic, DPO will lead to over-fitting to the preference dataset at the expense of ignoring the KL-regularization term, whereas IPO's loss always regularizes π_θ towards π_ref by controlling the gap between the log-likelihood ratios logπ_θ(y_w|x)/π_θ(y_l|x) and logπ_ref(y_w|x)/π_ref(y_l|x). However, similar to DPO, the IPO loss controls the ratio π_θ(y_w|x)/π_θ(y_l|x), not π_θ(y_w|x) or π_θ(y_l|x) themselves. When there are not enough constraints, which is the case almost always, the optimal policy is undefined, so the KL-regularization term also fails. § METHOD In this work, we combine the preference-maximization term of IPO's loss with a modified regularization term of RLHF's loss. Unlike IPO, which derives its off-policy loss from preference maximization under KL-regularization, we formulate preference maximization as a reward-maximization problem in the reinforcement learning setting, and derive an off-policy objective from an importance-sampling-based policy optimization view, without the help of KL-regularization. Then we combine the off-policy reward-maximization objective with modified regularization terms of RLHF's 'PPO-ptx' objective, which makes the KL-regularization truly effective. We call the algorithm Maximum Preference Optimization (MPO). The final objective of MPO bears resemblance to RLHF's objective, and, like IPO, MPO is off-policy. §.§ Preference (reward) Maximization with Importance Sampling We define the preference as a reward, and formalize preference maximization as a reward-maximization problem in the reinforcement learning setting. Define ⟨ x,y_w, y_l⟩ as the state, 𝒜_x,y_w,y_l={y_w≻ y_l, y_l ≻ y_w} as the action set with actions 𝔞∈𝒜, and define the reward of action 𝔞 as r^p(𝔞|x,y_w,y_l), which is the preference probability. We simplify r^p(𝔞|x,y_w,y_l) as r^p(𝔞|x), so: r^p(𝔞 | x, y_w, y_l) = r^p(𝔞|x) = 𝔼[𝕀{𝔞}|x] = p^*(𝔞|x). Given a sample x, y_w, y_l, I ∈𝒟^p, we can get rewards for both actions in 𝒜: {⟨ x,y_w,y_l, I⟩}→{⟨(x,y_w, y_l)_state, (y_w ≻ y_l)_action, (I)_reward⟩, ⟨(x,y_w, y_l)_state, (y_l ≻ y_w)_action, (1-I)_reward⟩}, and we define the converted dataset as 𝒟^p. Because both actions appear at the same time in 𝒟^p, we can define the policy generating 𝒟^p as: π^p(𝔞|x,y_w,y_l)=1/2, ∀𝔞∈𝒜. Given the state ⟨ x, y_w, y_l ⟩, we simplify π^p(𝔞|x,y_w,y_l) as π^p(𝔞|x). Now we can formulate the distribution generating 𝒟^p as: x∼ρ, ⟨ y_w,y_l ⟩∼μ(· | x), 𝔞∼π^p(·|x), I∼ r^p(𝔞|x). Define the preference-generating policy to be optimized as π^p_θ; then the expected reward of π^p_θ over 𝒟^p is: R(π^p_θ)=𝔼_x∼ρ, ⟨ y_w, y_l ⟩∼μ(·|x), 𝔞∼π^p_θ(·|x)[r^p(𝔞|x)], and it is easy to see that R(π^p)=1/2. Expressed in the reinforcement learning context, the preference-maximization objective is: max_π^p_θ R(π^p_θ). Typically, the gradient of objective <ref> needs to be estimated from samples continually collected by π^p_θ, which is data-inefficient. But for preference maximization, we can directly estimate the gradient from the dataset 𝒟^p, which is off-policy.
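The dataset conversion above is mechanical: each labeled pair yields two (state, action, reward) tuples, and the behavior policy over the two actions is uniform. A short sketch with illustrative names:

def expand_preference_pair(x, y_w, y_l, label):
    """label is the indicator I; the behavior policy pi^p assigns 1/2 to each action."""
    state = (x, y_w, y_l)
    return [(state, "y_w>y_l", float(label)),
            (state, "y_l>y_w", float(1 - label))]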
The preference (reward) maximization objective <ref> (which is identical to the preference-maximization term of IPO) can be directly optimized using an off-policy gradient-ascent algorithm. According to the REINFORCE algorithm, the policy gradient of <ref> is: ∇ R(π^p_θ)= 𝔼_x∼ρ, ⟨ y_w,y_l⟩∼μ(·|x), 𝔞∼π^p_θ(·|x)[r^p(𝔞|x) ∇logπ^p_θ(𝔞|x)]. Using importance sampling, gradient <ref> can be expressed as: ∇ R(π^p_θ)=𝔼_x∼ρ, ⟨ y_w,y_l⟩∼μ(·|x), 𝔞∼π^p(·|x)[ (π^p_θ(𝔞|x)/π^p(𝔞|x)) r^p(𝔞|x) ∇logπ^p_θ(𝔞|x)]. Gradient <ref> can be calculated offline, and the corresponding algorithm is off-policy. According to equation <ref>, π^p(𝔞|x)=1/2, ∀𝔞∈𝒜, so gradients <ref> and <ref> are identical. §.§ Off-policy Preference Learning under KL-regularization Define the corresponding point-wise policy for π^p_θ as π^s_θ; we use the Bradley-Terry model to approximate π^p_θ as: π^p_θ(y_w≻ y_l|x)=σ(logπ^s_θ(y_w|x) - logπ^s_θ(y_l|x)). Then the reward-maximization objective <ref> can be expressed as: max_π^s_θ R(π^p_θ), which means we optimize π^s_θ to maximize the preference reward of the corresponding policy π^p_θ. Let π^s_ref=π_ref; then, like RLHF's 'PPO-ptx' objective <ref>, the KL-regularized preference-maximization objective <ref> can be expressed as: max_π^s_θ R(π^p_θ) - β𝔻_KL[π^s_θ||π^s_ref] +γ𝔼_x∼𝒟_pretrain[logπ^s_θ(x)]. From theorem <ref>, the preference-maximization term R(π^p_θ) can be directly solved with an off-policy policy-gradient method. The pretraining-data regularization term 𝔼_x∼𝒟_pretrain[logπ^s_θ(x)] can also be computed with offline data. But the KL-regularization term 𝔻_KL[π^s_θ||π^s_ref] needs samples collected from π^s_θ(·|x), which is on-policy. Off-policy KL-regularization on reference policy π^s_ref Minimizing 𝔻_KL[π^s_θ||π^s_ref] needs on-policy sample collection, which is data-inefficient. To solve this problem, we replace 𝔻_KL[π^s_θ||π^s_ref] with -𝔼_⟨ x,y ⟩∼𝒟_ref[logπ^s_θ(y|x)]. Like the pretraining-data regularization, the replaced regularization can be computed with offline data. Maximum Preference Optimization (MPO) loss With the modified regularization on π^s_ref, we get the final objective of MPO: max_π^s_θ R(π^p_θ) + β𝔼_⟨ x, y ⟩∼𝒟_ref[logπ^s_θ(y|x)] +γ𝔼_x∼𝒟_pretrain[logπ^s_θ(x)]. With this objective, we define the empirical MPO loss on the datasets 𝒟^p, 𝒟_ref and 𝒟_pretrain: ℒ_MPO= 𝔼_⟨ x,y_w,y_l,𝔞, I⟩∼𝒟^p[-I·π^p_θ(𝔞|x)] -β𝔼_⟨ x,y⟩∼𝒟_ref[logπ^s_θ(y|x)] -γ𝔼_x∼𝒟_pretrain[logπ^s_θ(x)]. This loss is simple and intuitive: the term -I·π^p_θ(𝔞|x) maximizes preferences, β controls the strength of the regularization on the SFT dataset, and γ controls the strength of the regularization on the pretraining dataset. Eliminating both the need for a reward model and π_ref By using the preference as the reward, we do not need a reward model to approximate the preference probability. By replacing the KL-regularization 𝔻_KL[π^s_θ||π^s_ref] with the offline-dataset regularization -𝔼_⟨ x,y ⟩∼𝒟_ref[logπ^s_θ(y|x)], we do not need the reference policy π^s_ref. So, the MPO algorithm simplifies the learning process and saves memory usage.
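Putting the pieces together, the empirical MPO loss can be sketched in PyTorch as below, with the pair-wise policy realized through the Bradley-Terry link above. The tensor names (sequence-level log-probabilities under the trained policy) are illustrative assumptions of this sketch.

import torch

def mpo_loss(logp_w, logp_l, label, logp_on_ref, logp_on_pretrain,
             beta=0.1, gamma=0.1):
    # pi^p_theta(y_w > y_l | x) via the Bradley-Terry link
    pref_w = torch.sigmoid(logp_w - logp_l)
    # E[-I * pi^p_theta(a|x)], summed over both actions of each pair
    pref_term = -(label * pref_w + (1.0 - label) * (1.0 - pref_w)).mean()
    sft_term = -logp_on_ref.mean()            # replaces the KL term, on D_ref
    pretrain_term = -logp_on_pretrain.mean()  # regularization on D_pretrain
    return pref_term + beta * sft_term + gamma * pretrain_term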
http://arxiv.org/abs/2312.16430v4
{ "authors": [ "Zaifan Jiang", "Xing Huang", "Chao Wei" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231227063454", "title": "Preference as Reward, Maximum Preference Optimization with Importance Sampling" }
[email protected] http://myweb.sabanciuniv.edu/durmusdemir/ Sabancı University, Faculty of Engineering and Natural Sciences, 34956 İstanbul, Türkiye

The ultraviolet cutoff on a quantum field theory can be interpreted as a condensate of the affine curvature such that, while the maximum of the affine action gives the power-law corrections, its minimum leads to the emergence of gravity. This mechanism applies also to fundamental strings, as their spinless unstable ground levels can be represented by the scalar affine curvature such that open strings (D-branes) decay to closed strings and closed strings to finite minima with emergent gravity. Affine curvature is less sensitive to massive string levels than the tachyon, and the field-theoretic and stringy emergent gravities take the same form. It may be that affine condensation provides an additional link between string theory and the known physics at low energies.

Emergent Gravity Completion in Quantum Field Theory, and Affine Condensation in Open and Closed Strings
Durmuş Demir 0000-0002-6289-9635
January 14, 2024
=======================================================================================================

§ INTRODUCTION

The Standard Model (SM) is a quantum field theory (QFT) of the electromagnetic, weak, and strong interactions <cit.>. It leaves out gravity. It inherently belongs to flat spacetime due to difficulties with the quantization of the curved metric <cit.> and difficulties also with the carriage of QFTs to curved spacetime <cit.>. Nevertheless, unlike full QFTs like the SM, flat spacetime effective QFTs can have a particular affinity with curved spacetime since they are nearly-classical field theories obtained by integrating out high-energy quantum fluctuations <cit.>, in the sense of both the Wilsonian and one-particle-irreducible effective actions <cit.>.

Viewed as an effective QFT, the SM must have a physical UV validity limit (not a regulator), say Λ_℘ <cit.>. It is the scale at which a UV completion sets in, and it can lie at any scale above a TeV according to the null results from the LHC searches <cit.>. It breaks Poincare (translation) symmetry explicitly, so that the loop momenta ℓ^μ lie in the range -Λ_℘^2 ≤ η_μν ℓ^μ ℓ^ν ≤ Λ_℘^2 in flat spacetime with Minkowski metric η_μν. Then, loop corrections to scalar mass-squareds contain terms proportional to Λ_℘^2. Likewise, the vacuum energy involves Λ_℘^4 and Λ_℘^2 terms <cit.>. In addition to these unnatural UV-sensitivities <cit.>, all of the gauge bosons acquire mass-squareds proportional to Λ_℘^2, and these loop-induced masses break gauge symmetries explicitly <cit.>. What would be more natural than to use the Higgs mechanism <cit.> to restore the gauge symmetries? Impeding this proposal is the radical difference between the intermediate vector boson mass (Poincare-conserving) <cit.> and the loop-induced gauge boson mass (Poincare-breaking) <cit.>. Indeed, in the former, in accordance with Poincare conservation, the vector boson mass is promoted to a scalar field, which leads to the usual Higgs mechanism <cit.>. In the latter, due to the Poincare-breaking nature of Λ_℘, it is necessary to find a Poincare-breaking Higgs field and employ the Higgs mechanism accordingly.
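For orientation — the following illustrative one-loop estimate is ours, not the paper's — the quadratic UV sensitivity at stake can be seen in the simplest example of a scalar self-energy with a hard momentum cutoff (Euclidean loop momentum):

```latex
% One-loop scalar mass shift with a hard cutoff (illustrative, Euclidean)
\delta m_\phi^2 \propto \lambda \int^{\Lambda_\wp} \frac{d^4 \ell_E}{(2\pi)^4}\,
\frac{1}{\ell_E^2 + m^2}
= \frac{\lambda}{16\pi^2}\left[\Lambda_\wp^2
 - m^2 \log\!\left(1 + \frac{\Lambda_\wp^2}{m^2}\right)\right]
```

which grows as Λ_℘^2, in line with the power-law corrections discussed above.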
In this regard, we construct a gauge symmetry restoration mechanism in which affine curvature <cit.> comes into play as the Higgs field promoting the UV cutoff Λ_℘, and condenses to the usual metrical curvature in the minimum of the metric-affine action <cit.> such that unnatural UV sensitivities get defused, gauge symmetries are restored, the emergence of gravity is enabled, and new particles are brought about that do not have to couple to the SM directly <cit.>. This affine condensation mechanismgives a bottom-up UV completion of the QFT with emergent gravity. We shall construct it in detail in Sec. <ref>.In string theory, the ground levels of both the open (D-brane) and closed strings are spinless imaginary-mass states. They are unstable. They are readily represented by a scalar tachyon as part of the corresponding string field theory <cit.>. There is, however, a problem in that the mass of the tachyon, in absolute magnitude, is of a similar size as those of the massive string levels, and the tachyon can therefore hardly give a low-energy effective field theory description of the massless string spectrum. In regard to this problem, we realize that actually the affine curvature in Sec. <ref> can give a proper description of the string ground level because it contains the usual massless graviton as part of the low-energy effective theory. As we show in Sec. <ref>, affine condensation leads to a picture in which open strings (D-brane) decay into closed strings and closed strings decay into not the vacuity but a minimum in which gravity emerges just as in Sec. <ref>. This top-down approach gives a stringy completion of the QFTs. In Sec. <ref>, we give a comparative discussion of the bottom-up and top-down approaches. In particular, we conclude that affine condensation may be the way string theory is linked to the known physics at low energies. In this section, we discuss also few future prospects. § BOTTOM-UP: EMERGENT GRAVITY COMPLETION OF QUANTUM FIELD THEORY In this section, following a bottom-up approach, we shall discuss the completion of the QFT in the UV via emergent gravity. In essence, gravity will emerge in a way that restores the gauge symmetries via the condensation of the affine curvature such that the affine curvature itself enters the game as a Higgs-like field promoting the loop-induced gauge boson masses namely the UV cutoff. §.§ Effective QFTLet us consider a generic renormalizable field theory of quantum fields ψ (the SM fields plus new particles) for generality. In flat spacetime, the effective QFT capturingthe physics of the full QFT at low energies μ≪Λ_℘ is described by the effective action <cit.>S[η, ψ; logμ, Λ_℘^2] = S_tree[η, ψ] +δ S_log[η,ψ; logμ] + δ S_pow[η, ψ; logμ, Λ_℘^2]in extensionof the dimensional regularization <cit.> to QFTs with a physical UV cutoff Λ_℘ <cit.>. In the effective action, the tree-level actionS_treesets symmetries, field spectrum, and interactions in the QFT. The logarithmic correctionδ S_log involves the renormalization scale μ (not the UV cutoff Λ_℘) and follows the structure of S_tree. The power-law correction δ S_pow = ∫ d^4x √(-η) {-c_O Λ_℘^4 - 2 M_O^2 Λ_℘^2 - c_ϕΛ_℘^2 ϕ^†ϕ + c_V Λ_℘^2tr[V^μη_μνV^ν] }involves the scalar fields ϕ and the gauge fields V_hμ with the color trace tr[…]. 
Here, the loop-induced factors c_O = (n_b-n_f)/64π^2 and M_O^2 = str[M^2]/64π^2 set the power-law corrections to the vacuum energy, with n_b (n_f) being the total number of bosonic (fermionic) degrees of freedom in the QFT, and M^2 denoting the mass-squared matrix of all the QFT fields, with the supertrace str[M^2] = ∑_J (-1)^{2J}(2J+1) M_J^2 over the particle spin J. The loop factor c_ϕ controls the quadratic corrections to scalar mass-squareds (c_ϕ(Higgs) ≈ 3h_t^2/4π^2). The loop factor c_V, on the other hand, governs the explicit breaking of the gauge symmetries (c_V(gluon) = 21g_3^2/16π^2 and c_V(hypercharge) = 39g_1^2/32π^2) <cit.>.

§.§ Taking effective QFT to curved spacetime

The matter fields ψ in (<ref>) are feebly-fluctuating quantum fields (nearly classical), and this ensures that the effective action S[η] can be smoothly carried into the spacetime of a curved metric g_μν via the general covariance <cit.> map η_μν→ g_μν, ∂_μ→∇_μ, in which ∇_μ is the covariant derivative with respect to the Levi-Civita connection ^gΓ^λ_μν = (1/2) g^λρ(∂_μ g_νρ + ∂_ν g_ρμ - ∂_ρ g_μν) so that ∇_α g_μν = 0. But, for the metric g_μν to be curved, the effective QFT in curved spacetime must involve the metrical curvature, such as, for example, the Ricci curvature R_μν(^gΓ). In classical field theories, the spacetime metric can be made dynamical (curved) by simply adding the requisite curvature terms (such as the Einstein-Hilbert term) with appropriate bare constants. In effective field theories, this is not possible. The reason is that effective QFTs have all their couplings generated or corrected by loop contributions, and adding any curvature term by hand contradicts the renormalized QFT structure. Indeed, the introduction of bare terms means that the curvature sector of the original QFT was left unrenormalized while the matter sector was renormalized. In the face of this contradiction, the curvature must arise from within the flat spacetime effective action (<ref>), not with some bare constants but with the loop factors c_i. In other words, curvature must arise via deformations of the existing loop corrections in δS_pow. Possible deformations are set by the commutator [∇_μ,∇_ν], as it generates curvature via its actions V^μ[∇_ν,∇_μ]V^ν = V^μ R_μν(^gΓ) V^ν on the gauge fields V^μ and ψγ^μγ^ν[∇_μ,∇_ν]ψ = -(1/2) g^μν R_μν(^gΓ) ψψ on the fermions ψ. This dimension-5 fermion contribution is irrelevant. Thus, deformations can occur only in the gauge sector. In fact, the structure of V^μ R_μν(^gΓ) V^ν suggests that the object to be deformed is the gauge boson mass term c_V Λ_℘^2 tr[V^μ η_μν V^ν]. To see this, it proves useful to start with the flat spacetime null deformation

S̃[η] = S[η] + ∫ d^4x √(-η) c_V tr[V^μ([D_μ,D_ν] - iV_μν) V^ν] = S[η],

where the added deformation term consists of the field strength tensor V_μν of the gauge fields, defined as [D_μ,D_ν]V^ν = iV_μν V^ν such that D_μ = ∂_μ + iV_μ is the gauge-covariant derivative, with V^μ being a vector in the adjoint. Obviously, the deformation integral in (<ref>) vanishes identically, and therefore the flat spacetime effective action S[η] remains untouched, S̃[η] ≡ S[η]. But, under the general covariance map in (<ref>), this null deformation gives rise to a curved spacetime action

S̃[g] = S[g] + ∫ d^4x √(-g) c_V tr[V^μ([𝒟_μ,𝒟_ν] - iV_μν) V^ν] = S[g] - ∫ d^4x √(-g) c_V tr[V^μ R_μν(^gΓ) V^ν],

whose second line develops the sought-for curvature tensor, as generated by the commutator [𝒟_μ,𝒟_ν]V^ν = (-R_μν(^gΓ) + iV_μν)V^ν of the curved spacetime covariant derivative 𝒟_μ = ∇_μ + iV_μ.
The curvature in (<ref>) ensures that spacetime is curved. But there is no gravity! There is no gravity because the Einstein-Hilbert term is missing in (<ref>). This imperative term, proportional to the curvature scalar g^μν R_μν(^gΓ), has to emerge from within the curved spacetime effective action (<ref>).Our goal is to find out how the Einstein-Hilbert term emerges from within the curved spacetime effective action S[g]. For this, it proves useful to contrast the two mass terms: M_I^2tr[I_μ I^μ] and c_V Λ_℘^2tr[V_μ V^μ]. In the former, M_I is the Poincare-conserving mass of an intermediate vector boson I_μ (like the W/Z bosons) and thus it can be promoted as M_I^2tr[I_μ I^μ] ⟼ (I_μ S)^† I^μ S ⊂ (D_μ S)^† D^μ Sto a Poincare-conserving Higgs scalar S for restoring the gauge symmetry with gauge-covariant derivative D_μ=∂_μ +i I_μ <cit.>. In the latter, however, Λ_℘^2 breaks the Poincare symmetry, and so it can be promoted to only a Poincare-breaking field (not the scalar S) for restoring the gauge symmetry <cit.>. In general, Poincare-breaking fields are expected to be related to the spacetime curvature since, in a general second-quantized field theory with no presumed symmetries, Poincare symmetry is known to emerge if the Poincare-breaking terms are identified with curvature <cit.>.Nonetheless, the Poincare-breaking field promoting Λ_℘^2 must be insensitive to the metric as it must remain nonzero in flat and curved spacetimes. One field that satisfies both of these conditions is the Ricci curvature ℝ_μν(Γ) of a general affine connection Γ^λ_μν. Indeed, Γ^λ_μνis independent of the metric g_μν but tends to ^gΓ^λ_μν in (<ref>) by affine dynamics, and, as a result, ℝ_μν(Γ) tends toR_μν(^gΓ). This dynamical nearing ensures that ℝ_μν(Γ) is the sought-for Poincare-breaking field, and the UV cutoff Λ_℘ can therefore be promoted as Λ_℘^2 g_μν⟼ℝ_μν(Γ)for restoring the gauge symmetries <cit.>. This map parallels the promotion in (<ref>) of the intermediate vector boson masses M_I to scalars Sfor restoring the gauge symmetries <cit.>. It makes the curved spacetime effective action S[g] in (<ref>) a metric-Palatini gravity theory <cit.>S[g,Γ]=S_tree[g, ψ] +δ S_log[g,ψ; logμ]+ ∫ d^4x √(-g) {-c_O/16ℝ^2-M_O^2/2ℝ -c_ϕ/4ϕ^†ϕ ℝ + c_Vtr[V^μ( ℝ_μν - R_μν) V^ν] }in which ℝ = g^μνℝ_μν(Γ) is the affine scalar curvature. Its coefficient M_O^2 must be positive and much bigger than any QFT scale since metric-Palatini dynamics will eventually turn it to the fundamental scale of gravity <cit.>M_O= M_Pland ascribe it this way to a new physical role compared to its definition in (<ref>) and role in (<ref>) in the flat spacetime.§.§ Emergent gravity from affine condensation Owing to its effective nature (nearly classical), stationary points of the metric-Palatini action (<ref>) are expected to give the extremal points corresponding to physical field configurations. Then, keeping the matter sector in its vacuum state (⟨ V_μ⟩ =0, ⟨ f⟩ =0, ⟨ϕ⟩/ M_Pl≈ 0), extremal values of Γ are derived from the stationarity condition δS[g,Γ]/δΓ^λ_μν = 0 ⟹^Γ∇_λ(ℚ^1/3g^μν) = 0in which the covariant derivative^Γ∇_λ is that ofΓ, and ℚ= M_Pl^2/2+c_O/8ℝequals the variation of -S[g,Γ] with ℝ in the vacuum.One solution of the motion equation(<ref>) is just ℚ=0. And it gives a constant affine curvature scalar -4 M_Pl^2/c_O from(<ref>). This is the extremal curvature that leaves the action (<ref>) stationary. 
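As a quick consistency check (our rederivation, using only the quadratic pieces of the action already given above): in the vacuum, the ℝ-dependent part of -S[g,Γ] is the potential

```latex
% R-dependent vacuum potential of -S[g,Gamma] (rederivation for clarity)
U(\mathbb{R}) = \frac{c_O}{16}\,\mathbb{R}^2 + \frac{M_{\rm Pl}^2}{2}\,\mathbb{R},
\qquad
\frac{dU}{d\mathbb{R}} = \frac{c_O}{8}\,\mathbb{R} + \frac{M_{\rm Pl}^2}{2}
 = \mathbb{Q} = 0
\;\Longrightarrow\;
\mathbb{R}_{\rm ext} = -\,\frac{4 M_{\rm Pl}^2}{c_O}
```

and d^2U/dℝ^2 = c_O/8, so this extremum is a maximum of -S precisely when c_O < 0 — the case invoked in what follows.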
It must be equal to 4 Λ_℘^2 since the affine curvature ℝ is after all the “Higgs field" promoting Λ_℘^2 as in (<ref>), and its extremal value must therefore regenerate the UV cutoff. But for this regeneration, -4 M_Pl^2/c_O must be positive, and this can happen only ifc_O<0 ⟹ n_b < n_frevealing that nature has more fermions than bosons. This negative c_O makes the ℝ–potential in (<ref>) maximum at the extremum-4 M_Pl^2/c_O>0. Thus, ℚ=0 solution with c_O<0 leads to the extremalaffine curvature scalarℝ_ max = - 4 M_Pl^2/c_O = 4 Λ_℘^2at which -S[g,Γ] attains its maximum. At this maximal curvature, it reduces to the curved spacetime action in (<ref>) and brings back, therefore, all the power-law UV corrections in (<ref>). This maximum is full of problems.One other solution of the motion equation(<ref>) occurs with ℚ≠ 0 and leads to the minimal configuration(Γ_ min)^λ_μν= ^gΓ^λ_μν + 1/6ℚ(∇_μδ^λ_ν +∇_νδ^λ_μ -∇^λ g_μν)ℚas the maximal configuration was already identified in (<ref>).Expanding (<ref>) up to 𝒪(M_Pl^-4) terms,the extremal affine curvature is found to be(ℝ_ min)_μν= R_μν(^gΓ) -c_O/24 M_Pl^2(g_μν + 2 ∇_μ∇_ν)Rwhere R=g^μνR_μν(^gΓ) is the metrical curvature scalar. In this minimum, the c_V part of (<ref>) reduces to∫ d^4x √(-g)c_Vtr[V^μ( zero +𝒪(M_Pl^-2)) V^ν]which is seen to vanish up to 𝒪(M_Pl^-2) terms. This nullification of the gauge boson mass terms ensures that all the gauge symmetries are restored in the minimum in (<ref>) up to doubly Planck-suppressed terms. In the minimum (<ref>), the metric-Palatini effective action (<ref>) gives rise to, up to𝒪(M_Pl^-2) terms, a contemperature of dimensionally-regularized QFT and R+R^2 gravity theoryS[g] = S_tree[g, ψ] +δ S_log[g,ψ; logμ] + ∫ d^4x √(-g){-M_Pl^2/2R - c_O/16 R^2 -c_ϕ/4ϕ^†ϕ R }up to doubly Planck-suppressed terms, including the remainder in (<ref>). This resultant action describes the emergence of the curvature sector via the promotion in (<ref>) of the UV cutoff to affine curvature and condensation in (<ref>) of the affine curvature to the metrical curvature. It turns out that the affine curvature condensation gives rise to both the gauge and gravitational interactions. In essence, the action S[g] is the curved spacetime image of the flat spacetime effective action (<ref>), and lays a gauge symmetry restoring emergent gravity framework, which we abbreviate as symmergent gravity <cit.>. Symmergence is the physics in the minimum. Its item-by-item comparison in Fig. <ref> with the physics at the maximum reveals that the QFT has transited from the problem-full maximum in (<ref>)to the physically viable minimum in (<ref>) via the affine dynamics in (<ref>). Symmergence has important physics implications. Firstly, the gravitational scale M_Pl comes out wrong in the SM given in its definition in (<ref>) via (<ref>). So,new particles with heavy bosons are necessary, but their couplings to the SM are not simply because they do not need any non-gravitational couplings to SM fields to saturate the mass sum rule in (<ref>). LHC searches imply Λ_℘ > 1TeV <cit.>, and this puts an upper bound of n_f-n_b < 3.75×10^33 from (<ref>). In flat spacetime, there is no scale like the Planck mass, so Λ_℘ can well be trans-Planckian, and in that case, one finds n_f-n_b < 630 for Λ_℘ > M_Pl.Secondly, symmergence allows the Higgs and other possible light scalars to remain light. 
Indeed, symmergence completes the QFT by converting the power-law UV corrections in (<ref>) to curvature terms in (<ref>) so light scalars can be destabilized only by large logarithms in δ S_log of heavy new fields Ψ. But,the logarithmic shift in the Higgs mass δ m_h^2 ∝λ_hΨ M_Ψ^2 logM_Ψ^2/μ^2 does not have to destabilize the SM sinceHiggs coupling λ_hΨ to new fields Ψ is allowed to be small in symmergence. In fact, λ_hΨ is allowed to even vanish with no harm to the workings of the symmergence. In general, couplings of the size λ_h Ψ≲ m_h^2/m_Ψ^2 stabilize δ m_h^2 and render the Higgs boson natural <cit.>. Also important is that the new fields can form a natural dark sector with none to weak couplings to the SM. They can even form a fully decoupled black sector in agreement with the current dark matter searches <cit.>. In this regard, dark/black stars and galaxies become a possibility.Thirdly, symmergence implies that the Universe starts flat. Gravity emerges afterward from the quantum fluctuations of the matter fields. The vacuum energy from δ S_log in (<ref>)δ V= 1/32π^2 str[M^4 (1-3/2logM^2/μ^2)]is of Planckian size as implied by the definition in (<ref>) of the Planck scale, and gives rise to a solution R(g)∼ M_Pl^2 via the Einstein field equations from (<ref>). The maximum and minimum in Fig. <ref> then get split in energy by Λ_℘ - M_Pl∼Λ_℘ so that the gravity emerges within a time about Λ_℘^-1. This symmergent evolution can be described by an FRW Universe with a quasi-linear scale factor such that the δ V in (<ref>) sets the initial value of the scale factor <cit.>. This FRW description holds initially and gets modified in time with higher-derivative contributions from the quadratic curvature term. This symmergent cosmology sharply contrasts quantum cosmology <cit.> in that the Universe starts flat (no gravity) in the former and curved (quantum gravity) in the latter. Also, the classical gravity phase is attained in a time around Λ_℘^-1 in the former and M_Pl^-1 in the latter. Finally, the symmergent gravity action (<ref>) gives an intertwined description of the renormalized QFT and classical gravity. The two sectors are compatible since quantum fluctuations have been integrated out in flat spacetime, and gravity has emerged upon it. In spite of this concord, the two sectors are still incompatible in the face of small remnant fluctuations of the quantum fields. Nevertheless, given the Einstein field equations following from(<ref>), fluctuations in quantum fields can be translated as stochastic fluctuations in the spacetime metric and, thanks to the loss of reversibility this way, the two sectors can be regarded to attain a certain degree of compatibility <cit.>.§ TOP-DOWN: EMERGENT GRAVITY DEVOCATION OF STRING INSTABILITY Flatness at the start ensures that the progenitor of the whole enchilada can be structured and understood as a quantum object. One such object would be an open string of length scale Λ_℘^-1 ending on D-branes <cit.>. This is because open strings with Dirichlet boundary conditions break the Poincare symmetry. But non-supersymmetric open and closed strings do fatefully decay because their ground states are imaginary-mass, spin-zero states lying below even the massless states like the photon <cit.>. Indeed, for a string with tension T=Λ_℘^2, the ground level acquires a mass M^2 = -2πΛ_℘^2 and M^2 = -8πΛ_℘^2 for open and closed strings, respectively. 
Energetically, therefore, open strings and the D-branes they are attached to are expected to decay into closed strings <cit.>. The decays and interactions of strings are studied in the framework of the equivalent string field theory <cit.>. In this regard, tachyonic scalars like the Higgs field <cit.> are tailor-made for the spinless, imaginary-mass string ground level. In fact, the Schrodinger equation for the imaginary-mass string ground state is isomorphic to the Klein-Gordon equation for a classical tachyonic scalar <cit.>. But there is a serious problem with this tachyonic string field theory: the mass of the string ground level, in absolute magnitude, is at the same scale as the second, third, and even higher excited levels of the string, and the construction of a low-energy effective field theory for the tachyon deadlocks as it mixes with the massive excited states of the string <cit.>. Indeed, as a toy model collecting results of different constructions <cit.>, the minimal tachyon action

S[η,𝕋] = ∫ d^D x √(-η){ (1/2) η^μν ∂_μ𝕋 ∂_ν𝕋 + πΛ_℘^2 𝕋^2 + c̃_O 𝕋^4 + c̃_V tr[V_μ V^μ] 𝕋^2 + ℒ_mix(𝕋, χ_≥2) }

inevitably contains an interaction term ℒ_mix(𝕋, χ_≥2) with the second and higher string levels χ_≥2. The c̃_V term couples 𝕋 to the massless vectors V_μ in the first excited level of the open string. In the vacuum, ⟨V_μ⟩ = 0, so that -S[η,𝕋] is maximized at 𝕋_max = 0 and minimized at 𝕋^2_min = -Λ_℘^2/2c̃_O, provided that c̃_O < 0 and provided also that ℒ_mix(𝕋, χ_≥2) is negligible. The transition from 𝕋_max to 𝕋_min gives a dynamical representation of the open string (D-brane) instability. The vacuum energy drops from zero at the maximum to about -Λ_℘^D in the minimum, and the minimum contains only closed strings, with no open string excitations left <cit.>. These closed strings, too, decay into final states not containing closed string excitations, in particular gravity <cit.>. These extrema and decays are possible only if |ℒ_mix(𝕋, χ_≥2)| is negligibly small, but this smallness is not attainable. It is in this sense that the tachyon seems to fail in furnishing a low-energy effective field theory for the string.

The difficulty with the tachyon entails a natural question: Can the string ground level be represented by a massless unstable scalar? If yes, can the massless string excitations develop a low-energy effective action comprising gravity and matter? The answer is yes. Actually, the answer is symmergence, because the affine scalar curvature in symmergence can well be the unstable scalar representing the string ground level. To see this, one first observes that a general affine connection Γ^λ_μν transforms as a tensor (connection) in flat (curved) spacetime and has, in general, nothing to do with the metric tensor. Its curvature, the Ricci curvature ℝ_μν(Γ) for instance, remains a genuine tensor field in both flat and curved spacetimes <cit.>. In fact, mimicking the tachyon action in (<ref>) and remaining parallel to the symmergent metric-affine action in (<ref>), the affine curvature can be assigned the action

S_OS[η,ℝ] = ∫ d^Dx √(-η) { -c_O/16 ℝ^2 - M_O^2/2 ℝ + c_OV tr[V_μ V^μ] ℝ }

in the D-dimensional flat spacetime of the open string (D-brane). Here, ℝ = η^μν ℝ_μν(Γ) is a mass dimension-two scalar acting as the Higgs-like field promoting the mass-squared of the string ground level. In essence, the affine action in (<ref>) is a replacement for the tachyon action in (<ref>) since, after all, it involves the scalar ℝ in place of the tachyon 𝕋.
They differ from each other by the absence of the mixing term ℒ_mix(𝕋, χ_≥2), and the reason for this is that ℝ is not a canonical massive scalar and, as will be seen in the sequel, ℝ_μν(Γ) consists of massless modes. The massless vector V_μ in the first excited level of the open string must be present in the low-energy effective action and, for this to happen, V_μ must couple to ℝ; such a coupling is given at the lowest order either by the c_OV term in (<ref>) or by the V^μ ℝ_μν(Γ) V^ν term in symmergence, since the gauge kinetic term is independent of the connection. In the vacuum, ⟨V_μ⟩ = 0 and -S_OS[η,ℝ] gets extremized at ℝ_max = -4M_O^2/c_O. This extremum becomes a maximum for M_O^2 > 0 and c_O < 0 and, being an unstable ℝ configuration, it gives a representation of the open string ground level provided that c_O and M_O^2 satisfy the relation ℝ_max = -4M_O^2/c_O = 2πΛ_℘^2. As was already discussed while deriving (<ref>) in Sec. <ref>, ℝ = ℝ_max is the solution of the equation ℚ = M_O^2/2 + c_O ℝ/8 = 0, where ℚ is what controls the motion equation ^Γ∇_λ(ℚ^{1/3} η^μν) = 0. The other solution, with ℚ≠ 0, would lead to ℝ = 0 in flat spacetime, as revealed by equation (<ref>) in Sec. <ref>, and this means complete destruction of the open string (D-brane). But before this destruction occurs, closed strings come into play as decay products. The massless dilatonic scalar ϕ and the massless spin-2 field h_μν in the first excited level of closed strings modify the massless spectrum. The main effect of h_μν is to generate a curved metric η_μν → η_μν + h_μν = g_μν, in the same spirit as the general covariance map in (<ref>). Inclusion of these massless fields modifies the open string action (<ref>) to give it the closed string form

S_CS[g,ℝ] = ∫ d^Dx √(-g) { -c_C/16 ℝ^2 - M_C^2/2 ℝ - c_ϕ/4 ϕ^†ϕ ℝ + (c_CV ℝ + c̃_CV R) tr[V_μ V^μ] },

in which ℝ = g^μν ℝ_μν(Γ) is the affine scalar curvature, R = g^μν R_μν(^gΓ) is the metrical scalar curvature, and c_C, M_C^2, c_CV and c̃_CV are analogs of the open string constants c_O, M_O^2, c_OV in (<ref>). The new c̃_CV term brings in the metrical curvature R and ensures this way that g_μν is a curved and invertible metric. The new c_ϕ term, on the other hand, couples ℝ to the massless scalars ϕ. There is no ℝR term because of the ghosts <cit.>. The instability of the closed string is the instability of the entire spacetime; hence its endpoint would be vacuous of gauge, gravity, and matter in the absence of ℝ, and in this sense, metrical terms like R, R^2 and Rϕ^†ϕ are avoided, as they can induce gravity by themselves.

In the vacuum, in which ⟨V_μ⟩ = 0 and ⟨ϕ^†ϕ⟩ ≪ M_C^2, the action -S_CS[g,ℝ] gets extremized at ℝ_max = -4M_C^2/c_C = 8πΛ_℘^2, assuming, for simplicity, similar tensions for the closed and open strings. This extremum becomes a maximum for M_C^2 > 0 and c_C < 0 and, as an unstable configuration, gives a representation of the closed string ground level. In the language of (<ref>), this is the ℚ = 0 solution with ℚ = M_C^2/2 + c_C ℝ/8. The other solution, with ℚ≠ 0, as revealed by equation (<ref>) in Sec. <ref>, leads to a nontrivial solution ℝ_min = R + 𝒪(M_C^-2) in the curved spacetime of the metric g_μν. This solution sets up a minimum at Γ = Γ_min following the maximum at Γ = Γ_max.
Its replacement in the gauge part of (<ref>) leads to the restoration of gauge symmetries up to M_C^-2 order if c_CV and c̃_CV are related as c̃_CV = -c_CV. In this minimum of the restored gauge symmetry, the rest of the action (<ref>) reduces to

S_CS[g] = S_QFT[g,ψ] + ∫ d^Dx √(-g){ -M_C^2/2 R - c_C/16 R^2 - c_ϕ/4 ϕ^†ϕ R }

after explicating the effective action S_QFT[g,ψ] of the massless scalar, vector and spinor string excitations ψ. The structure of gravity and matter in four dimensions depends on how these massless excitations propagate in the (D-4)–dimensional extra space. In case they depend only on the four non-compact dimensions, S_CS[g] conforms to the symmergent gravity action S[g] in (<ref>) with the following parametric relations

M_Pl^2 = V_D-4 × M_C^2, c_O = V_D-4 × c_C, c_ϕ = c_ϕ (the scalar coupling carries over unchanged),

where V_D-4 is the volume of the extra-dimensional space. This affine curvature condensation mechanism is summarized in Fig. <ref>. As the figure shows, open strings (D-brane) end with closed strings, and closed strings end with not the vacuity but the stringy symmergent gravity in (<ref>). In this regard, symmergence seems to be the key mechanism in both the bottom-up and top-down directions. It links string theory to the known physics at low energies via the relations (<ref>). It possesses the salient features below:

* Firstly, fermions can be coupled to the open string action (<ref>) or the closed string action (<ref>) via the affine connection Γ^λ_μν in their spin connections. This fermionic extension is not meant to introduce supersymmetry, though imaginary-mass ground levels exist also in realistic superstrings <cit.>.

* Secondly, the stringy symmergent action (<ref>) is an R+R^2 gravity theory like the symmergent action (<ref>). It contains a scalar graviton having a mass around Λ_℘. This mode, like the tachyon, can mix with the massive string modes χ_≥2. It cannot, therefore, be part of the low-energy spectrum. The action (<ref>) contains a massless spin-2 graviton, too. This massless graviton and the other massless fields in S_QFT[g,ψ] form a consistent low-energy effective field theory. It is in this sense that the affine condensation does better than the tachyon condensation.

* Thirdly, the string ground-level actions in (<ref>) and (<ref>) are essentially toy models illustrating the main physics implications. They are nevertheless robust in that they are structured by eyeing the tachyon action (<ref>). They therefore give information about what kind of string field-theoretic structures are to be considered in describing the ground-level instability <cit.>.

* Lastly, in this novel picture of affine curvature condensation, string theory is linked to physics at low energies (the SM plus additional particles) not only via the compactification of the extra dimensions but also via the emergence of gravity and gauge symmetries. It seems that emergence could be the way string theory accesses the real world.

§ CONCLUSION

This work is composed of two parts. In the first part, we followed a bottom-up approach to complete effective QFTs in the UV via affine curvature condensation. This has led to the restoration of gauge interactions, the emergence of gravity, and the renormalization of the underlying QFT. This mechanism, the symmergent gravity, intertwines gravity and quantum fields, with the introduction of new particles that do not have to couple to the SM directly, and with the allowance of suppressed couplings to heavy fields, enabling stabilization of the scalar masses.
In this picture, gravity emerges upon flat spacetime effective QFT, and, in a cosmological setting, the Universe starts out flat where the gauge forces and gravity take shape in a time set by the UV cutoff. This symmergent gravity frameworkcan be probed via cosmic inflation <cit.> and reheating; black holes <cit.>, wormholes and neutron stars; and collider <cit.> and beyond-the-collider <cit.>experiments.In the second part, we followed a top-down approach to reveal that the instabilities in the ground levels of the open and closed strings are better modeled by affine scalar curvature rather than the tachyon.We have provided an affine curvature condensation picture of the open string (D-brane) decay followed by the closed string decay. Here, the main novelty is that the closed string ground state is described by the symmergent gravity, with model parameters coming from strings, not the QFT.It may be that the string theory is linked to the known physics at low energies not only by the compactification of extra dimensions but also by the emergence of gravity along with the gauge interactions. This top-down approach gives a stringy completion of the QFTs. For future prospects, one research direction would be the explicit construction of the string field theory that leads to the stringy metric-affine dynamics as a generalization presumably of the earlier works <cit.>. Under symmergence, such a string field-theoretic construction has the potential to determine the string parameters from the low-energy experimental bounds (via for example the quadratic-curvature coefficient c_O involving n_B-n_F). Another direction would be the analysis of the very early Universe where the usual quantum gravity phase is replaced by flat spacetime <cit.>. Yet another direction would be a test of the symmergence (including the stringy construction) in astrophysical compact objects like neutron stars. There are actually various directions to investigate. In summary, with the realization especially of the stringy completion, symmergent gravity can form a new framework for investigating various astrophysical, cosmological, and collider phenomena <cit.>.§ ACKNOWLEDGEMENTSI am grateful to Eric Ling for bringing Ref. <cit.> to my attention. I thank Orfeu Bertolami, Patrick Das Gupta, Canan Karahan, Eric Ling,Sebastian Murk, Ali Övgün, Beyhan Puliçe, Ozan Sargın, and Kai Schwenzer for fruitful discussions on different aspects of this work.
http://arxiv.org/abs/2312.16270v1
{ "authors": [ "Durmuş Demir" ], "categories": [ "hep-th", "gr-qc", "hep-ph" ], "primary_category": "hep-th", "published": "20231226132252", "title": "Emergent Gravity Completion in Quantum Field Theory, and Affine Condensation in Open and Closed Strings" }
Multimodal Sarcasm Understanding (MSU) has a wide range of applications in the news field, such as public opinion analysis and forgery detection. However, existing MSU benchmarks and approaches usually focus on sentence-level MSU. In document-level news, sarcasm clues are sparse or small and are often concealed in long text. Moreover, compared to sentence-level comments like tweets, which mainly focus on only a few trends or hot topics (e.g., sports events), content in the news is considerably diverse. Models created for sentence-level MSU may fail to capture sarcasm clues in document-level news. To fill this gap, we present a comprehensive benchmark for Document-level Multimodal Sarcasm Understanding (DocMSU). Our dataset contains 102,588 pieces of news with text-image pairs, covering 9 diverse topics such as health, business, etc. The proposed large-scale and diverse DocMSU significantly facilitates the research of document-level MSU in real-world scenarios. To take on the new challenges posed by DocMSU, we introduce a fine-grained sarcasm comprehension method to properly align the pixel-level image features with word-level textual features in documents. Experiments demonstrate the effectiveness of our method, showing that it can serve as a baseline approach to the challenging DocMSU. Our code and dataset are available at https://github.com/Dulpy/DocMSU.

§ INTRODUCTION

Sarcasm is a form of verbal irony that often uses positive words to convey a negative message, such as frustration, anger, contempt and even ridicule <cit.>. In real-world cases, a piece of sarcastic news often lacks explicit linguistic markers, and thus requires additional cues to reveal the true intentions. The accompanying visual information provides helpful cues to better perceive ironic discrepancies. Multimodal sarcasm <cit.> is omnipresent in social media posts, forum discussions, and product reviews, and hence multimodal sarcasm understanding is of great significance for a wide range of applications in the news field such as sentiment analysis <cit.>, fake news detection <cit.>, and public opinion analysis.

Figure <ref> illustrates a piece of multimedia sarcastic news. To understand this sarcastic news, a model must capture textual cues from multiple sentences, including “fire fighting” and “cause danger”. The accompanying visual cue, fire on a fire extinguisher in the figure below, plays an important role in this sarcasm. The figurative and creative nature of such multimodal sarcasm poses a great challenge to the effective perception of the true intention under the guise of an overtly positive surface spread over a whole document and an image. This requires the design and development of document-level multimodal sarcasm understanding (MSU) methods that specifically take the characteristics of such ironic expression into consideration.

Prior research has underscored the critical significance of utilizing extensive, high-quality, and challenging benchmarks for the development and evaluation of state-of-the-art deep learning methods across various natural language processing (NLP) tasks <cit.>. In this context, existing sarcasm benchmarks <cit.> have demonstrated considerable promise. However, when addressing document-level multimodal sarcasm understanding in the real-world news domain, they exhibit certain limitations, including (1) Limited length of text.
In real-world scenarios, a piece of news may include more than 70 words across multiple sentences <cit.>, concealing ironic discrepancies beyond sentence boundaries. However, samples in existing multimodal sarcasm datasets <cit.> only include about 20 words within a single utterance on average, which greatly simplifies the challenges of MSU in real-world cases. (2) Limited quality of annotations. Existing large satirical datasets <cit.> are mostly generated by bootstrapping algorithm or remote supervision with noisy labels. These annotations can be disruptive to systems that harness such data for downstream applications due to the subtle nature of sarcasm. Furthermore, these datasets only contain text modality. (3) Very limited number of samples. As sarcasm lacks explicit linguistic or visual markers, a model requires a large volume of samples to learn the rules or ways that reveal the true underlying intentions. A large-scale dataset benefits the generalization capability of an MSU model that alleviates the over-fitting issue during the training procedure.The aforementioned limitations in existing datasets highlight the need for a comprehensive, challenging, and higher-quality document-level multimodal sarcasm dataset to enhance irony understanding in the domain of news. Towards that, we developed DocMSU, a comprehensive benchmark that contains high-quality annotations of 102,588 pieces of news with text-image pairs, covering 9 hot topics such as science, business, and sports. We collect these samples from social websites, including “New York Times” and “UN News”, each involving 63 tokens across 5 sentences on average. To alleviate the ambiguity of sarcasm, we manually annotated these documents and images in 3 rounds with 15 workers, ensuring the annotation quality with confidence scores. Each pair of text-image involves a binary label for sarcasm detection, 2.7 textual spans and visual bounding boxes on average for sarcasm localization.The proposed DocMSU facilitates the research of multimodal sarcasm perception for real-world applications. It also introduces two new challenges: (1) capturing the nuanced sarcastic clues in two modalities, where the clues are concealed within very few words in a document or a tiny area in an image;(2) aligning the visual and linguistic features for irony understanding, where the incongruity nature of sarcasm requires cross-modal interactions. To fill this gap, we propose a novel sarcasm comprehension method that aims to fuse the pixel-level image features with the word-level textual features of a whole document in a fine-grained manner. Experimental results show the effectiveness of our method. We will release our dataset and the code. The main contributions of our work can be summarised as follows: * We curate DocMSU, a new benchmarkfor document-level multimodal sarcasm understanding in the real-world news field. Compared with existing ones, our dataset is more comprehensive and more challenging with much higher quality annotations. * We come up with a novel document-level MSU method for sarcasm detection and localization, mitigating the issues in sarcastic cues detection across sentences and across modalities under inconsistent context. * We conduct extensive experiments on our DocMSU. 
Results show that the created benchmark enables us to develop and evaluate various deep learning methods for the task of MSU closer to real-world applications.

§ RELATED WORK

Datasets: Existing sarcasm datasets are mainly collected from Twitter and Reddit and can be roughly categorized into text-based ones <cit.> and multimodal ones <cit.>. The text-based datasets suffer from noisy labels caused by remote supervision. The most related to our work is MSTI <cit.>. However, the texts in MSTI only contain 20 tokens on average, which may not well reflect the challenges in the news field. Compared to the existing sarcasm datasets, our DocMSU provides more samples, much longer texts, and higher-quality annotations towards sarcasm understanding in the practice of the real-world news field. Detailed comparisons are available in Table <ref>.

Methods: Early studies of sarcasm understanding were based on statistical patterns <cit.> and deep learning techniques such as word embeddings and LSTM/CNN <cit.>. The recent MSTI leverages pre-trained BERT and ResNet to extract cross-modal features <cit.>. Some powerful methods such as CLIP <cit.> and VILT <cit.> rely on contrastive learning and Transformers to learn multimodal representations. Different from the above methods, our model aims to comprehend the fine-grained, nuanced sarcastic clues in two modalities, where the clues reside within very few words in a document or a very tiny area in an image.

§ THE DOCMSU DATASET

We present DocMSU, a new benchmark that contains high-quality annotations of 102,588 pieces of news with text-image pairs in 9 hot topics.

§.§ Data Collection

We crawl data from famous news websites such as “New York Times”, “UN News”, “The Onion” and “NewsThump”. To avoid regulation issues, we discard news that includes sensitive topics such as pornography and violence. Finally, we collect more than 70,000 pieces of news that consist of titles, abstracts, images, and news bodies, where each sample is generated by combining a news title, the abstract, and the image. Each sample involves 63 tokens across 5 sentences on average. We categorize these data into 9 groups such as “science”, “health”, and “business”, and each group involves 10 visual object types such as “building”, “animal” and “art”. We use an open-source tool, doccano <cit.>, for textual and visual annotations, and 15 volunteers participated in the work. We mix up different categories of samples before the annotation, allowing each annotator to randomly access news.

§.§ Annotation Process

During the annotation procedure, we give a binary tag to each document-image pair to indicate whether it is an ironic message. For a piece of sarcastic news, we further mark the sarcastic clues, including the textual span in the document and the bounding box in the image. However, we face two challenges in such an annotation procedure.

* Lacking explicit linguistic and visual markers in a sample. An annotator may not be able to accurately understand the sarcasm in some news titles, images and the corresponding abstracts, as proper background knowledge may be required to understand the sarcasm.

* Annotation variance caused by the subjective nature of perceiving sarcasm. As irony is always conveyed in a subtle way, whether in a document or an image, the perception of sarcastic clues varies across annotators.

For the first issue, we ask the annotator to refer to the news body to better understand the context.
By doing so, the annotator is able to give a more accurate binary label, as well as sarcastic clues, including the textual spans in the document and the bounding box in the image. Regarding the second issue, we have 3 annotators for each sarcastic sample, with a scoring mechanism. We use Intersection-over-Union (IoU) to quantify the similarity between two annotations. A similarity score between two annotations is defined as the sum of the textual IoU (TIoU) and the visual IoU <cit.>. TIoU is defined as follows, where S refers to the text span labeled by an annotator, r is the index of the annotator, and i and j indicate the positions of the beginning and the end of the sarcastic span, respectively:

TIoU = (min(S_r-1[j], S_r[j]) - max(S_r-1[i], S_r[i])) / (max(S_r-1[j], S_r[j]) - min(S_r-1[i], S_r[i])).

For each annotation, we obtain two similarity scores with the other two annotations. Their sum is defined as the confidence score of this annotation. The annotation with the highest confidence score is selected in our DocMSU. Due to the subtle nature of sarcasm, there are some samples whose ironic clues can hardly be distinguished. For these samples, we observe that all three confidence scores are much smaller than those of other samples, and these samples take up about 5 percent of the data. Hence, we ask the annotator who achieves the overall highest confidence score among the 15 volunteers in the whole annotation procedure to further label these “challenging” instances. We also use GPT-3.5 to augment the text data and discard instances that may include sensitive information.

§.§ Dataset Analysis

Figure <ref> details the statistics of our DocMSU. Figure <ref> shows the percentage of the 9 topics, “Science”, “Health”, “Sport”, “Technology”, “Entertainment”, “Education”, “Business”, “Environment”, and “Politics”, where the “Environment” topic is most popular and takes the largest portion, 22.16%. Figure <ref> illustrates the distribution of sarcastic samples and non-sarcastic ones in each topic. In total, our benchmark contains 34,130 sarcastic samples and 68,458 non-sarcastic ones. Figure <ref> shows the distribution of visual object types in each topic, where multiple types of visual objects enrich the features for sarcasm understanding. We have 10 object types in our DocMSU. A sample contains 2.7 labeling targets on average, which are sarcastic clues, including textual spans in a document and bounding boxes in an image[We provide more details in Appendix: DocMSU Annotation Pipeline, including annotation user interface, data samples, etc.; appendices are available in the preprint version.].

§ PROPOSED METHOD

§.§ Motivation

Figure <ref> shows two examples selected from our DocMSU. The first example highlights the irony in National Geographic's practices, as they extensively employ plastic packaging despite advocating against the overuse of plastic. The second example exposes the contradictions within Netflix's service delivery: despite boasting about their discs being “unbreakable” and implying exceptional durability, customers received damaged discs, contradicting the advertised durability.

A model faces two new challenges in understanding the two examples above: (1) Capturing the nuanced sarcastic clues that are concealed within very few words (e.g., “a broken disc”) in a document or in a very tiny area (e.g., the tag “unbreakable”) of the image. (2) Aligning the visual and textual features for accurate irony understanding (e.g., “unbreakable” and the broken disc).
Existing approaches such as the recent MSTI <cit.>, CLIP, and VILT have limitations in tackling these two challenges, as they focus more on learning the overall information of the whole text and image representations. This motivates us to develop a new method to capture the fine-grained linguistic and visual sarcastic clues and to align the two different types of clues for better MSU.

§.§ Overview

Figure <ref> illustrates the architecture of our model, which mainly consists of three components: a document encoder, an image encoder, and a fusion module. To capture the underlying subtle clues concealed within very few words in a document and a very tiny area in an image, our model generates two matrices for pixel-level image representations and token-level document representations. For the cross-modal interactions, we fuse the representations in the two matrices with a sliding window for multimodal alignment. We explore the specifics of this design in the following sections.

§.§ Document Encoder

We denote a document as s = {w_i}_i=1^n, where w_i indicates the i-th token and n is the total number of tokens. We use BERT <cit.> to output contextualized token-level representations μ∈R^{n×d}, where

μ = [ν_1, ν_2, ..., ν_n] = BERT(s).

We use a fully connected layer f_c to transform the word representations. Then, we convert the document representation into a square shape ϖ∈R^{L×L×d},

ϖ(i,j,:) = (f_c(μ))_{L×(i-1)+j},

where 1 ≤ i,j ≤ L. We add paddings when n < L×L. This square document representation is used for the fine-grained alignment with the pixel-level visual representations.

§.§ Image Encoder

To keep the high spatial resolution of the feature maps and retain the information of the image details, we only use the first three convolution layers of ResNet <cit.>. In this way, we keep the original resolution of the images. Then, we use a projection layer f_p to generate the visual representation of each pixel. Finally, the image feature map is spatially divided into m sliding windows, with L×L pixels in each window. The representation of the entire image is thus

ω = [ω_1, ω_2, ..., ω_m] = f_p(ResNet(σ)),

where σ is the input image and ω_k∈R^{L×L×d} denotes the k-th window.

§.§ Multimodal Sarcasm Fusion

During multimodal fusion, we add the document representation ϖ to each window ω_k. The result of the addition is denoted as ω̂_k,

ω̂_k = ϖ + ω_k, k = 1, 2, ..., m.

We apply the four stages of the Swin-Transformer <cit.> to deeply fuse the two modalities. Specifically, each stage contains one patch merging layer and several blocks with shifted window attention, which calculates the attention between the elements within each shifted window at low computational complexity. In this way, interactions are built between each word of the document and each image pixel without additional computation. The output delivers fine-grained multimodal features that can be applied to sarcasm understanding.

§ EXPERIMENTS

§.§ Evaluation Tasks

To evaluate our model, we perform two MSU tasks, i.e., sarcasm detection and sarcasm localization.

Sarcasm detection: Sarcasm detection aims to identify whether visual or verbal irony exists in the given sample. This task can be formulated as a binary classification problem.
Sarcasm localization: Sarcasm localization aims to find the sarcastic clues or objects in a document with textual spans, as well as in the paired image with bounding boxes.

§.§ Implementation Details and Settings

We employ the pre-trained uncased BERT-base <cit.> as the text encoder. For sarcasm localization, we use a linear layer to predict whether a word token is sarcastic in the text, and employ YoloX <cit.> as the head network to output the bounding box of the sarcastic object or region. For sarcasm detection and textual sarcasm localization, we use the binary cross-entropy loss function. For visual sarcasm localization, we employ the CIoU loss function <cit.>. We train our model with a single NVIDIA RTX 3090 GPU. The learning rate is set to 0.001 and 0.01 for sarcasm detection and localization, respectively. We employ AdamW <cit.> as the optimizer. The dataset is randomly split into 70%, 20%, and 10% for training, validation, and testing. The Swin-Transformer has three settings: Tiny, Small, and Base <cit.>. For the baseline, we configure the Swin-Transformer with the Tiny setting for sarcasm localization and Base for the detection task, as these two settings perform best among all three in the corresponding tasks. More details are available in Appendix: Implementation Details and Settings. We repeat experiments 5 times with different random seeds and report both mean and variance values.

§.§ Evaluation Metrics

For sarcasm detection and sarcasm localization in images, we follow <cit.> and <cit.> to use average precision (AP) and F1 scores for evaluation, respectively, including AP_50, AP_60, F1_50 and F1_60. For textual sarcasm localization, Exact Match (EM) <cit.> is usually employed to measure the prediction accuracy, defined as the number of correct predictions that strictly (100%) match the boundaries of the annotations divided by the total number of predicted samples. However, as shown in Figure <ref>, the original EM is too strict to reflect the prediction accuracy. Therefore, we introduce three new evaluation metrics as follows:

* EM_50, EM_70: We use EM_50 and EM_70 to relax the standard EM. They are defined as the number of predictions that overlap with more than 50% and 70% of the annotation, respectively, divided by the total number of predicted samples. The original EM can be seen as EM_100.

* BitError: BitError is the ratio of wrongly classified tokens to the total number of tokens in a document sample.

§.§ Sarcasm Detection Results

In this paper, we compare our method with BERT-base (text-only) <cit.>, Swin-Transformer (image-only) <cit.>, CLIP <cit.>, the Vision-and-Language Transformer (ViLT) <cit.>, and CMGCN <cit.>, which detects sarcasm via object types. For CLIP and ViLT, we first concatenate the global image and text features and then perform binary classification. Our model employs the three settings of the Swin-Transformer to extract image representations. As shown in Table <ref>, our method with the Base Swin-Transformer achieves the best accuracy, demonstrating its superiority in sarcasm detection. Because the single-modal BERT-base and Swin-Transformer do not comprehensively exploit the image and text information, they only achieve suboptimal results. Moreover, the proposed method also outperforms the multimodal CLIP and ViLT models. This is because our method is based on more fine-grained visual signals, and the sliding-window-based Transformer mechanism can better capture the sarcasm clues.
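For concreteness, the span-level metrics used throughout — the token IoU underlying the annotation agreement and the relaxed EM and BitError scores introduced above — can be sketched as follows. This is our minimal, illustrative reading of the definitions, not the authors' released code; in particular, we interpret “overlapping with more than k% of an annotation” as token-level IoU ≥ k.

```python
def span_iou(pred, gold):
    """Token-level IoU between two spans given as (start, end) indices."""
    inter = min(pred[1], gold[1]) - max(pred[0], gold[0])
    union = max(pred[1], gold[1]) - min(pred[0], gold[0])
    return max(inter, 0) / union if union > 0 else 0.0

def em_at_k(preds, golds, k=0.5):
    """EM_k: fraction of predicted spans with IoU >= k (k=1.0 is strict EM)."""
    return sum(span_iou(p, g) >= k for p, g in zip(preds, golds)) / len(preds)

def bit_error(pred_labels, gold_labels):
    """BitError: fraction of tokens whose sarcastic/non-sarcastic label is wrong."""
    wrong = sum(p != g for p, g in zip(pred_labels, gold_labels))
    return wrong / len(gold_labels)
```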
More detailed comparisons are available in Appendix: Analysis of Swin-Transformer under different settings.§.§ Sarcasm Localization ResultsFor the sarcasm localization, we additionally include the sentence-level Multimodal Sarcasm Target Identification (MSTI)<cit.> method, which aims at finding sarcasm clues in tweets. The experimental results are shown in Table<ref>.Our method outperforms those existing methods in both visual and textual sarcasm localization in terms of AP-, F1- and EM-based metrics.For example, in textual sarcasm localization, our method (SB) surpasses CLIP, ViLT, and MSTI by 12.85%, 7.20%, and 8.10%, respectively, in terms of EM_50. This shows the effectiveness of the proposed method in localizing nuanced clues in images or long text, and also implies the meaningfulness of collecting such a document-level benchmark. For textual sarcasm localization, we also observe that our method (SS) is the second best and performs slightly lower than MSTI by 2.13 points in terms of BitError. Nevertheless, our method (SS) significantly outperforms MSTI in terms of EM-based metrics, e.g., 5.81 and 7.20 points respectively in terms of EM and EM_50. The results suggest that our method can achieve a good balance between coverage and precision during the localization. We will further investigate such an interesting finding in the future. We also provide a case study to visually demonstrate how our model performs multimodal sarcasm localization. Due to the space limitation, we give such an illustration in Appendix: Case Study. §.§ Attention Visualization Figure <ref> depicts the attention map generated by our method. Our model focuses more on the fire extinguisher after four alignment stages, as discussed in Section <ref>. This demonstrates the superiority of our approach in capturing fine-grained textual and image clues in document-level multimodal news.§.§ Ablation Study§.§.§ Impact of Modalities on MSU.This section investigates the influence of visual and textual modalities on MSU. As shown in Table <ref> and Table <ref>, combining the two modalities significantly improves the detection and localization accuracy. Such comparisons confirm our hypothesis that multimodal cues can benefit sarcasm understanding at the very beginning of Section <ref>. §.§.§ Impact of Image-Text Fusion Method.As discussed in Section <ref>, this paper presents a new method to fuse visual and textual modalities for MSU.To evaluate the effectiveness of the fusion method, we compare ours with a baseline where the encoded image pixels and text tokens are directly concatenated for sarcasm detection and localization. As shown in Table <ref> and Table <ref>, the proposed fusion method can better capture and align the visual and textual sarcasm clues and achieves better accuracy than the sample concatenation fusion method. These findings show the superiority of our method for the challenging MSU task.§.§ DocMSU with Large Language ModelsWe conducted experiments on large language models (LLMs), including GPT-4 <cit.>, VideoChat <cit.>, Otter <cit.>, and mPLUG-Owl <cit.>.For the instances with obvious satirical clues, LLMs can yield satisfied performance. While for the challenging ones, LLMs still struggle to accurately comprehend sarcasm. Detailed results are presented in Appedix: Tests on LLMs.Particularly, we observe that LLMs encounter difficulty in accurately identifying the satirical object and its underlying cause when the text does not obviously indicate satire. 
At the same time, LLMs excel in providing insightful explanations when the news involves intricate cultural knowledge and social context.

§ CONCLUSION

This paper presents DocMSU, a new benchmark for the challenging task of document-level multimodal sarcasm understanding in the news field. Compared with existing benchmarks, our DocMSU is more comprehensive, more challenging, and involves higher-quality annotations. We believe DocMSU will encourage the exploration and development of various downstream tasks for document-level multimodal sarcasm perception closer to real-world applications. The proposed DocMSU also introduces two new challenges. This motivates us to present a new model that aims to capture fine-grained visual sarcastic clues in the image and word-level clues in documents and align them for effective fusion. Experiments on two MSU tasks show the effectiveness of our model on the challenging DocMSU. Future work could focus on MSU across various cultures, as well as the interesting expressive differences between males and females.

§ ETHICAL STATEMENT

We have the right to use the content collected from three websites, including TheOnion, UNNews, and NewsThump, as these sites automatically grant permission to users who follow their online rules. We carefully studied these rules and strictly conformed to their requirements during data collection and annotation. These online copyright requirements are available on the above websites. To further fortify ethical compliance, we will take the following steps:
1) Implementing rigorous data anonymization techniques to safeguard personal information.
2) Ensuring transparency about the data sources and collection methods in our revised manuscript.
3) Committing to ongoing scrutiny and readiness to remove or alter data that may be deemed ethically inappropriate or has been collected from sources that do not provide the necessary authorization.
4) Developing an online agreement requiring every user of the dataset to strictly conform to the rules of the websites from which we collected the data.

§ ACKNOWLEDGMENTS

This work was partially supported by the joint funds for Regional Innovation and Development of the National Natural Science Foundation of China (No. U21A20449), the Beijing Natural Science Foundation under Grant M21037, and the Fundamental Research Funds for the Central Universities under Grant 2242022k60006.
Cosmological Constraints from Combining Galaxy Surveys and Gravitational Wave Observatories

^1 Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637, USA
^2 Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA
^3 Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
^∗ Corresponding author: [email protected]

Variations in survey properties due to both observational and astrophysical selection effects generate substantial systematic errors in large-scale structure measurements in optical galaxy surveys on very large scales. On such scales, the statistical sensitivity of optical surveys is also limited by their finite sky coverage. By contrast, gravitational wave (GW) sources appear to be relatively free of these issues, provided the angular sensitivity of GW experiments can be accurately characterized. We quantify the expected cosmological information gain from combining the forecast LSST 3×2pt analysis (the combination of three two-point correlations of galaxy density and weak lensing shear fields) with the large-scale auto-correlation of GW sources from proposed next-generation GW experiments. We find that in ΛCDM and wCDM models, there is no significant improvement in cosmological constraints from combining GW with LSST 3×2pt over LSST alone, due to the large shot noise for the former; however, this combination does enable an estimated ∼6% constraint on the linear galaxy bias of GW sources. More interestingly, the optical-GW data combination provides tight constraints on models with primordial non-Gaussianity (PNG), due to the predicted scale-dependent bias in PNG models on large scales. Assuming that the largest angular scales that LSST will probe are comparable to those in Stage III surveys (ℓ_min ∼ 50), the inclusion of next-generation GW measurements could improve constraints on the PNG parameter f_NL by up to a factor of ≃ 6.6 compared to LSST alone, yielding σ(f_NL)=8.5. These results assume the expected capability of a network of Einstein Telescope-like GW observatories, with a GW detection rate of 10^6 events/year. We investigate the sensitivity of our results to different assumptions about future GW detectors as well as different LSST analysis choices.

§ INTRODUCTION

As cosmic surveys continue to grow in scale, they enable measurements of large-scale structure (LSS) with ever-greater precision that in principle translate into ever-tighter cosmological constraints. The current state of the art in extracting cosmological information from large photometric surveys is the so-called "3×2pt" analysis, which combines the information from three two-point correlation functions: the galaxy auto-correlation function, the weak lensing (cosmic shear, or shear-shear) correlation function, and the galaxy-shear correlation function. All three major Stage III photometric survey collaborations[The Stage-III and Stage-IV classification was introduced in the Dark Energy Task Force report <cit.>, where Stage-III refers to the dark energy experiments that started data taking in the 2010s and Stage-IV to those that started or will start in the 2020s.] have carried out such analyses: the Dark Energy Survey (DES, <cit.>), the Kilo-Degree Survey (KiDS, <cit.>) and the Hyper Suprime-Cam Subaru Strategic Program (HSC-SPP, <cit.>).
While these correlations can be measured over a very wide range of length scales, so far cosmological inference from them has focused on intermediate spatial scales only – typically between ∼ 8 h^-1 Mpc and ∼ 100 h^-1 Mpc – due to limitations at both larger and smaller scales. Smaller-scale measurements, although having higher signal-to-noise, are challenging to model due to nonlinearities in the density field, uncertainties in the galaxy-dark matter connection, and complex baryon physics <cit.>. Structure on very large scales is easier to model, but the measurements suffer from observational and astrophysical systematic effects. For example, in the Year 3 (Y3) DES galaxy clustering measurements, it was shown that significant corrections (larger than 5σ, where σ is the measurement uncertainty) had to be made to the measurements due to observational systematic effects on angular scales larger than 250 arcmin (ℓ ≲ 45), as shown in Fig. 2 of <cit.>. To fully exploit the cosmological information coming from galaxy surveys, it is imperative to re-examine some of these scale limitations with new tools and data that become available over time.

While there has been substantial recent work employing emerging techniques and simulations to harness information from small scales, e.g., <cit.>, this paper focuses on extracting information from the largest scales. The Vera C. Rubin Observatory's Legacy Survey of Space and Time <cit.> will enable a 3×2pt analysis on large scales of unprecedented statistical power. But as an optical photometric survey, it will be subject to the same sources of astrophysical and observational systematics as Stage III surveys. Therefore, we consider using gravitational wave (GW) sources as a complementary probe of very large-scale structure and combining GW and LSST measurements. The rationale for this approach is that on very large scales, GW observations suffer significantly less from observational and astrophysical systematic effects compared to optically selected galaxies, and their selection function can be known extremely well <cit.> – unlike electromagnetic signals, GWs are not affected by issues such as spatially varying Galactic dust extinction, variations in survey depth due to time-varying observing conditions and spatially varying stellar density, and other similar phenomena. In addition, gravitational waveforms are a direct prediction of general relativity <cit.>. Moreover, a distinctive feature of GW observations is their full sky coverage, a significant advantage over ground-based electromagnetic observations, which typically have limited sky coverage due to their geographic location and the need to discard a substantial portion of the sky (25-40%) to obtain cosmological information, due to Galactic foregrounds. GW sources can thus be used to robustly measure clustering on very large angular scales.

With a limited number of terrestrial GW detectors simultaneously operating, most GW sources cannot be localized on the sky to better than a few square degrees. As a result, the angular clustering of GW sources cannot be measured on small scales. This is not a limitation for our analysis, however, as LSST will provide high signal-to-noise measurements on such scales. As we will see, combining LSST 3×2pt with GW clustering also allows us to break the degeneracy between the bias of GW host-galaxy sources, b_GW, and the amplitude of mass clustering in the universe, σ_8, yielding interesting constraints on b_GW.
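To give a rough sense of the scales involved, the criterion ℓ < π/Θ_GW adopted in Sec. 2 translates a localization area into the maximum usable clustering multipole; the sketch below (taking Θ_GW as the side of a square localization area, an approximation we adopt purely for illustration) shows the numbers at stake:

```python
import numpy as np

def ell_max_from_localization(area_deg2):
    """Maximum multipole for GW clustering, taking the angular resolution
    Theta_GW as the side of a square localization area and requiring
    ell < pi / Theta_GW (see Sec. 2)."""
    theta_rad = np.sqrt(area_deg2) * np.pi / 180.0
    return np.pi / theta_rad

for area in [1.0, 10.0, 100.0]:  # localization areas in square degrees
    print(f"{area:6.1f} deg^2  ->  ell_max ~ {ell_max_from_localization(area):5.1f}")
# ~10 deg^2 localization corresponds to ell_max ~ 57, comparable to the
# GW setups defined in Sec. 3.2
```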
Constraining b_GW is of interest in its own right, since the bias of GW host galaxies is not currently well determined (e.g., <cit.>), and constraining it will have implications for understanding the formation channel of GW sources <cit.>.

Previous works have studied the clustering of GW sources and its cross-correlation with galaxy positions. <cit.> and <cit.> considered GW×LSS cross-correlations to constrain cosmological models, particularly modified gravity theories. <cit.> used cross-correlations to infer H_0, and <cit.> constrained cosmological parameters including the GW bias. <cit.> and <cit.> studied such correlations to gain an astrophysical understanding of GW sources. <cit.> and <cit.> studied the impact of more realistic observational effects on measuring and modeling GW clustering. <cit.> and <cit.> considered the weak lensing of GW events to constrain extended cosmological models. Some studies have also cross-correlated GW sources with other large-scale structure tracers such as supernovae <cit.> and HI intensity mapping <cit.>.

Our paper takes a different direction from previous work in two ways: 1) we study the complementary combination of GW clustering and LSST 3×2pt, and 2) we explore constraints on both the standard ΛCDM and wCDM cosmological models as well as primordial non-Gaussianity (PNG), specifically, the local PNG model parametrized by f_NL. To date, the Planck cosmic microwave background (CMB) measurements have provided the most stringent constraints on PNG <cit.>, but these measurements are already near the cosmic variance limit of precision, so future CMB experiments are unlikely to significantly improve upon these results. Thus, the new frontier for constraining PNG lies in large-scale surveys of the galaxy and matter distributions. We note that <cit.> also considered GW clustering for constraining f_NL, but they did not combine GW with galaxy 3×2pt analyses, which offers some major advantages that we explore in this work, such as the self-calibration of the GW host-galaxy bias.

This paper is structured as follows. In Sec. <ref>, we present the theoretical framework, in Sec. <ref>, we describe the analysis setups for galaxy surveys and GWs, and in Sec. <ref> we present the results of our analysis. We conclude in Sec. <ref>.

§ MODELING

§.§ Large-Scale Structure in ΛCDM and wCDM

Given the limited photometric redshift precision achieved by imaging surveys that employ a handful of optical passbands, the lowest order LSS clustering statistic commonly measured is the 2D angular correlation function or, equivalently, the 2D angular power spectrum, C^ij(ℓ), within and between photometric-redshift bins i,j. Here, ℓ is the 2D multipole moment, which is roughly related to angular separation on the sky, θ, by ℓ ∼ π/θ. In the 3×2pt analysis, we focus on three angular power spectra: the auto-correlation of a foreground (or lens) galaxy population, C_gg, the cross-correlation between lens-galaxy position and source-galaxy shear, C_gγ, also known as galaxy-galaxy lensing, and the auto-correlation of source-galaxy shear, C_γγ, also known as cosmic shear. We follow common practice in using the first-order Limber approximation <cit.> to relate the angular galaxy-galaxy, galaxy-shear, and shear-shear power spectra to the corresponding 3D power spectra,

C_gg^ij(ℓ) = ∫ dχ [N_l^i(χ) N_l^j(χ)/χ^2] P_gg(k=(ℓ+1/2)/χ, z(χ)),

C_gγ^ij(ℓ) = ∫ dχ [N_l^i(χ) q_s^j(χ)/χ^2] P_gm(k=(ℓ+1/2)/χ, z(χ)), and

C_γγ^ij(ℓ) = ∫ dχ [q_s^i(χ) q_s^j(χ)/χ^2] P_mm(k=(ℓ+1/2)/χ, z(χ)).
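Before unpacking the ingredients of these integrals (defined just below), it may help to see their numerical structure; a simple quadrature sketch of Eqn. (1a), with toy stand-ins for the kernel and power spectrum, is:

```python
import numpy as np

def limber_cl_gg(ell, chi, N_i, N_j, P_gg):
    """Quadrature sketch of Eqn. (1a):
    C_gg^ij(ell) = int dchi N_i(chi) N_j(chi) / chi^2 * P_gg((ell+1/2)/chi, z(chi)).
    chi is the comoving-distance grid [Mpc]; N_i, N_j are the radial kernels
    on that grid; P_gg is a callable P_gg(k, chi)."""
    k = (ell + 0.5) / chi
    return np.trapz(N_i * N_j / chi**2 * P_gg(k, chi), chi)

# toy, purely illustrative ingredients:
chi = np.linspace(100.0, 4000.0, 500)                 # Mpc
N = np.exp(-0.5 * ((chi - 1500.0) / 400.0) ** 2)
N /= np.trapz(N, chi)                                 # unit-normalized kernel
P_toy = lambda k, chi: 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02) ** 2.5)  # rough P(k) shape, Mpc^3
print(limber_cl_gg(ell=100, chi=chi, N_i=N, N_j=N, P_gg=P_toy))
```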
In Eqns. (1a)-(1c), P_mm(k,z) is the 3D matter power spectrum, P_gm(k,z) is the 3D (lens) galaxy-matter power spectrum, and P_gg(k,z) is the 3D lens-galaxy power spectrum, χ is the comoving distance, z is redshift, and N_l^i describes the true redshift distribution of the lens-galaxy population in the ith photometric-redshift bin:

N_l^i(χ) = (n_l^i(z)/n̅_l^i) (dz/dχ),

where n_l^i is the lens-galaxy redshift distribution, and n̅_l^i is the mean number density of lens galaxies. In addition, q_s^i is the lensing kernel of the source-galaxy population:

q_s^i(χ) = (3H_0^2 Ω_m/2c^2) p(ℓ) (χ/a(χ)) g(χ),

where a is the scale factor, p(ℓ) is the ℓ-dependent prefactor in the lensing observables due to the spin-2 nature of the shear[Here we use p(ℓ) = (ℓ + 1/2)^-2, which corresponds to the 1st-order extended Limber flat projection, as defined in Table 1 of <cit.>.], and g(χ) is the lensing efficiency kernel:

g(χ) = ∫_χ^χ_h dχ' (n_s^i(z)/n̅_s^i) (dz/dχ') (χ'-χ)/χ',

where n_s^i(z) is the redshift distribution of the source galaxies in the ith photometric redshift bin, n̅_s^i is the mean number density of the source galaxies, and χ_h is the comoving distance to the horizon.

The Limber approximation assumed in Eqn. (1) is known to break down at large angular scales, that is, for small ℓ. This is particularly the case if the kernel functions N_l^i(χ) and q_s^i(χ) are narrowly peaked functions of χ. Given the relatively broad redshift bins we adopt in this analysis (see Fig. <ref>) and scaling from the results in <cit.>, we expect this to lead to a relatively small error in our analysis for angular multipoles ℓ ≳ 3, one outweighed by the significant savings in computation time. As an additional complication, the mean of any estimator of C_ℓ in a survey of finite angular extent Θ will differ appreciably from the theoretical values in Eqn. (1) for multipoles ℓ < π/Θ. In practice, for the analysis set-up we consider, this is a very small correction, since GW surveys are effectively all-sky, and π/Θ_LSST is comfortably below the minimum angular multipole that we consider in the analysis of LSST data.

Throughout the paper, if not otherwise specified, we take the matter power spectrum P_mm to be that given by the spatially flat 5-parameter ΛCDM model with fiducial parameters Ω_c = 0.22, σ_8=0.8, Ω_b = 0.0448, h=0.71, and n_s=0.963. We also consider the wCDM model, with an additional free parameter given by the (constant) equation of state parameter of dark energy, w_0, with fiducial value w_0=-1. The non-linear regime of the matter power spectrum is modeled using the Halofit prescription <cit.>. Since we are working on large length scales, we assume a linear, scale-independent galaxy bias model for the lens galaxies, in which

P_gg(k,z) = b_g(z) P_gm(k,z) = b_g^2(z) P_mm(k,z).

For GW sources, the populations of lens galaxies in Eqns. (1a) and (2) are replaced by those of GW sources, and we assume that the host galaxies of GW sources are also linearly biased with respect to the matter distribution but with a different bias parameter, b_GW(z). Assuming that GW sources can typically be localized to within an area of size Θ_GW^2, we only consider clustering of GW sources on angular scales ℓ < π/Θ_GW, so that localization corrections to the estimator for Eqn. (1a) should be very small.

To calculate angular power spectra and their covariances (see Sec. 3), we use the new differentiable cosmology library jax_cosmo <cit.> to perform our analysis. Testing a variety of setups (see Sec.
<ref>, <ref>) is core to our investigation, and jax_cosmo's differentiable functions provide the crucial ability to rapidly perform Fisher forecasts in a robust and stable way.

§.§ Primordial Non-Gaussianity and f_NL

In addition to the canonical ΛCDM and wCDM models, we consider ΛCDM models with non-Gaussian initial conditions. Inflation is an epoch of rapid expansion in the early Universe that is thought to generate the primordial density fluctuations that subsequently evolve into the structures observed today. In the simplest inflation models involving a single, weakly coupled scalar field, the initial density perturbations are predicted to be very close to Gaussian random fields. In models with multiple scalar fields, the initial perturbations can be (potentially measurably) non-Gaussian. We focus on models with local-type primordial non-Gaussianity (PNG) <cit.>, in which the initial conditions for the gravitational potential are given by

Φ(x⃗) = ϕ(x⃗) + f_NL (ϕ(x⃗)^2 - ⟨ϕ^2⟩),

where Φ(x⃗) is the gravitational potential, ϕ(x⃗) is a Gaussian field, and f_NL is a parameter that characterizes the amplitude of the PNG. In the simplest single-field inflation models, we generically expect f_NL ≪ 1, so any indication of f_NL > 1 would imply that multiple fields were present during inflation <cit.>. PNG alters the behavior of the galaxy bias and causes galaxies to exhibit a scale-dependent bias with an amplitude that grows on large scales <cit.>; to lowest order, the galaxy bias in Eqn. (5) is replaced by

b_g(z) → b_g(z) + b_f_NL(k,z),

where the additional scale-dependent bias term is given by

b_f_NL(k,z) = [3 f_NL δ_c (b_g(z) - 1)/k^2] [Ω_m H_0^2/(c^2 D(z) T(k))].

Here, δ_c = 1.686 is the critical threshold for halo collapse in the spherical collapse model, D(z) is the linear growth factor (relative to the present), and T(k) is the ΛCDM transfer function. The additional bias term grows towards large scales, as it is proportional to b_f_NL ∝ 1/k^2 <cit.>.

Formally, the choice above of (b_g - 1) can be generalized to (b_g - p), where p is some constant. Most works employ p = 1, which is derived by assuming universality of the halo mass function. However, there are motivations for selecting other choices for p as well <cit.>. We employ p = 1 to be consistent with previous works. Alternative choices of p that are higher (lower) than this value will result in weaker (stronger) constraints on f_NL, since lower values of p increase the scale-dependent bias, b_f_NL, for galaxies with b_g > 1.

To leading order, f_NL impacts structure formation only through the bias terms in P_gg and P_gm, and we do not include its impact on the matter power spectrum itself. This is a good approximation for linear/quasi-linear scales, as the f_NL-based correction goes as ΔP_mm/P_mm ∼ f_NL × 10^-5. For values of f_NL ∼ 𝒪(10), as found in our constraints below, this correction is completely subdominant. However, simulations show that the impact of f_NL on the matter power spectrum is most prominent on non-linear scales <cit.>, since the non-Gaussianity of the initial density field changes the abundance of massive halos, which impacts the matter and halo power spectrum on small scales. In principle, this means that our analysis underestimates the full impact of f_NL and thus overestimates the expected error on it. <cit.> show that a lensing-only analysis of LSST Year 10, with realistic scale cuts, leads to constraints of σ(f_NL) = 92.
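Stepping back to Eqn. (8): a minimal numerical sketch (with p = 1; the growth and transfer functions below are crude placeholders for a proper Boltzmann-code calculation) illustrates the 1/k^2 growth of this term:

```python
import numpy as np

delta_c = 1.686
c_kms   = 299792.458          # km/s
H0      = 71.0                # km/s/Mpc (h = 0.71)
Omega_m = 0.22 + 0.0448       # Omega_c + Omega_b (fiducial)

def b_fnl(k, z, b_g, fnl, D, T):
    """Scale-dependent bias of Eqn. (8), with k in 1/Mpc; D(z) is the linear
    growth factor (normalized to 1 today) and T(k) the transfer function."""
    return (3.0 * fnl * delta_c * (b_g - 1.0) / k**2
            * Omega_m * H0**2 / (c_kms**2 * D(z) * T(k)))

# crude placeholder growth and transfer functions:
D_toy = lambda z: 1.0 / (1.0 + z)
T_toy = lambda k: 1.0 / (1.0 + (k / 0.05) ** 1.2)

for k in [1e-3, 1e-2, 1e-1]:
    print(f"k = {k:.0e} /Mpc :  b_fnl = {b_fnl(k, 0.5, b_g=1.5, fnl=10.0, D=D_toy, T=T_toy):+.3f}")
# the correction is O(0.5) at k ~ 1e-3/Mpc for fnl = 10, but negligible at k ~ 0.1/Mpc
```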
That lensing-only bound is broader than the constraints on f_NL found in our analysis below, which indicates that the inclusion of f_NL in the matter power spectrum would have negligible impact on our constraints compared to the signal due to scale-dependent bias. Our analysis of PNG models uses a modified version of the jax_cosmo library that includes the f_NL signatures on the bias above[<https://github.com/DhayaaAnbajagane/jax_cosmo.git>]. An illustration of this effect on weak lensing and galaxy clustering observables is shown in Fig. <ref>, where we see that the effect is most prominent in the galaxy auto-correlation on large scales. The tightest current bounds on f_NL come from Planck, with f_NL = -0.9 ± 5.1, and current LSS measurements provide σ(f_NL) ∼ 25-50. Forecasts indicate that future galaxy surveys have the statistical power to decrease this by 1-2 orders of magnitude <cit.>. We extend these past efforts by including other tracer fields, namely the GW sources, and by including weak lensing to self-calibrate the galaxy bias parameters and further constrain the wCDM and ΛCDM cosmological parameters, which in turn improves the marginalized constraints on f_NL.

§ ANALYSIS SETUP

In this section, we specify the different sample characteristics and parameter priors that we use for the forecasts. In Sec. <ref>, we describe the samples from LSST, in Sec. <ref> we do the same for the samples from GW observations, and in Sec. <ref> we describe the procedure we use to compute angular correlation functions and Fisher matrices and predict the constraining power of each of the setups.

§.§ Galaxy Surveys: The LSST Y10 setup

To generate the 3×2pt data vector we consider a survey corresponding to the LSST Year 10 setup covering 14,300 deg^2 of extragalactic (high Galactic latitude) sky, which corresponds to a fraction of the sky f_sky=0.3466. Galaxies are divided into source and lens populations – sources are objects for which we use both position and shape information, and lenses are objects for which we use only the position information. There is overlap between the two sets of objects, as some galaxies fall into both samples. We show the true redshift distributions for each of the 10 lens and 5 source tomographic photo-z bins in the first two panels of Fig. <ref>. The overall n(z) distribution is modeled as in <cit.>:

dN/dz ∝ z^2 exp[-(z/z_0)^α],

with (z_0, α)=(0.26, 0.90) for lenses and (0.11, 0.68) for sources. The lens distribution is separated into 10 equipopulated bins, which are convolved with Gaussians of width corresponding to the expected photo-z precision – 0.03(1+z) for lenses and 0.05(1+z) for sources. In Table <ref>, we list the peak redshift, total galaxy surface number density, and the fiducial value of the linear galaxy bias for each lens redshift bin, and the assumed shape noise (intrinsic variation in the ellipticity of source galaxies) for source galaxies. The source bins have been chosen to have the same surface number density in each bin. These values are taken from the LSST DESC Science Requirement Document (DESC SRD, <cit.>). We implicitly assume uninformative priors on the cosmological parameters and on the galaxy bias parameters. In Fig.
<ref> we show the forecast ΛCDM 2D angular power spectrum for LSST Year 10 lens bin 3 and source bin 5 for each of the probes we use for the 3×2pt analysis: galaxy clustering (C_gg), galaxy-galaxy lensing (C_gγ) and cosmic shear (C_γγ). For the uncertainties, we assume a Gaussian covariance matrix, the diagonal elements (i=j) of which are given by

Cov(C_gg(ℓ)) = [2/((2ℓ+1) f_sky Δℓ)] (C_gg(ℓ) + 1/n_gal)^2,

Cov(C_γγ(ℓ)) = [2/((2ℓ+1) f_sky Δℓ)] (C_γγ(ℓ) + σ_e^2/n_gal)^2,

Cov(C_gγ(ℓ)) = [1/((2ℓ+1) f_sky Δℓ)] [C_gγ(ℓ)^2 + (C_gg(ℓ) + 1/n_gal)(C_γγ(ℓ) + σ_e^2/n_gal)],

where Δℓ is the width of the chosen ℓ-binning, n_gal is the surface number density of lens or source galaxies in the ith redshift bin, and σ_e is the shape noise for the source galaxies. We generate angular power spectra and compute the covariance matrix using the jax_cosmo library. The covariance of the angular power spectra comes from two effects: cosmic variance and shot or shape noise. The cosmic variance term is proportional to the signal, C(ℓ), and increases at larger scales. Shot noise in galaxy clustering and shape noise in cosmic shear are inversely proportional to the galaxy number density n_gal – as the number density of observed objects increases, the shot/shape noise component decreases. In the left panel of Fig. <ref> we show the diagonal elements of the gg covariance matrix for LSST Y10 lens bin 7, with the separate shot noise and cosmic variance terms. For LSST Y10 lenses, cosmic variance dominates by several orders of magnitude over shot noise for all scales of interest, due to the relatively high number density (large depth) of the sample.

§.§.§ Range of scales

As discussed in the Introduction, cosmological analyses of LSS cover a limited range of length or angular scales. Table <ref> summarizes the large-scale cuts that have been used in recent LSS analyses of Stage III surveys. These cuts are imposed for two main reasons: (i) given the limited sky coverage of these surveys, especially for HSC and KiDS, there are few to no accessible modes on very large scales, leading to very large uncertainties; (ii) observational systematics due to Galactic foregrounds and varying observing conditions have been shown to have the biggest impact on the largest scales. There is currently ongoing effort to develop methodological improvements to mitigate and control the impacts of these systematic effects on large scales. If this campaign does not succeed, then large scales will need to be discarded or downweighted in upcoming analyses. Given the Stage III analyses, we choose ℓ_min=50 as the fiducial value for the LSST 3×2pt forecasts when we combine with GW clustering at very large scales. This cutoff corresponds approximately to the maximum of 250 arcmin that was used in the DES Y3 3×2pt analysis. In the analysis below, we also explore both larger and smaller ℓ_min values. Larger values could potentially be needed if the corrections for systematic effects on large scales are not found to be robust enough, particularly given the smaller statistical errors expected from LSST. We also explore the constraining power for lower ℓ_min values to understand the gain that would be possible from improved control of systematic effects on large scales. On small scales, we choose the ℓ_max values (see Table <ref>) for the galaxy clustering and galaxy-galaxy lensing probes following the harmonic space scale cuts from the DESC SRD <cit.>.
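The correspondence between these harmonic cuts and physical scales is just the Limber mapping ℓ ≈ k χ(z) - 1/2; a small sketch (assuming the flat fiducial background of Sec. 2.1) illustrates it:

```python
import numpy as np

c, H0, Om, h = 299792.458, 71.0, 0.2648, 0.71   # fiducial background

def chi_of_z(z, n=512):
    """Comoving distance [Mpc] in the flat fiducial model."""
    zz = np.linspace(0.0, z, n)
    return np.trapz(c / (H0 * np.sqrt(Om * (1 + zz) ** 3 + (1 - Om))), zz)

def ell_max(z, k_max_h=0.3):
    """Multipole corresponding to k_max [h/Mpc] at redshift z, via k = (ell+1/2)/chi."""
    return k_max_h * h * chi_of_z(z) - 0.5

for z in [0.3, 0.6, 0.9]:    # representative lens-bin redshifts
    print(f"z = {z:.1f} :  ell_max(k_max = 0.3 h/Mpc) ~ {ell_max(z):6.0f}")
# lower-redshift bins get correspondingly lower ell_max, as in the tabulated cuts
```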
These ℓ_max values correspond to a minimum physical scale cut of k_max = 0.3 h/Mpc, limited by the modeling of non-linearities and baryonic effects in the galaxy power spectrum. For cosmic shear, the DESC SRD used ℓ_max=3000, which is more aggressive than previous work; we choose to adopt a more conservative cut of ℓ_max=500 for our fiducial analysis. Later we explore the impact of extending the LSST analysis to smaller length scales.

§.§ Gravitational Wave Samples

For the GW samples, we include gravitational waves produced by coalescing binary black holes, black hole/neutron star binaries, and binary neutron stars. We expect these events to occur in galaxies, as they are home to the most intense star formation, implying that the distribution of GWs is expected to follow that of the underlying dark matter large-scale structure <cit.>. We describe each GW sample using three parameters that are determined by the chosen detector setups: the number of GW detections per year, their sky localization precision, and the redshift range of the detected sources. The number of detections n_GW determines the shot noise contribution to the clustering power spectrum – the number density of predicted GW events is orders of magnitude lower than that of galaxies in photometric surveys, which leads to a higher contribution from shot noise. The precision with which GW events can be localized on the sky determines the minimum angular scale (maximum ℓ) on which the clustering of GW events can be inferred. Lastly, the redshift range (and precision) determines the cosmological information contained in the GW source clustering. Redshift range overlap between the GW and LSST sources is beneficial, as it potentially strengthens the constraints on galaxy bias for both sets of sources. In addition, for future GW detector networks with high sensitivity, a large fraction of GW events are predicted to be concentrated at higher redshifts – the peak of the forecast GW n(z) distribution is at z ∼ 2, since this is the redshift at which the star formation rate peaks, and a significant fraction of events occur at even higher redshift.

We define three GW setups of increasing optimism, with different values for the detector network parameters n_GW, ℓ_max, and z range. The values of these parameters are listed in Table <ref>, and the corresponding redshift bins are shown in the right panel of Fig. <ref>. Setup 1 is the least optimistic, as it assumes a relatively low number density of events, moderate spatial localization (ℓ_max) and limited z range. It corresponds roughly to a third-generation (3G) GW detector network comprising the Einstein Telescope <cit.>, Cosmic Explorer <cit.>, and LIGO Voyager <cit.>. We estimate this network to have localization precision such that ℓ_max=30 over the redshift range 0<z<0.5, a median luminosity distance uncertainty of ∼13%, and 10^4 detections/year <cit.>. Setup 3 considers the ideal case in which all events are detected with extremely good localization; it is approximately based on a network composed of three ET-like third-generation telescopes. We use a GW detection rate of 10^6 events/year, with a ∼5% uncertainty in luminosity distance <cit.>. Using values computed in <cit.>, we define Setup 3 to cover the redshift range 0<z<3, with ℓ_max=90 over the redshift range 0<z<1.3 and ℓ_max = 57 for 1<z<3. Setup 2 is a middle scenario between 1 and 3, with 10^5 detections/year, a redshift range of 0<z<1.3, and localization allowing for analysis of scales with ℓ<60 over this range.
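The shot-noise penalty implied by these detection rates can be estimated directly (a back-of-the-envelope sketch; per-redshift-bin densities are lower still):

```python
import numpy as np

def gw_shot_noise(rate_per_yr, years=10.0):
    """Poisson noise level 1/n for all-sky GW clustering, with n the surface
    density of detections accumulated over the observing time [sr^-1]."""
    n_sr = rate_per_yr * years / (4.0 * np.pi)
    return 1.0 / n_sr

for rate in [1e4, 1e5, 1e6]:            # detections/year for Setups 1-3
    print(f"{rate:8.0e} /yr  ->  1/n = {gw_shot_noise(rate):.2e} sr")
# even Setup 3 (1e6/yr over 10 yr) gives 1/n ~ 1e-6 sr, several orders of
# magnitude above the shot noise of the deep LSST lens sample
```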
The redshift binning of Setup 1 comprises three bins over the range 0<z<0.5. Setup 2 contains the three Setup 1 bins, as well as an additional nine covering the range 0.4<z<1.3. Setup 3 is composed of the twelve Setup 2 bins and 8 more bins over the range 1<z<3. The right panel of Fig. <ref> shows these distributions (the details of how they are constructed are given below). Although the lower assumed uncertainty in luminosity distance in Setup 3 would allow us to use finer redshift binning over the range 1<z<3, testing has shown that in our analysis a greater proportion of cosmological information is contained in the z≲1 range than at higher redshift. Additionally, overlap between the GW and LSST redshift bins improves constraining power, and the LSST redshift distributions reside mostly in the 0<z<1.5 range. A larger number of GW redshift bins also increases computational complexity and decreases the number density of objects in each bin; the three sets of GW n(z) bins we propose represent a happy medium between fine binning of the lower-z range and computational complexity.

To generate the total GW redshift distribution, we assume that the number density of GW sources is proportional to the star formation rate described in <cit.>. The star formation rate function R_⋆(z) is used to compute the number of events expected at varying masses and redshifts:

R_⋆(z) = (1+z)^2.6/[1+((1+z)/3.2)^6.2],

dn/dz = A_s R_⋆(z) V_c(z)/(1+z),

where V_c is the differential comoving volume and A_s normalizes to the fraction of stellar mass that is converted to GW sources and our chosen number of detections per year. It is relevant to note that the GW formation channel affects the time delay between progenitor star formation and binary coalescence, and therefore the shape of the GW n(z) distribution. Our model assumes this time delay is relatively short (< 1 Gyr), which <cit.> finds is favored in all formation channels they consider. Additionally, we assume that in the z ranges of our setups, the detection rate of events is not dependent on redshift. GW events within these ranges have a signal-to-noise ratio above the detection threshold for all networks considered <cit.>; therefore we assume a fixed fraction are detectable, independent of redshift.

The binning choice of the GW redshift distribution depends on the parameters of the chosen detector setup. We divide the redshift range for a particular setup into n_bins subintervals such that each subinterval contains an equal number of GW events drawn from the total n(z) distribution. To transform these true redshift distributions into the observed distributions, we convolve the distribution in each subinterval with a Gaussian of width σ(z) determined by the detector's redshift uncertainty, as inferred from the above uncertainty in luminosity distance. This generates the set of n_bins redshift distributions shown in Fig. <ref>. Number density is determined based on the detection rate of the chosen network. We calculate the fraction of total events contained in each z range and divide by the number of bins within that range, giving the fraction of events contained within each individual (equipopulated) bin. Multiplying by the yearly detection rate with ten years of observation and dividing by the observed area (in this case, the full sky) gives us the number density of events per bin. For use with jax_cosmo's differentiable functionality, we convert the binned distributions to Smail approximations described by smooth functions.
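The total distribution underlying this binning follows Eqns. (11a)-(11b); a short script (assuming the flat fiducial background of Sec. 2.1 for the comoving volume element) reproduces its shape and high-redshift peak:

```python
import numpy as np

c, H0, Om = 299792.458, 71.0, 0.2648     # km/s, km/s/Mpc, Omega_m (fiducial)

def R_star(z):
    """Star-formation-rate shape of Eqn. (11a)."""
    return (1.0 + z) ** 2.6 / (1.0 + ((1.0 + z) / 3.2) ** 6.2)

z = np.linspace(1e-3, 3.0, 600)
E = np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))     # flat LCDM expansion rate
dchi_dz = c / (H0 * E)                            # Mpc
chi = np.concatenate(([0.0], np.cumsum(0.5 * (dchi_dz[1:] + dchi_dz[:-1]) * np.diff(z))))
dVc_dz = 4.0 * np.pi * chi ** 2 * dchi_dz         # comoving volume element, Mpc^3
dn_dz = R_star(z) * dVc_dz / (1.0 + z)            # Eqn. (11b), A_s dropped
dn_dz /= np.trapz(dn_dz, z)                       # normalize to unit area
print(f"dn/dz peaks at z ~ {z[np.argmax(dn_dz)]:.1f}")   # near the z ~ 2 quoted above
```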
The area under the curve of each bin is normalized to unity and then multiplied by the number-density scalar. The fiducial large-scale bias for the GW sources is computed for the median redshift of each bin using the parameterization

b_GW = 1.20(1+z)^0.59,

from <cit.>, who extract tracer power spectra from mock data-sets of GW events and fit them to a model of the true power spectrum; the result is the estimated bias above. The power spectrum model used in this case neglects third-order and scale-dependent contributions, assuming only contributions from the local matter field. The same bias parameterization is assumed in all of our setups. Bias evolution with redshift (as well as the time delay between star formation and binary coalescence) depends on the formation channel that leads to GW events, which is currently uncertain – in Section <ref> we test the impact of other choices for the GW bias. We generate the GW clustering angular power spectrum using the same jax_cosmo framework as for LSST. This same software also computes the cross-covariance between the GW C(ℓ) arrays and the 3×2pt probe from LSST.

§.§ Forecasting procedure

We perform a Fisher forecast for the constraining power of the different probes under different setups. The Fisher matrix elements in parameter space are defined by

F_mn = ∑_l_1,l_2 [dC(ℓ)_l_1/dp_m] (𝐂^-1)_l_1l_2 [dC(ℓ)_l_2/dp_n],

where 𝐂^-1 is the inverse of the covariance matrix, and dC(ℓ)_l_1/dp_m is the derivative at point l_1 in the C(ℓ) data vector (which includes all angular and cross power spectra, including across redshift) with respect to parameter p_m. In the fiducial case, F has dimension (n_cosmo params + n_LSST lens bins + n_GW bins)^2, since there is an undetermined galaxy bias parameter for each LSST lens and GW redshift bin, and n_cosmo params=5 (ΛCDM) or 6 (wCDM or f_NL extension). We do not consider other nuisance parameters, such as those related to the n(z) distributions or the shear multiplicative bias. Thus, our results assume those systematics are subdominant. We construct the Fisher matrix from Jacobian and covariance matrices. Since we use the differentiable jax_cosmo framework to generate the angular power spectra, it is trivial to perform these derivatives in a stable way. The (l,m)th term of the Jacobian matrix is defined as

𝐉_lm = ∂C(ℓ)_l/∂p_m,

for the lth value in the C(ℓ) array and varied parameter p_m. We first compute one covariance and one Jacobian matrix containing all LSST Y10 lens, LSST Y10 source, and GW probes, spanning the range 2<ℓ<3000 and using an f_sky of 1. We then apply cuts to remove data points/indices corresponding to LSST lens and source bins that do not fall within the actual ℓ cuts given in the tables above. The covariance matrix terms of all LSST-associated data points assume f_sky=0.3466, whereas those of GW-associated data points assume f_sky = 1. We compute the Figure of Merit (FoM) as

FoM_θ = 1/√(det[(F^-1)_θ]),

where F^-1 is the inverted Fisher matrix of all parameters varied in the analysis (cosmological and galaxy bias), and θ represents the selection of parameters to be considered in the FoM.

§ RESULTS

In this section, we present the main results from this study. Sec. <ref> discusses ΛCDM and wCDM results for the baseline LSST 3×2pt analysis, for the LSST analysis extended to very large scales, and for the baseline LSST analysis augmented by GW clustering on very large scales. Sec. <ref> extends the results to PNG and constraints on f_NL. In Sec.
<ref>, we vary some of our analysis choices to determine the impact of these setup changes and assumptions on our results. These variations include the LSST scale cuts (<ref>) as well as the GW galaxy bias (<ref>) and GW scale cuts (<ref>).

§.§ Λ and wCDM

Here we present the results for the ΛCDM and wCDM models. For now, we only consider the most optimistic case for the GW detector network (GW setup 3). As an example, Figure <ref> shows results for 3 different setups in the context of the wCDM model: the baseline LSST Y10 3×2pt analysis (green), the LSST analysis extended to very large scales, ℓ>2 (purple), and the LSST baseline combined with GW source clustering in setup 3 (light blue). The relative gains in precision of cosmological constraints for the latter two cases relative to the LSST baseline are given in Table <ref> for both ΛCDM and wCDM. For wCDM, the addition of the very large scales (2<ℓ<50) to the baseline LSST Y10 3×2pt analysis leads to a fractional improvement in the parameter constraints of between 2.8 and 6.2%. The ΛCDM results are qualitatively similar. Note that these results assume that residual systematic effects on very large scales (ℓ < 50) in LSST will be negligible. By comparison, combining the baseline LSST analysis with GW clustering at large scales improves parameter constraints over the baseline LSST analysis by less than 1%. This difference is not surprising, as even in the optimistic GW setup, the shot noise is much larger than for LSST (see Fig. <ref>). However, the neglect of systematic effects in the very large-scale GW measurements appears to be better justified than for LSST. Another result of this combined analysis is that in the ΛCDM model we are able to constrain the 19 GW source galaxy bias values to between 2.7% (GW bin 1) and 12% (GW bin 19) precision. This contrasts with a much larger ∼218% precision in GW galaxy bias from GW clustering alone, due to the degeneracy between σ_8 and the GW host bias. We find similar results for wCDM, with precision between 2.8% (GW bin 1) and 12% (GW bin 19) from LSST + GW and 245% from GW alone. This is particularly interesting, as constraining the galaxy bias of GW sources is an active area of study and can help us understand the galaxy populations that host GW events and thus GW formation pathways <cit.>.

In the last three columns of Table <ref>, we explore the impact on parameter constraints, and on the gain from adding very large-scale measurements, if we adopt more aggressive small-scale analysis cuts in the context of wCDM. For cosmic shear, ℓ_max^src is increased from 500 to 3000, and for clustering and galaxy-shear the maximum wavenumber is increased from 0.3 to 0.6 h/Mpc. The inclusion of smaller scales increases the constraining power of the baseline LSST Y10 analysis, reducing the relative improvement from measurements on very large scales: for wCDM, the addition of LSST 3×2pt large scales improves constraints on cosmological parameters by 0.9-2.6% and the addition of GWs by 0.04-0.5%. Results for ΛCDM are similar.

§.§ Primordial non-Gaussianity and f_NL

Here we present results for the ΛCDM model with primordial non-Gaussianity characterized by f_NL. Using the same setup as in Sec. <ref>, we first consider the most optimistic case for the GW detector network (GW setup 3). We then explore in Sec. <ref> how changing the setup affects the results.

§.§.§ Large scales and f_NL

Fig.
<ref> shows the constraints on the ΛCDM+f_NL parameters from the baseline LSST Y10 3×2pt alone (green), with the addition of GW clustering on large scales (blue), and with the addition of the LSST 3×2pt analysis on large scales (purple). The baseline constraints on the ΛCDM parameters are slightly weaker than those in Table <ref> due to the marginalization over f_NL. We see that the addition of information from very large scales (ℓ<50), either from LSST or GW, dramatically improves the constraint on f_NL – from σ(f_NL)=76.5 to σ(f_NL) ≃ 8.5 for GWs and σ(f_NL) ≃ 4.6 for LSST large scales. This is not surprising, given that the scale-dependent bias in the PNG model grows at large scales. (Some previous works have found LSST galaxy clustering alone to provide tighter constraints on f_NL <cit.>, due to either using a lower ℓ_min or marginalizing over fewer parameters than we do here.) Fig. <ref> also shows that the addition of very large-scale information noticeably improves the constraints on the Hubble parameter h; this is discussed further in Sec. <ref>. Although the extension of the LSST 3×2pt analysis to very large scales yields tighter statistical constraints than the addition of GW clustering, as noted above we expect the latter to be much more robust to systematic errors.

§.§.§ Dependence on GW detector network parameters

Fig. <ref> shows how the uncertainty on f_NL depends on the GW setup. Of the three GW detector network parameters we vary, the number of detections per year and the redshift range significantly impact the constraining power on f_NL, while ℓ_max(GWs) is relatively unimportant. Once n_GW increases above ∼10^5 detections/year, the drop in shot noise is such that the large-scale GW clustering measurement provides a useful constraint in combination with LSST. Additionally, we find that a z_max of at least ∼1.3 is required for GW clustering to add significant constraining power. For reference, the LSST Y10 lenses have a z_max of ∼1.4. This is important for multiple reasons – having more GW redshift bins allows for more cross-bin correlation and therefore tighter constraints, and higher z_max also means more overlap with LSST and a larger fraction of total GW events captured. At a z_max of 1.3, GWs cover nearly the entire LSST lens range and most of the LSST source range. However, increasing z_max to 3 allows us to include 8 more redshift bins and many more GW events – the peak of the GW n(z) distribution is around z=2. By contrast, we find that better localization of events and therefore higher ℓ_max has minimal effect on constraining power. This is because most information on f_NL is found at extremely large scales.

§.§ Sensitivity of results to analysis choices

We now investigate how changes in the LSST large- and small-scale cuts, the assumed galaxy bias of GW sources, and the GW large-scale cut would affect our results.

§.§.§ LSST scale cuts

Since the maximum angular scale reachable by the LSST 3×2pt analysis is not entirely certain, we investigate how different values of ℓ_min(LSST) would impact our results. We summarize the results in Fig. <ref>. As expected, raising ℓ_min(LSST) reduces the total amount of information gathered, and vice versa. Constraints on f_NL already come almost entirely from the GW clustering, so raising ℓ_min(LSST) has little impact on the f_NL uncertainty after the addition of GWs. As an example, constraints using ℓ_min(LSST)=90 are listed in Table <ref>. Note that ℓ_min(LSST) is determined independently of ℓ_max(GW), so changes in the former do not implicitly refer to changes in the latter.
Thus, the ranges of scales for LSST are allowed to overlap those of the GWs. In Fig. <ref> we see different sensitivities of the ΛCDM parameters to the LSST large-scale cut. In particular, as ℓ_min(LSST) increases, the addition of GW information has greater relative impact upon the constraints on Ω_c, Ω_b, h, and b_gal (i.e., the gap between the LSST-only and LSST + GW points increases). This greater dependence on large scales is due in part to the relation of Ω_m (=Ω_c+Ω_b) and h to the location of the peak of the 3D power spectrum, k_peak. To see this, we note that the value of k_peak is close to the comoving value of k_eq, the inverse Hubble scale at the time of matter-radiation equality, and k_eq is proportional to (Ω_m h^2)^1/2. For the fiducial ΛCDM parameters, k_peak ≃ 0.0187 h/Mpc, which for the LSST redshift range corresponds to ∼20<ℓ<80. Further information on Ω_b and Ω_c is also found at higher k, but h is particularly dependent on large scales, which is why we see GWs impact constraints most prominently in the h panel of Fig. <ref>, and in the h contours of Fig. <ref>. In contrast, we see that σ_8 and n_s are much less reliant on large scales – removing LSST information on large scales does not cause the addition of GW large-scale information to become more impactful.

We also test the impact of more aggressive small-scale cuts for the LSST sources and lenses. Results for k_max^lens=0.6 h/Mpc, ℓ_max^src=3000 are listed in Table <ref> (column 4), and results for a range of (k_max^lens, ℓ_max^src) values are shown in Fig. <ref>. Little information on f_NL is found at small scales, so while a more aggressive small-scale cut strengthens the constraints on the other ΛCDM parameters, the impact on f_NL from adding GWs is insensitive to this change.

§.§.§ Bias of GW-source host galaxies

Given the relatively small number of GW events to date, the bias of the host galaxies of GW sources is not yet well understood. While the parameterization for b_GW adopted in our analysis above is reasonable, it is important to test the impact on our results of a change in the galaxy bias values for the GW sources. We consider two alternative GW galaxy bias models to the fiducial model of Eqn. <ref>:

b_GW1 = 1.20(1+z)^0.59 + 3,

b_GW2 = 2.0.

These alternatives represent 1) a bias model with the same shape as the fiducial bias model but shifted to a higher value as a conservative upper limit, and 2) a model with a constant (redshift-independent) bias approximately equal to the average of the fiducial values. For model b_GW1, the bias of GW sources is very large, leading to a higher clustering amplitude and tighter constraints on ΛCDM parameters from the combination of LSST and GW compared to the fiducial case. Moreover, in this case, the amplitude of the scale-dependent bias due to f_NL is larger (see Eqn. <ref>), leading to significantly stronger constraints on f_NL than for the fiducial case – for example, in going from the fiducial b_GW to b_GW1, σ(f_NL) improves from 8.46 to 1.56. These results are shown in Table <ref>, columns 5 and 6.

§.§.§ GW scale cuts

While an ℓ_min(GWs) value of 2 is reasonable given that GW detectors survey the full sky, we investigate whether the clustering of GW sources would still provide improvement to constraints from LSST 3×2pt if the analysis could not be performed on the largest scales.
As expected, the loss of the largest scales decreases the constraining power on f_NL, but we find that even with ℓ_min(GWs)=5, GWs still provide a significant improvement (Table <ref>, column 7).

§ CONCLUSIONS

In this work, we have studied the cosmological information that can be extracted from two-point correlation measurements on very large scales, focusing on the clustering of gravitational wave (GW) sources and its combination with two-point measurements from optical galaxy surveys on intermediate to large scales. The advantages of using GW detectors at the largest scales are that their angular selection function is well understood, they are sensitive to events over the full sky, they can extend to high redshifts, and the distance error per event is relatively small; moreover, they are not subject to the kinds of spatially varying observational and astrophysical systematics that afflict optical surveys on large angular scales. The disadvantages of using GW sources are the relatively low number density of sources (and therefore high shot noise) and poor sky localization (and therefore lack of small-scale information) compared to optical surveys. Given these relative advantages and disadvantages, we have explored the potential benefits of combining GW and optical survey measurements in a complementary way. We investigated the problem in the context of ΛCDM, wCDM, and ΛCDM+f_NL and considered the combination of several future GW detector setups with projected LSST Y10 3×2pt measurements, varying a number of assumptions about the analysis. Our main findings are the following:

* In ΛCDM and wCDM, there is no significant information gain (improvement in cosmological parameter constraints) from clustering on very large scales (ℓ<50). However, the LSST+GW combination does enable us to constrain the redshift-dependent bias of the host galaxies of GW sources, a currently largely unknown quantity, to an average ∼6% precision.

* For ΛCDM+f_NL, the addition of very large-scale information, either from LSST 3×2pt or the clustering of GW sources, results in a roughly order of magnitude improvement in the constraint on f_NL, with σ(f_NL)=8.8. For GW to be competitive, the network of detectors must have sensitivity to detect more than 10^5 events per year out to redshifts up to z ∼ 3. We also find that including GW analysis of the range 30<ℓ<90 is relatively unimportant, as information on f_NL is found at even larger scales, and LSST already provides sufficient information on the 50<ℓ<90 range in the baseline analysis.

* Changes to the scale cuts in the LSST 3×2pt analysis, to the parameterization of the GW bias, and to the scale cuts on the GW analysis do not significantly affect the forecast constraints on the ΛCDM parameters. However, we find that a change in the GW bias parameterization can change the projected constraints on f_NL significantly, due to the relationship between f_NL and galaxy bias.

With the upcoming wealth of data expected from a number of optical/NIR galaxy surveys (LSST, Euclid, Roman) and GW experiments (Einstein Telescope, Cosmic Explorer and LIGO Voyager), we will soon enter a regime where cleverly combining different datasets could allow us to tackle some of the systematic effects that are challenging to address in a single survey.
Here we have investigated how using GW sources as LSS tracers can evade the large-scale systematic effects in optical galaxy surveys, but it is likely that these combinations would have other benefits worth studying, just as there have been many applications of combining galaxy and CMB datasets in the past 10 years <cit.>. The combination of galaxy surveys and GW experiments could have a similar potential, opening up a new area of multi-probe cosmology.

§ ACKNOWLEDGEMENTS

We would like to especially thank Aditya Vijaykumar for many useful discussions and help with the gravitational wave observations setup. We also thank Daniel Holz, Jose Maria Ezquiaga Bravo, and Hayden Lee for helpful discussions. We thank François Lanusse for help on jax_cosmo. ELG is supported by DOE grant DE-AC02-07CH11359. DA is supported by NSF grant No. 2108168. JP is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. CC is supported by the Henry Luce Foundation and DOE grant DE-SC0021949. JF is supported in part by the DOE at Fermilab and in part by the Schmidt Futures program.
AM CVn – System Parameters and Gravitational Waves

J. Smak

N. Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, 00-716 Warsaw, Poland
e-mail: [email protected]

System parameters are re-determined: M_1=0.86±0.18 M_⊙, M_2=0.103±0.022 M_⊙, A=1.508±0.100×10^10 cm, and i=69±3^∘. The secondary component is a semi-degenerate helium star losing mass at a rate Ṁ=4.93±1.65×10^-9 M_⊙/yr. The accretion disk is sufficiently hot to avoid thermal instability. The orbital light curve recovered from observations made in 1962 shows a minimum shifted to phase ϕ=0.50, corresponding to O-C=0.0060 d. Together with minima observed in 1992-99 this implies that the orbital period is increasing at a rate dP/dt ≈ 8.5×10^-13, consistent with predictions involving the emission of gravitational waves.

binaries: cataclysmic variables, stars: individual: AM CVn, gravitational waves

§ INTRODUCTION

AM CVn (earlier known as HZ 29; Humason and Zwicky 1947) is a prototype of the ultra-short period, helium cataclysmic variables (Nelemans 2005, Solheim 2010, Kotko et al. 2012). Its light variations were discovered in 1962 (Smak 1967). Their period of only P ≈ 18 minutes was originally interpreted as the orbital period, and Paczyński (1967), in his pioneering paper on gravitational waves and the evolution of close binaries, predicted that "this system could be most easily used to check the existence of gravitational radiation". Regrettably, the light variations of AM CVn turned out to be quite complicated. So complicated that it took many decades and several extensive photometric and spectroscopic studies (Patterson et al. 1993, Harvey et al. 1998, Skillman et al. 1999, Nelemans et al. 2001a, Roelofs et al. 2006 and references therein) before they were fully interpreted. Thanks to those investigations the light variations of AM CVn are now known to be a superposition of three components: the superhumps with P_SH=1051.2 s, the negative superhumps with P_nSH=1011.4 s (both of them with variable amplitude), and variations with the orbital period P_orb=1028.7 s. The superhumps are the dominant component and their period is the main observed period. It may be added that the light variations of AM CVn, when discovered in 1962, were the first – unrecognized at that time(!) – example of superhumps.

System parameters of AM CVn were determined by Roelofs et al. (2006). As the two main constraints they used (1) the amplitude of the radial velocity curve of the primary component, K_1=92±4 km/s, and (2) the mass ratio q=0.18, determined from K_1 and the amplitude A_S=471±11 km/s of the S-wave observed in many lines in the spectrum of AM CVn, by assuming that it represents the velocity of the stream at the point where the hot spot is formed. The orbital inclination i=43±2^∘ was obtained by requiring that the predicted absolute magnitude be equal to the absolute magnitude of AM CVn. Crucial for this step was the use of the HST parallax (π=1.65±0.30 mas; Roelofs et al. 2007a). Shortly afterwards, however, when the much larger Gaia parallax (π=3.351±0.045 mas) became available (Ramsay et al. 2018), it was obvious that those elements must be significantly revised.

The aim of the present paper is twofold: (1) to re-determine the system parameters (Section 2) and (2) to perform the belated(!) test for the existence of gravitational waves using the evidence provided by the orbital light curve recovered from the earliest 1962 photometry (Section 3).

§ SYSTEM PARAMETERS

§.§ The Mass Ratio

Roelofs et al.
(2006), neglecting the evidence from the superhump and negative superhump periods, determined the mass ratio from K_1 and the amplitude of the S-wave A_S by assuming that it represents the velocity of the stream at the point where the hot spot is formed. This assumption, however, is far from being safe. Worth recalling are two well documented cases: WZ Sge and U Gem. The amplitude of the S-wave observed in the spectrum of WZ Sge (Krzemiński and Kraft 1964) is variable between A_S=650 and 850 km/s. In the case of U Gem (Smak 1976) the parameters of the S-wave determined from different spectral lines are different, for example: A_S=340 km/s and ϕ_o=0.126 from the CaII K line versus A_S=404 km/s and ϕ_o=0.069 from the HeI λ4471 line.

We determine the mass ratio from the superhump period excess and the negative superhump period deficiency. This requires an extra comment. There has been a debate whether the well defined and well understood q-ϵ_SH relation existing for the hydrogen-rich cataclysmic variables can be applied to the AM CVn systems (e.g. Pearson 2007). Fortunately we have the evidence from YZ LMi = SDSS J0926+3624 (Copperwheat et al. 2011) showing that the mass ratios obtained from superhumps and from the analysis of eclipses are practically identical. And, in addition to superhumps, we have the negative superhumps, for which a similar q-ϵ_nSH relation is well established.

Using the values of the three periods P_SH, P_nSH and P_orb (Skillman et al. 1999) we obtain ϵ_SH=0.0218 and ϵ_nSH=0.0168. The q-ϵ_SH relation from Kato et al. (2009, Fig.31)

ϵ_SH = 0.16 q + 0.25 q^2

gives q=0.116. For the negative superhumps we have the theoretical relation from Osaki and Kato (2013, Eq.1)

ϵ_nSH/(1-ϵ_nSH) = (3/7) q (1+q)^-1/2 r_d^3/2,

which gives q=0.122. The agreement between the two determinations is near perfect and we adopt q=0.120±0.005.

§.§ The Orbital Inclination

To begin with we may ask what can be learned about the inclination from the orbital light curve (Skillman et al. 1999, Fig.6). Its shape is most likely due to the irradiation of the secondary component, but its amplitude (∼ 0.01 mag.) is too small to provide any meaningful information on inclination.

We follow the approach taken by Roelofs et al. (2006) and determine the orbital inclination by using, as an additional constraint, the observed magnitude of AM CVn. Following the procedures described in Sections 2.3 and 2.4 we calculate system parameters as functions of inclination and then require that the predicted V magnitude (using the Gaia parallax: π=3.351±0.045 mas; Ramsay et al. 2018) be equal to the true magnitude: <V> ≈ 14.1 (cf. Patterson et al. 1992). Results, shown in Fig.1, give i=69±3^∘.

§.§ System Parameters

Using K_1=92±4 km/s, q=0.120±0.005 and i=69±3^∘ we obtain: A_orb=1.508±0.100×10^10 cm, M_1=0.86±0.18 M_⊙ and M_2=0.103±0.022 M_⊙. Furthermore we also get the radius of the secondary component R_2=0.047±0.003 R_⊙ and note that it satisfies the M-R relation for semi-degenerate secondaries (Nelemans et al. 2001b, Roelofs et al. 2007b)

R/R_⊙ = 0.043 (M/M_⊙)^-0.062.

§.§ The Mass Transfer Rate and the Absolute Magnitude

The emission of gravitational waves results in the loss of angular momentum, the loss of mass from the secondary component and its transfer to the primary (cf. Paczyński 1967, Roelofs et al. 2006). The relevant formulae and results are:

dln J/dt = -(32/5) (G^5/3/c^5) (P/2π)^-8/3 M_1 M_2 (M_1+M_2)^-1/3 = -1.015±0.258×10^-15,

dln M_2/dt = [2/(ζ_2+5/3-2q)] dln J/dt = -1.488±0.378×10^-15,

where ζ_2=-0.062 is the exponent in the M-R relation for semi-degenerate secondaries (cf. Eq.3).
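These rates follow directly from the adopted parameters; the short script below (a minimal numerical check with rounded cgs constants, not part of the original analysis) reproduces Eqs. (4)-(5), as well as the mass transfer rate and period derivative quoted below:

```python
import numpy as np

G, c   = 6.674e-8, 2.998e10        # cgs
Msun   = 1.989e33                  # g
yr     = 3.156e7                   # s
M1, M2 = 0.86 * Msun, 0.103 * Msun
q, zeta2, P = 0.120, -0.062, 1028.7   # orbital period in seconds

# Eq. (4): GW-driven angular momentum loss
dlnJ_dt = (-(32.0 / 5.0) * G**(5.0 / 3.0) / c**5 * (P / (2.0 * np.pi))**(-8.0 / 3.0)
           * M1 * M2 * (M1 + M2)**(-1.0 / 3.0))
# Eq. (5): induced mass-loss rate of the secondary
dlnM2_dt = 2.0 / (zeta2 + 5.0 / 3.0 - 2.0 * q) * dlnJ_dt
Mdot = -dlnM2_dt * M2 / Msun * yr     # Msun/yr, transferred to the primary
# Eq. (8): relative period change
dlnP_dt = 3.0 * (1.0 - 2.0 * (1.0 - q) / (zeta2 + 5.0 / 3.0 - 2.0 * q)) * dlnJ_dt

print(f"dlnJ/dt = {dlnJ_dt:.3e} 1/s")     # ~ -1.02e-15, cf. Eq. (4)
print(f"Mdot    = {Mdot:.2e} Msun/yr")    # ~ 4.9e-9, as quoted in the text
print(f"dP/dt   = {dlnP_dt * P:.2e}")     # ~ 9.1e-13, cf. Eqs. (8)-(9)
```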
The corresponding mass transfer rate is then Ṁ=4.93±1.65×10^{-9} M_⊙/yr. The accretion disk is quite hot. Using the standard equation

σ T_e^4 = (3/8π) (G M_1 Ṁ / R^3) [ 1 - (R_1/R)^{1/2} ] ,

we find that at the outer edge of the disk (i.e. at r_d=0.88 r_Roche=0.50) the temperature is log T_e=4.27. This is safely higher than log T_e,crit=4.1, below which the helium accretion disk becomes thermally unstable (cf. Smak 1983). The accretion disk is also quite bright. To calculate its absolute magnitude we integrate fluxes over its two sides with local temperatures determined from Eq. 6 and correct the resulting luminosity for inclination using

L_d(i) = <L_d> [6/(3-u)] (1 - u + u cos i) cos i ,

with limb darkening coefficient u=0.6 (note that this relation is valid for i ≪ 90^∘). The results (see also Subsection 2.2) are: M_V=6.72±0.30 and (with π=3.351±0.045 mas) V=14.09±0.30, which agrees with the true magnitude: <V>≈ 14.1 (cf. Patterson et al. 1992).

§.§ The Orbital Period

The orbital period is increasing (cf. Paczyński 1967) at a rate

d ln P / dt = 3 [ 1 - 2(1-q) / (ζ_2 + 5/3 - 2q) ] d ln J / dt = 8.82±2.24×10^{-16} s^{-1} ,

or

dP/dt = 9.14±2.30×10^{-13} .

§ THE 1962 ORBITAL LIGHT CURVE

We recover the mean orbital light curve from the oldest set of photometric observations of AM CVn made in 1962 (Smak 1967, Tables 2a and 2b) by calculating their phases from the elements given by Skillman et al. (1999, Eq. 1),

Min = HJD 2448742.5610(4) + 0.011906623(3) E ,

and forming normal points. The resulting light curve is shown in Fig. 2. In spite of large scatter its shape is satisfactorily similar to the light curves obtained by Skillman et al. (1999, Fig. 6). The only significant difference is the position of the minimum at phase ϕ_min≈ 0.50, corresponding to (O-C)≈ 0.0060 d. Adding the 1962 minimum to the minima observed by Skillman et al. (1999, Table 3) we determine the new light elements

Min = HJD 2448742.5610(4) + 0.011906621(2) E + A E^2 ,

with

A = 5.06±2.84×10^{-15} ,

giving

dP/dt = 8.50±4.77×10^{-13} .

The observed value of dP/dt agrees – within errors – with its predicted value (Eq. 9). This could be treated as another test on the existence of gravitational waves. Regretfully, such a test is hardly needed today.

Copperwheat, C.M., Marsh, T.R., Littlefair, S.P., Dhillon, V.S., Ramsay, G., Drake, A.J., Gänsicke, B.T., Groot, P.J., Hakala, P., Koester, D., Nelemans, G., Roelofs, G., Southworth, J., Steeghs, D., Tulloch, S. 2011, MNRAS, 410, 1113
Harvey, D.A., Skillman, D.R., Kemp, J., Patterson, J., Vanmunster, T., Fried, R.E., Retter, A. 1998, ApJ, 493, L105
Humason, M., Zwicky, F. 1947, ApJ, 105, 85
Kato, T. et al. 2009, PASJ, 61, S395
Kotko, I., Lasota, J.-P., Dubus, G., Hameury, J.-M. 2012, A&A, 544, A13
Krzemiński, W., Kraft, R.P. 1964, ApJ, 140, 921
Nelemans, G. 2005, ASP Conf. Ser., 330, 27
Nelemans, G., Steeghs, D., Groot, P.J. 2001a, MNRAS, 326, 621
Nelemans, G., Portegies Zwart, S.F., Verbunt, F., Yungelson, L.R. 2001b, A&A, 368, 939
Osaki, Y., Kato, T. 2013, PASJ, 65, 95
Paczyński, B. 1967, Acta Astron., 17, 287
Patterson, J., Sterner, E., Halpern, J.P., Raymond, J.C. 1992, ApJ, 384, 234
Patterson, J., Halpern, J., Shambrook, A. 1993, ApJ, 419, 803
Pearson, K.J. 2007, MNRAS, 379, 183
Ramsay, G., Green, M.J., Marsh, T.R., Kupfer, T., Breedt, E., Korol, V., Groot, P.J., Knigge, C., Nelemans, G., Steeghs, D., Woudt, P., Aungwerojwit, A.
2018, A&A, 620, A141
Roelofs, G.H.A., Groot, P.J., Nelemans, G., Marsh, T.R., Steeghs, D. 2006, MNRAS, 371, 1231
Roelofs, G.H.A., Groot, P.J., Benedict, G.F., McArthur, B.E., Steeghs, D., Morales-Rueda, L., Marsh, T.R., Nelemans, G. 2007a, ApJ, 666, 1174
Roelofs, G.H.A., Groot, P.J., Benedict, G.F., McArthur, B.E., Steeghs, D., Morales-Rueda, L., Marsh, T.R., Nelemans, G. 2007b, ASP Conf. Ser., 372, 437
Skillman, D.R., Patterson, J., Kemp, J., Harvey, D.A., Fried, R.E., Retter, A., Lipkin, Y., Vanmunster, T. 1999, PASP, 111, 1281
Smak, J. 1967, Acta Astron., 17, 256
Smak, J. 1976, Acta Astron., 26, 277
Smak, J. 1983, Acta Astron., 33, 333
Solheim, J.-E. 2010, PASP, 122, 1133
Unsupervised Learning of Phylogenetic Trees via Split-Weight Embedding

Yibo Kong, George P. Tiley, Claudia Solis-Lemus

January 14, 2024

===================================================================

Unsupervised learning has become a staple in classical machine learning, successfully identifying clustering patterns in data across a broad range of domain applications. Surprisingly, despite its accuracy and elegant simplicity, unsupervised learning has not been sufficiently exploited in the realm of phylogenetic tree inference. The main reason for the delay in adoption of unsupervised learning in phylogenetics is the lack of a meaningful, yet simple, way of embedding phylogenetic trees into a vector space. Here, we propose the simple yet powerful split-weight embedding which allows us to fit standard clustering algorithms to the space of phylogenetic trees. We show that our split-weight embedded clustering is able to recover meaningful evolutionary relationships in simulated and real (Adansonia baobabs) data.

§ INTRODUCTION

The Tree of Life is a massive graphical structure which represents the evolutionary process from single-cell organisms into the immense biodiversity of living species in present time. Estimating the Tree of Life would not only represent the greatest accomplishment in evolutionary biology and systematics, but it would also allow us to fully understand the development and evolution of important biological traits in nature, in particular, those related to resilience to extinction when exposed to environmental threats such as climate change. Therefore, the development of statistical and machine-learning theory to reconstruct the Tree of Life, especially theory scalable to big data, is paramount in evolutionary biology, systematics, and conservation efforts against mass extinctions.

Graphical structures that represent evolutionary processes are denoted phylogenetic trees. A phylogenetic tree is a binary tree whose internal nodes represent ancestral species that over time differentiate into two separate species giving rise to its two children nodes (see Figure <ref> left). The evolutionary process is then depicted by this bifurcating tree from the root (the origin of life) to the external nodes of the tree (also denoted leaves) which represent the living organisms today. Mathematically, a rooted phylogenetic tree T on taxon set X is a connected directed acyclic graph with vertices V = {r}∪ V_L ∪ V_T, edges E and a bijective leaf-labeling function f : V_L → X such that the root r has indegree 0 and outdegree 2; any leaf v ∈ V_L has indegree 1 and outdegree 0, and any internal node v ∈ V_T has indegree 1 and outdegree 2. An unrooted tree results from the removal of the root node r and the merging of its two incident edges (taxon 4 is the outgroup in Figure <ref> left). Traditionally, phylogenetic trees are drawn without nodes (Figure <ref> center) given that only the bifurcating pattern is necessary to understand the evolutionary process. The specific bifurcating pattern (without edge weights) is denoted the tree topology. Edges in the tree have weight w_e ∈ (0,∞) that can represent different units, evolutionary time or expected substitutions per site being the most common.
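To make the definition concrete, the sketch below (illustrative only) writes the 4-taxon example of Figure <ref> (left) as a Newick string and reads it with the dendropy Python library. The Newick string is an assumption consistent with the edge weights quoted in Section 2; how the total weight on the path to taxon 4 is divided between the two root edges (here 4.25 + 1.79) is immaterial once the tree is unrooted.

```python
# Hypothetical Newick encoding of the rooted 4-taxon tree of Figure 1 (left)
newick = "(((1:2.06,2:2.06):0.39,3:2.46):4.25,4:1.79);"

import dendropy  # a standard phylogenetics library, assumed installed

tree = dendropy.Tree.get(data=newick, schema="newick")
for leaf in tree.leaf_node_iter():               # the leaves V_L, labeled by f
    print(leaf.taxon.label, leaf.edge.length)    # pendant edge weights w_e
```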
One of the main challenges when inferring phylogenetic trees is the fact that different genes in the data can have different evolutionary histories due to biological processes such as introgression, hybridization or horizontal gene transfer <cit.>. An example is depicted in Figure <ref> (right), which has one gene flow event drawn as a green arrow. This gene flow event represents the biological scenario in which some genes in taxon 2 get transferred from the lineage of taxon 3, and thus, when reconstructing the evolutionary history of this group of four taxa, some genes will depict the phylogenetic tree that clusters taxa 1 and 2 in a clade (Figure <ref> left) and some genes will depict the phylogenetic tree that clusters taxa 2 and 3 in a clade (Figure <ref> center). Evolutionary biologists are interested in inferring the correct evolutionary history of the taxon group accounting for the different evolutionary histories of individual genes. In the absence of gene tree estimation error, the identification of the main trees that represent the evolution of certain gene groups would be trivial. However, estimation error combined with unaccounted biological processes such as incomplete lineage sorting or gene duplications complicates the identification of the main gene evolutionary histories in the data. Thus, novel and scalable tools to accurately classify gene trees into clusters are needed.

Surprisingly, unsupervised learning methods have not been explored in phylogenetics. The only implementations of unsupervised learning in phylogenetics aim to cluster DNA sequences <cit.> or species <cit.>, not trees. The main challenge to implement unsupervised learning algorithms for phylogenetic trees is the discreteness of tree space. While there exists a proper distance function (the Robinson-Foulds distance <cit.>) in the space of phylogenetic trees that can be used in clustering algorithms <cit.>, the new trend in machine learning is to embed data points into an intelligently selected vector space <cit.>. Traditionally, there are multiple natural embeddings designed for the space of phylogenetic trees. For example, BHV space <cit.> is a continuous space for rooted n-taxon phylogenetic trees in which each tree is represented as a vector of edge weights in an orthant defined by its topology (bifurcating pattern). While the space is equipped with the geodesic distance, it is not a fully Euclidean space, which complicates the embedding, the computation of distances, and ultimately, clustering. Other embeddings, such as the tropical geometric space <cit.> or the hyperbolic embedding <cit.>, are mathematically sophisticated, yet unnecessarily complicated for standard clustering tasks.

Here, we define a simpler, yet equally powerful, embedding, which we denote the split-weight embedding, that relies on the edge weights of taxon splits to embed a phylogenetic tree into a fully Euclidean vector space. We prove that this embedding preserves phylogenetic similarity and allows us to cluster samples of gene trees into biologically meaningful groups. We implement three standard unsupervised algorithms (K-means, Gaussian mixture model and hierarchical) on the embedding space, and test the performance of our algorithm with simulated and real gene trees. Last, we present a novel open-source Julia package, publicly available on GitHub, which is easy to use for maximum outreach among the evolutionary biology community.

§ SPLIT-WEIGHT EMBEDDING OF PHYLOGENETIC TREES

Bipartitions.
A bipartition (or split) of the whole set of taxa (X) into two groups X_1 and X_2 is represented as X_1 | X_2 such that X_1 ∩ X_2 = ∅ and X_1 ∪ X_2 = X. Let n=|X| denote the total number of taxa in the data. The number of bipartitions is given by 2^n-1 - 1. For example, for the case of n=4 taxa (Figure <ref>), there are 7 splits (Table <ref>).

Split-Edge Equivalence. Let ℬ_X be the set of all bipartitions for taxon set X (with cardinality 2^|X|-1 - 1 as mentioned). Let T be an unrooted phylogenetic tree on taxon set X. For every edge e ∈ E in T, there exists a bipartition b_e ∈ℬ_X such that the removal of e from the tree T results in the two subsets of taxa defined by b_e. For example, the edge with weight 0.39 in Figure <ref> (left) is an internal edge which, if removed from the tree, would split the taxa into two groups X_1={1,2} and X_2={3,4}, and thus, this edge is uniquely mapped to the bipartition b_e=12|34. Thus, there is a mapping function f_b: E →ℬ_X such that f_b(e) is the bipartition defined by e. We note that not every bipartition is defined by an edge in T. For example, the split 13|24 is not defined by any edge in the tree in Figure <ref> (left), and thus, the range of the mapping function satisfies f_b(E) ⊂ℬ_X.

Split Representation for Trees. Let T be an unrooted n-taxon phylogenetic tree with taxon set X. For simplicity, instead of f_b(E), we denote by ℬ_T the bipartitions defined by T as the splits {f_b(e)} defined for every edge e ∈ E in T. For example, the unrooted version of the phylogenetic tree in Figure <ref> (left) has five edges that define the bipartitions ℬ_T = {1|234, 2|134, 12|34, 3|124, 4|123} in post-order traversal. Note that for the unrooted version of this tree, the two children edges of the root are merged into one edge with weight 6.04=1.79+4.25. In addition, we define a mapping function f_w: ℬ_T →(0,∞) that assigns a numerical value to each bipartition, corresponding to the edge weight of the uniquely mapped edge in the tree. For example, f_w(12|34) = 0.39.

Split-Weight Embedding. Let T be an n-taxon unrooted phylogenetic tree on taxon set X. Let ℬ_T be the bipartitions defined by T, and let ℬ_X be the set of all bipartitions of X. We define the split-weight embedding of T as a (2^n-1 - 1)–dimensional numerical vector y(T) ∈[0,∞)^2^n-1 - 1 such that the ith entry of y(T) corresponds to the ith bipartition (b_i) in ℬ_X, with entry value given by

y(T)_i = f_w(f_b(e)) if b_i = f_b(e) for some edge e ∈ E, and y(T)_i = 0 otherwise.

That is, if there is an edge e ∈ E in T such that f_b(e) = b_i, then the ith entry of y(T) is given by the corresponding edge weight f_w(f_b(e)). For example, for 4 taxa, there are 7 bipartitions; five of them correspond to edges in the tree in Figure <ref> left (Table <ref>) and for those, their value in the split-weight embedding vector corresponds to the edge weights in T. In contrast, the two bipartitions that do not belong to ℬ_T (13|24 and 14|23) have a value of 0 in the split-weight embedding vector, as in <cit.>. Thus, the split-weight embedding for the unrooted version of the phylogenetic tree T in Figure <ref> (left) is the numerical vector y(T)=(2.06, 2.06, 2.46, 6.04, 0.39, 0, 0).

§ CLUSTERING ALGORITHMS

For a sample of n-taxon unrooted phylogenetic trees {T_1,...,T_N}, we first embed them into the split-weight numerical vectors {y(T_1),...,y(T_N)}. Note that all vectors have the same dimension as all the trees share the same taxon set (X). The input matrix M ∈ [0,∞)^(2^n-1 - 1) × N for the unsupervised learning algorithms is formed by concatenating the embedded vectors as columns.
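The released implementation is the Julia package described in the Data and Code Availability section; the pure-Python sketch below is only an illustration of the embedding, reproducing the 4-taxon example above (up to the arbitrary but fixed ordering convention for bipartitions, so the printed vector is a permutation of y(T)).

```python
from itertools import combinations

taxa = ("1", "2", "3", "4")

def all_bipartitions(taxa):
    """Enumerate the 2^(n-1) - 1 bipartitions of the taxon set, each keyed
    by its side that excludes the last taxon (a canonical representative)."""
    return [frozenset(side)
            for k in range(1, len(taxa))
            for side in combinations(taxa[:-1], k)]

# Splits of the unrooted example tree with their edge weights f_w(f_b(e));
# splits absent from the tree (13|24 and 14|23) get weight 0 below.
tree_splits = {
    frozenset({"1"}): 2.06,            # 1|234
    frozenset({"2"}): 2.06,            # 2|134
    frozenset({"3"}): 2.46,            # 3|124
    frozenset({"1", "2", "3"}): 6.04,  # 4|123 (root edges merged: 1.79+4.25)
    frozenset({"1", "2"}): 0.39,       # 12|34
}

y = [tree_splits.get(b, 0.0) for b in all_bipartitions(taxa)]
print(y)   # [2.06, 2.06, 2.46, 0.39, 0.0, 0.0, 6.04]
```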
Then, we implement three different types of unsupervised learning methods, which are modified to use the input matrix and directly output the predicted labels for each tree in the sample. Figure <ref> shows a graphical representation of our unsupervised learning strategy.

§.§ K-means

The algorithm <cit.> is a partitioning method that divides data into K distinct, non-overlapping clusters based on their characteristics, with each cluster represented by the mean (centroid) of its members. It starts by selecting K initial centroids and iterates through two main steps: assigning data points to the nearest centroids, and then updating the position of the centroids to the mean of their assigned points. The algorithm minimizes the squared Euclidean distances within each cluster and terminates when the positions of the centroids no longer move. Instead of traditional K-means, we use Yinyang K-means <cit.>, which requires less runtime and memory on large datasets. Traditional K-means calculates the distance from all data points to the centroids in each iteration; Yinyang K-means instead uses triangle inequalities to construct and maintain upper and lower bounds on the distances of data points from the centroids, with global and local filtering to minimize unneeded calculations. The parameters of this algorithm include the number of clusters, the initialization method, and the seed. To select the initial centroids efficiently, we use K-means++ <cit.> as the default seeding algorithm. We also employ a repeating strategy <cit.> for large datasets to avoid the instability of the standard K-means clustering algorithm due to the initial centroid selection. This instability can easily result in convergence to a local minimum, even with the help of a better initial point selection algorithm like K-means++. Therefore, we use a simple strategy of repeating the K-means clustering at least 10 times and retaining the result with the highest accuracy when computing the accuracy for each large group in our simulations.

§.§ Gaussian mixture model (GMM)

The algorithm <cit.> is a model-based probabilistic method that assumes data points are generated from a mixture of several Gaussian distributions with unknown parameters. It uses the Expectation-Maximization (EM) algorithm to update the parameters iteratively in order to maximize the log-likelihood of the data until convergence. It is similar to, yet more flexible than, K-means as it allows for mixed membership. The parameters of this model include the number of clusters, the initialization method, and the covariance type. We use K-means as the method for selecting initial centers and choose a diagonal covariance as the covariance type. We also employ repeated K-means <cit.> to find the best starting centers.

§.§ Hierarchical clustering

The algorithm is a hierarchical method that creates a clustering tree called a dendrogram. Each leaf on the tree is a data point, and branches represent the joining of clusters. We can 'cut' the tree at different heights to obtain a different number of clusters. This method does not require the number of clusters to be specified in advance and can be either agglomerative (bottom-up) or divisive (top-down). The parameters of this model are the linkage method and the number of clusters. As default, we use Ward's method <cit.> as the linkage method for hierarchical clustering.
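To make the pipeline concrete, the sketch below clusters a matrix of embedded trees using scikit-learn stand-ins for the three algorithms (plain K-means with K-means++ seeding and repeated runs in place of Yinyang K-means) and scores the result with the Hungarian-matched accuracy used in the simulations below. The placeholder data and names are illustrative, not the API of our package.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

def cluster_accuracy(true, pred):
    """Hungarian-matched accuracy: the label matching with largest overlap."""
    k = max(true.max(), pred.max()) + 1
    overlap = np.zeros((k, k), dtype=int)
    for t, p in zip(true, pred):
        overlap[t, p] += 1
    rows, cols = linear_sum_assignment(-overlap)     # maximize total overlap
    return overlap[rows, cols].sum() / len(true)

# Y: (2N trees) x (2^(n-1) - 1) matrix of split-weight embeddings (placeholder)
rng = np.random.default_rng(0)
Y = rng.random((100, 7))
true = np.repeat([0, 1], 50)          # N trees from each of two source trees

X = StandardScaler().fit_transform(Y)                # standardize, as above

# K-means with K-means++ seeding, repeated and keeping the best accuracy
best = max((KMeans(n_clusters=2, init="k-means++", n_init=10,
                   random_state=s).fit_predict(X) for s in range(10)),
           key=lambda p: cluster_accuracy(true, p))
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit_predict(X)
ward = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

for name, pred in [("K-means", best), ("GMM", gmm), ("Ward", ward)]:
    print(name, cluster_accuracy(true, np.asarray(pred)))
```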
§ SIMULATION STUDY

We perform simulation tests of K-means, GMM, and hierarchical clustering with K = 2 for samples of N trees, each with n taxa, where we vary N and n as described below. For each combination of n and N, we generate 100 datasets to account for performance variability. Tree variability is simulated with the Julia package <cit.>, which takes a species tree as input and simulates gene trees under the coalescent model <cit.>, generating variability through the random sorting of lineages within populations. In this sense, we expect simulated trees to cluster around the chosen species tree. The level of expected variability in the sample of gene trees is governed by the edge weights in the species tree (longer branches resulting in less tree variability) <cit.>. All trees are embedded into the split-weight space, and the vectors are standardized prior to clustering. Visualization of clusters is performed via PCA.

To quantify the performance of the unsupervised learning models, we use the Hungarian algorithm <cit.> to find the label assignment with the largest overlap between predicted and true labels. Under this assignment, a tree whose predicted label differs from its true label is not classified into the correct group. Therefore, the accuracy is defined as (2N - t)/2N, where t is the number of trees with different predicted and true labels and 2N is the total number of trees in the two clusters under comparison.

§.§ Clustering of trees with different topologies

For n=4 taxa, there are 15 possible bifurcating patterns (tree topologies). For each of the 15 4-taxon species trees, we simulated samples of N=50, 100, 500, 1000, 5000 trees with <cit.> for two choices of edge weights in the species trees: 1) all edge weights set to 1.0 coalescent unit, or 2) each edge weight randomly selected from a uniform distribution (0.5,2) coalescent units. The rationale is that 1.0 coalescent unit generates medium levels of gene tree discordance <cit.>, and thus, we expect the clustering algorithms to perform accurately, whereas shorter branches (e.g. 0.5 coalescent units) produce more tree heterogeneity, further complicating clustering.

Figure <ref> (left) shows a heatmap of the prediction accuracy of the different clustering algorithms. Each cell in the heatmap represents a comparison between the row and column tree (only rows are labeled, but the order of the columns is the same). Trees are ordered depending on their unrooted topology. For example, the first 5 rows (and columns) correspond to the 5 different rooted versions of the unrooted tree corresponding to the split 12|34; the next 5 rows (and columns) to the split 13|24, and the last 5 rows (and columns) to the split 14|23. The darker the color, the more accurate the classification of the two trees. We can see a diagonal block pattern in the heatmaps, which illustrates the difficulty of separating two clusters defined by two rooted trees with the same unrooted representation. The heatmaps are arranged by clustering algorithm (three columns: K-means, GMM and hierarchical) and number of trees in the sample (N=50 and N=5000). Similar plots for N=100,500,1000 can be found in the Appendix. We can see that the accuracy of K-means is robust to sample size, while the accuracy of GMM is higher for larger numbers of trees (N=5000). Hierarchical clustering shows the worst performance of the three methods. Figure <ref> (right) shows a different type of heatmap in which we summarize the performance of all three methods.
Each cell in the heatmap can have four values: 0 if none of the three methods has a classification accuracy above 80%; 1 if one of the three methods has a classification accuracy above 80%; 2 if two of the three methods have a classification accuracy above 80%, and 3 if all three methods have a classification accuracy above 80%. The two columns correspond to the strategy to set edge weights (all edge weights equal to 1.0 in the left column and edge weights randomly selected in (0.5,2) in the right column). Unlike standard phylogenetic methods, which tend to perform better when edges are 1.0 coalescent unit long, the clustering methods tested here perform better with variable edge weights. Furthermore, we can see that with larger sample sizes (N=5000), most methods are able to distinguish samples originating from trees that do not have the same unrooted topology (off-diagonal blocks).

For the case of more than 4 taxa, we cannot list all of the tree topologies (10,395 total unrooted trees for 8 taxa and 213,458,046,676,875 total unrooted trees for 16 taxa), so we randomly generate 15 8-taxon trees and 15 16-taxon trees using the simulating algorithm in the R package <cit.> under a birth-death model. We focus on the case of edge weights randomly chosen uniformly in the interval (0.5,2). Figures <ref> and <ref> in the Appendix show that all methods cluster samples of trees accurately. As classification appears to be easy for randomly selected trees, we design another simulation study to compare the accuracy on trees that are more similar to each other (next section).

§.§ Clustering of trees in NNI-neighborhoods

The Nearest Neighbor Interchange (NNI) move is a type of phylogenetic tree rearrangement that selects an internal branch of a given tree and then swaps adjacent subtrees across that branch. It generates alternative tree topologies that are “nearest neighbours” of the original tree, differing only in the local arrangement of the subtrees connected by the chosen branch. For a given tree, we can define an NNI-neighborhood as all the trees that are one NNI move away from the selected tree (or, more generally, a fixed number of NNI moves away). In this section, we test whether the clustering algorithms are accurate enough to distinguish trees within the same NNI-neighborhood.

Using the simulating algorithm in the R package <cit.> under a birth-death model, we randomly generate one 8-taxon tree and one 16-taxon tree, and then we perform 1, 2, 3, 4 and 5 NNI moves on each tree to produce 10 new trees (5 with 8 taxa and 5 with 16 taxa). The NNI function for phylogenetic trees is implemented in <cit.>. The edge weights are randomly selected from a uniform distribution (0.5,2) coalescent units. For each of the trees, we simulate samples of N=50, 100, 500, 1000, 5000 trees with <cit.>.

Figure <ref> shows the classification accuracy of trees with 8 and 16 taxa and their corresponding neighbor trees obtained by performing 1, 2, 3, 4 or 5 NNI moves on the original tree. Despite the similarity of the trees under comparison, the methods are able to accurately classify clusters of trees originating from two similar trees. This implies that the split-weight embedding is able to preserve the necessary signal to classify phylogenetic trees even for closely related clusters. Figure <ref> in the Appendix shows the same figure for N=100,500,1000 trees.
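Stated in split terms, an internal edge separates the leaves into four subtrees A, B | C, D, and its two NNI rearrangements are A, C | B, D and A, D | B, C. The toy sketch below enumerates the two neighbors of the internal split of a 4-taxon tree; it is illustrative only, since our experiments use the NNI function of the cited R package.

```python
def nni_neighbors(A, B, C, D):
    """The two NNI rearrangements of an internal edge grouping (A,B)|(C,D)."""
    return [((A, C), (B, D)), ((A, D), (B, C))]

# For a quartet with internal split 12|34, the four subtrees are single taxa:
for left, right in nni_neighbors("1", "2", "3", "4"):
    print("".join(left) + "|" + "".join(right))   # prints 13|24 and 14|23
```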
§.§ Clustering of trees with the same topology, but different edge weights

To test the performance of the clustering algorithms in classifying trees that have the same topology but different edge weights, we simulate trees under the same species tree topology with six different sets of edge weights: 1) all edge weights equal to 1 (denoted org in the figures); 2) all edge weights equal to 1.5 (denoted inc in the figures); 3) randomly selected edge weights uniformly in (1, 2) (denoted inc_r in the figures); 4) all edge weights equal to 0.5 (denoted dec in the figures); 5) randomly selected edge weights uniformly in (0, 1) (denoted dec_r in the figures), and 6) randomly selected edge weights uniformly in (0, 2) (denoted all_r in the figures). We only focus on the 4-taxon tree topologies for these tests. For each of the trees, we simulated samples of N=50, 100, 500, 1000, 5000 trees with <cit.>.

While we showed that GMM can accurately identify tree clusters defined by different topologies (Figure <ref>), it appears that this algorithm does not have enough sensitivity to identify clusters originating from the same tree topology (Figure <ref>). K-means, on the other hand, is able to identify such clusters as long as the edge weights are sufficiently different (dec vs inc, for example). Figures for other tree topologies are in the Appendix, with similar conclusions.

§ RETICULATE EVOLUTION IN BAOBABS

In-frame codon alignments of baobab target-enrichment data <cit.> are used to estimate gene trees under maximum likelihood (ML) <cit.> with v.2.1.3 and default settings <cit.>. The ML analysis treats alignments as nucleotide data, and the best model is determined by <cit.>, which uses the Bayesian Information Criterion for model selection. The data then consist of 371 estimated trees in 8 species of Adansonia: A. digitata (continental Africa), A. gregorii (Australia), A. grandidieri (Madagascar), A. suarezensis (Madagascar), A. madagascariensis (Madagascar), A. perrieri (Madagascar), A. za (Madagascar), and A. rubrostipa (Madagascar). The data also contain an outgroup, Scleronema micranthum, so in total, there are 9 taxa. We remove 4 trees that only had 8 taxa, and 5 outlier trees with pathologically long edges, so that the final dataset contains 363 trees, which we embed in the split-weight space. We standardize the resulting matrix as in the simulation study, and cluster the vectors using the K-means algorithm (K=2). After clustering, we use Densitree <cit.> to identify the consensus tree of each cluster via the root canal method.

The two resulting clusters are not balanced: 341 trees in cluster 1 and 22 trees in cluster 2 (Figure <ref> in the Appendix). This is expected given the evolutionary history of the baobab group (Figure <ref> left). The original publication <cit.> identified one reticulation event (blue arrow) representing 11.8% gene flow. This means that we expect 11.8% of the genes to follow the blue arrow back to the root in their evolutionary history, and thus, most of the genes (88.2%) will have a tree that puts the clade ((A.mad, A.ped), A.zaa) sister to (A.gra, A.sua) (Figure <ref> center; clade highlighted in blue), whereas a few genes (11.8%) will place ((A.mad, A.ped), A.zaa) sister to A.rub (Figure <ref> right; clade highlighted in green).
The accurate identification of the consensus trees for each cluster, in spite of estimation error and other biological processes in addition to reticulation, is an important result for phylogenetic inference. Estimating a phylogenetic network with 9 taxa (tree plus gene flow event in Figure <ref> left) could take up to two days of compute time depending on the method used <cit.>. Clustering the input trees, building the consensus trees, and reconstructing the network from the consensus trees takes no more than a few minutes. Our work shows the promise of machine learning (unsupervised learning specifically) to aid in the estimation of phylogenetic trees and networks in a scalable manner.

§ DISCUSSION

Here, we present what is, to our knowledge, the first implementation of unsupervised learning in the space of phylogenetic trees via the novel and powerful split-weight embedding. Via extensive simulations, we show that the split-weight embedding is able to capture meaningful evolutionary relationships while keeping the simplicity of a standard Euclidean space. Our implementation is able to cluster trees with different topologies, and even trees with the same topology but different edge weights. As usual in machine learning applications, the larger the sample size (number of trees), the more accurately the different clusters are recovered. On average, K-means was the preferred choice of algorithm, as it showed robust performance across sample sizes and numbers of taxa; yet for large trees (8 or 16 taxa), hierarchical clustering outperforms K-means in terms of running efficiency and accuracy. For the case of 4 taxa, GMM outperforms K-means when the sample size increases.

The bottleneck of our implementation is the curse of dimensionality. In its current version, we do not perform dimension reduction except for visualization purposes. The dimension of the split-weight embedded vector is given by the number of bipartitions, which is 2^n-1 - 1 for n taxa. In addition, the embedded vector is highly sparse. For a tree of n taxa, there are 2n-3 edges, and thus, only 2n-3 entries of the embedded vector will be different from zero. So, for example, for n=8 taxa, the embedded vector is 127-dimensional with 13 non-zero entries; for n=16 taxa, the embedded vector is 32767-dimensional with 29 non-zero entries. The field of phylogenetics routinely deals with datasets of hundreds or thousands of taxa. Future work will involve the study of dimension reduction techniques, as well as compression approaches such as autoencoder models, in order to improve the scalability and stability of our algorithms.

Future work will also involve the comparison of different tree embeddings in terms of evolutionary signal. Here, we tested the embedding's ability to accurately cluster trees into meaningful groups, yet the embedded vectors could be used to calculate pairwise taxon distances and build distance-based trees such as neighbor-joining. Different embeddings could retain better phylogenetic signal, and thus, could be preferred when the goal is to reconstruct a phylogenetic tree accounting for gene tree discordance.

Finally, here we utilize Densitree <cit.> to obtain a representation of the consensus tree per cluster. We can, however, explore similar ideas to those in BHV space regarding the computation of Fréchet sample means and Fréchet sample variances <cit.>, which could set the foundation of classical statistical theory on the split-weight embedding space.
§ DATA AND CODE AVAILABILITY

The baobab dataset was made publicly available by the original publication <cit.> and can be accessed through the Dryad Digital Repository <http://doi.org/10.5061/dryad.mf1pp3r>. The split-weight embedding and unsupervised learning algorithms are implemented in the open-source, publicly available Julia package in the GitHub repository <https://github.com/solislemuslab/PhyloClustering.jl>. All the simulation and real data scripts for our work are available in the public GitHub repository <https://github.com/YiboK/PhyloClustering-scripts>.

§ ACKNOWLEDGEMENTS

This work was supported by the National Science Foundation [DEB-2144367 to CSL]. The authors thank Marianne Björner and Reed Nelson for help with setting up tests, and Zhaoxing Wu for providing the testing data.

§ SUPPLEMENTARY FIGURES
Blind Image Quality Assessment (BIQA) is essential for automatically evaluating the perceptual quality of visual signals without access to the references. In this survey, we provide a comprehensive analysis and discussion of recent developments in the field of BIQA. We have covered various aspects, including hand-crafted BIQAs that focus on distortion-specific and general-purpose methods, as well as deep-learned BIQAs that employ supervised and unsupervised learning techniques. Additionally, we have explored multimodal quality assessment methods that consider interactions between visual and audio modalities, as well as visual and text modalities. Finally, we have offered insights into representative BIQA databases, including both synthetic and authentic distortions. We believe this survey provides valuable insight into the latest developments and emerging trends for the visual quality community.

Blind image quality assessment (BIQA), hand-crafted BIQA, deep-learned BIQA, unimodal BIQA, multimodal BIQA, BIQA database.

Blind Image Quality Assessment: A Brief Survey

Miaohui Wang, Senior Member, IEEE

M. Wang is with the State Key Laboratory of Radio Frequency Heterogeneous Integration and the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China. E-mail: [email protected].

January 14, 2024

===================================================================

§ INTRODUCTION

Blind image quality assessment (BIQA) aims to automatically and accurately estimate objective visual quality scores, thereby overcoming the challenges of subjective experiments, such as being time-consuming, unstable, and non-automated <cit.>. It plays a significant role in monitoring the quality of industrial products <cit.>. In the BIQA task, the human visual system (HVS) is the final receiver of visual signals <cit.>, but human visual perception is essentially the joint action of multiple sensory inputs <cit.>. However, existing hand-crafted and deep-learned BIQA methods rarely consider multimodal information <cit.>, and their ability to measure complex image quality is limited. As a result, exploring how to utilize multimodal learning to enhance the accuracy of visual quality assessment is a promising research direction, as illustrated in Fig. <ref>.

When scoring the quality of visual signals, humans can perceive multiple sensory inputs (e.g., sight, hearing) simultaneously <cit.>. Through learned experience, our brains can easily make connections between data from different modalities and further construct a comprehensive representation of visual characteristics <cit.>. For example, when the image modality is influenced by various artifacts, other auxiliary modalities, such as audio-based ambient information <cit.> and text-based visual understanding <cit.>, are expected to provide additional clues for describing quality. Consequently, BIQA aims to establish a visual indicator that mimics the manner of human visual perception, and it is expected to learn better quality descriptors to represent visual perception.

§.§ Related BIQA Surveys

Objective image quality assessment (IQA) <cit.> has been widely applied to visual communication systems. This is due to the degradation introduced in visual signal acquisition, compression, processing, transmission, and display.
Objective IQA is favored for its efficiency, and numerous studies have explored subjective and objective visual quality assessment. Traditionally, IQA can be categorized into three types: full-reference, reduced-reference, and no-reference. No-reference IQA, also known as BIQA, specifically refers to the evaluation of image quality without access to the original reference, which is a challenging and important task in real-world scenarios <cit.>. It is worth noting that Zhai et al. <cit.> reviewed recent developments in full-reference IQA methods, while Dost et al. <cit.> provided the latest survey on reduced-reference IQA methods. In this section, we mainly review the existing surveys on BIQA as follows.

In 2015, Manap et al. <cit.> reviewed BIQA from the perspective of non-distortion-specific cases. This survey categorized existing approaches into natural scene statistics-based and learning-based BIQAs. Additionally, it discussed the performance, limitations, and research trends. In 2017, Xu et al. <cit.> presented a brief review of BIQA, with a focus on general-purpose algorithms. They covered the fundamental developments in BIQA, categorizing them into distortion-specific and general-purpose categories. They also discussed recent progress in feature extraction and quality prediction. Moreover, they conducted comparisons on several benchmark databases and highlighted the ongoing BIQA challenges. In 2019, Yang et al. <cit.> provided a survey of BIQA methods, focusing on recent developments in deep neural networks (DNNs). Performance comparisons on both synthetic and authentic databases (e.g., LIVE, TID2013, CSIQ, LIVE multiply distorted, LIVE challenge) offered valuable insights into the strengths and limitations of various DNN-based BIQAs. This survey summarized the evolving landscape of DNN-based BIQA methods, demonstrated the intrinsic relationships among different BIQAs, and discussed the challenges and directions for future research. In 2022, Stepien et al. <cit.> reviewed recent developments in BIQA methods specific to magnetic resonance imaging (MRI). They discussed the common distortions found in MR images, categorized popular BIQA methods, and outlined existing approaches for describing MRI images and developing quality prediction models. Evaluation protocols and benchmark databases were also discussed. Despite the limited number of studies focusing on MR image assessment, this work highlighted diverse approaches, with a particular focus on the increasing importance and main challenges of deep learning architectures.

§.§ Necessity of New BIQA Surveys

BIQA has attracted significant attention due to the increasing demand for high-quality user experience <cit.> across various domains, including broadcasting, remote education, game streaming, social media, and more. However, existing BIQA surveys face challenges in thoroughly exploring image quality models. This suggests the urgent need for a comprehensive survey that can effectively identify emerging trends and challenges in the rapid evolution of BIQA technologies.

Firstly, one of the main reasons for a new BIQA survey is to adapt to rapidly developing imaging technologies. With advances in camera sensors and display technologies, the content of digital images is becoming more diverse and complex (e.g., screen content, user-generated content, 360° visual content, etc).
Therefore, traditional BIQA models that rely on simplistic features or assumptions may struggle to capture the characteristics of image quality, leading to inaccurate assessments. A comprehensive survey can help to reveal the associated challenges for BIQAs and the emerging trends in imaging technologies.

Secondly, diverse applications have also increased the demand for quality assessment of application-specific content, making it important for BIQA to measure the perceptual quality of distortion-specific images without explicit knowledge of the reference images. However, the performance of distortion-specific BIQA models under real-world scenarios, such as low-light conditions, immersive display devices, and visual-audio preferences, remains a concern. Conducting a new BIQA survey can update the overview of state-of-the-art methods and their ability to deliver accurate quality assessments.

Thirdly, image processing has undergone significant changes due to the proliferation of deep learning tools, which have demonstrated remarkable capabilities in learning intricate quality features. By leveraging large-scale datasets and complex network architectures, deep-learned BIQA models can effectively capture subtle visual cues and patterns that are indicative of image quality. Moreover, deep learning paradigms, such as meta learning, transfer learning, and multimodal learning, have been employed to represent different types of distortions and diverse image contents in BIQA. Despite these advancements, BIQA still faces learning-based challenges, including domain adaptation, robustness to various distortions, and generalization to diverse image datasets. Therefore, a new survey summarizing the latest deep learning methods, including meta-learning, transfer learning, and CNN-based approaches, in the context of no-reference scenarios is highly warranted. Such a survey can provide valuable insights into the strengths and limitations of existing deep learning-based BIQA models.

To sum up, there is a clear need for a new BIQA survey due to the rapid evolution of imaging technologies, the widespread use of visual applications, and the increasing availability of intelligent tools. Conducting a comprehensive survey that reviews the latest research findings, identifies emerging challenges, and inspires future directions in BIQA methods will be beneficial to researchers, practitioners, and industry professionals in the field of visual quality assessment.

§.§ Scope of This Survey

In this survey, we focus on recent developments in BIQA, and provide a brief analysis and discussion of various aspects, covering hand-crafted BIQA methods (e.g., distortion-specific and general-purpose), deep-learned BIQA methods (e.g., supervised learning-based and unsupervised learning-based), multimodal quality assessment methods (e.g., visual-audio and visual-text), and representative databases (e.g., synthetic and authentic distortion).

§ HAND-CRAFTED BIQA METHODS

Hand-crafted BIQA methods <cit.> typically rely on feature extraction techniques derived from expert and engineering experience. These methods are designed to capture the characteristics or attributes of image quality based on domain knowledge. Hand-crafted BIQAs often require fewer resources in terms of database size, hardware platform, and computing power, and they are easy to implement and deploy in practical applications.
Based on the specific application context, existing BIQA methods can be categorized into two main types: 1) distortion-specific and 2) general-purpose methods.

§.§ Distortion-specific BIQAs

Distortion-specific BIQAs <cit.> quantify image quality by considering both the degradation manner and the distortion types of a particular application, as shown in Fig. <ref>. In this section, we present a concise overview of several representative applications and the significant milestones in the field.

§.§.§ Screen Content

The development of computer-generated technology has promoted the widespread use of screen content (SC) visual signals <cit.>, further driving the need for SC-based BIQA. Various SC-based databases have been established to facilitate the investigation in this area, including the Screen Image Quality Assessment Database (SIQAD) <cit.>, the Screen Content Database (SCD) <cit.>, the quality assessment of compressed SCI database (QACS) <cit.>, and the Screen Content Image Database (SCID) <cit.>. These databases provide valuable resources for evaluating the quality of SC images. Given the distinct features of SC images, many representative BIQAs have been proposed, such as structural feature-induced <cit.>, brightness and texture-driven <cit.>, and region division-based <cit.> methods. The continued development of SC-based BIQA methods is crucial for ensuring the quality of SC-based visual signals, which are prevalent in various domains such as multimedia streaming, video conferencing, and remote education.

§.§.§ Low-light

Imaging in weak-illumination environments can lead to uneven brightness, poor visibility, impaired color, and increased hybrid noise. These factors not only degrade the user experience but also affect the product value <cit.>. Recognizing the significance of addressing low-light distortion, researchers have developed specialized low-light databases and BIQA methods. The Natural Night-time Image Database (NNID) <cit.> is one of the earliest low-light BIQA databases, and later a new ultra-high-definition low-light database, Dark-4K, was established <cit.>. To tackle the challenges associated with modeling low-light distortions, some preliminary BIQAs have been developed for low-light images, such as brightness-guided <cit.>, colorfulness-inspired <cit.>, visibility-induced <cit.>, and comparative learning-based <cit.> methods. These efforts in developing low-light BIQAs are crucial for improving the assessment of image quality in weak-illumination scenarios, which can unlock the full potential of imaging technologies in challenging lighting conditions.

§.§.§ High Dynamic Range (HDR)

HDR-based BIQAs are designed to evaluate the quality degradation primarily caused by tone mapping or multiple exposure fusion techniques in High Dynamic Range (HDR) images. To facilitate research in this area, several representative HDR databases have been established, such as the Tone-mapped Image Database (TMID) <cit.>, the HDR JPEG XT database (HDR-XT) <cit.>, the HDR compression database (HCD) <cit.>, the ESPL-LIVE HDR image database <cit.>, and the HDR visual difference predictor 2 (HDR-VPD2) <cit.>. In the development of HDR-based BIQAs, various quality assessment methods have been proposed, taking into consideration representative quality descriptors such as gradient <cit.>, color <cit.>, and brightness <cit.>. These methods aim to capture the visual distortions introduced during the tone mapping or multiple exposure fusion processes and quantify their impact on the perceived quality of HDR images.
With the continuous development of HDR, the use of new technologies (tone-mapping or exposure fusion algorithms) has improved HDR imaging results. It remains to be verified whether existing methods can accurately measure the distortion introduced by new HDR technologies.

§.§.§ Encryption

In secure communication, copyright protection, and privacy preservation, the need to guarantee the quality of experience for encrypted images has driven the development of encryption-based BIQA. These methods focus on evaluating the quality of images that have been encrypted to protect them. In this context, two representative databases have been established: the IVC-SelectEncrypt database <cit.> and the perceptually encrypted image database (PEID) <cit.>. Different from traditional BIQAs, quality metrics for encrypted images require a unique consideration: striking a balance between maintaining enough perceptual quality to attract potential users and ensuring security against unauthorized viewing <cit.>. Encryption-based BIQAs aim to address this dual objective by developing quality metrics for encrypted images.

§.§.§ Omnidirectional Stereo

The popularity of immersive multimedia applications has promoted the development of stereoscopic-omnidirectional BIQA tasks. Several benchmark databases have been established to address this need, including the omnidirectional image database (OID) <cit.>, the Compressed 360-degree Image Database (CVIQD2018) <cit.>, Omnidirectional IQA (OIQA) <cit.>, the Head-mounted Immersive Image Database (HIID) <cit.>, IISc Stitched Image QA (ISIQA) <cit.>, LIVE 3D VR IQA <cit.>, Compressed VR Image Quality (CVIQ) <cit.>, the Multi-Distortions Visual Attention Quality Database (MVAQD) <cit.>, Omnidirectional Image Quality (OIQ) <cit.>, and the psychophysical-related database (PRD) <cit.>. One of the main paradigms for quality feature extraction is based on multi-view decomposition <cit.>. However, in addition to the quality of the visual signal itself, there are many factors that affect the perceived quality of virtual reality experiences, such as viewing behavior <cit.> and camera motion <cit.>. Therefore, there is still significant room for exploring and developing new quality descriptors specifically tailored to the stereoscopic-omnidirectional scenario.

§.§.§ Other Applications

Besides the applications mentioned above, there are several other promising distortion-specific BIQAs that have been designed for emerging computer vision tasks, such as super-resolution <cit.>, segmentation <cit.>, dehazing <cit.>, deraining <cit.>, aesthetics <cit.>, light-field <cit.>, underwater <cit.>, 3D geometry <cit.>, etc. Note that each of these tasks presents unique challenges and requires specialized BIQA methods tailored to the specific distortion type. Due to the length of the article, we will not discuss other types of distortion-specific BIQAs here. A brief illustration of the related work is organized as shown in Fig. <ref>.

§.§ General-purpose BIQAs

Unlike distortion-specific methods tailored to address particular types of distortion, general-purpose BIQAs <cit.> are not designed for the typical image applications mentioned above. These methods usually employ a set of common quality-aware features to assess image distortion, and they can be roughly divided into two main groups: natural scene statistics (NSS) based methods and HVS-guided methods. In this subsection, we provide a concise overview of some representative general-purpose BIQAs and databases.
§.§.§ NSS-based

NSS-based BIQAs are based on the assumption that high-fidelity images obey specific statistical characteristics that are altered by quality degradation <cit.>. For instance, early methods quantified these alterations in the frequency domain, using the wavelet transform <cit.> or the discrete cosine transform <cit.>. However, due to the computational complexity associated with domain transformations, some methods instead directly select descriptive quality-aware features in the spatial domain, such as normalized luminance <cit.>, gradient magnitude <cit.>, and local binary patterns <cit.>. To better capture the NSS characteristics, parametric models have also been developed, such as the generalized Gaussian density (GGD) <cit.>, the multivariate GGD <cit.>, and the asymmetric GGD <cit.>. It is worth noting that existing NSS-based BIQAs usually struggle to fully account for the distortion characteristics of different types of image distortions, so it is difficult for such methods to achieve optimal performance in practical applications.

§.§.§ HVS-guided

Considering that humans are the ultimate recipients of visual signals, it is important to leverage the perceptual characteristics of the HVS in the design of image quality indicators. HVS-guided BIQAs aim to represent these characteristics and enhance the assessment of image quality. Two prominent categories of HVS-guided methods are the free-energy principle-driven and the visual sensitivity-based methods. The free-energy principle interprets quality perception in visual signals as an active inference process <cit.>, which provides a theoretical framework for understanding the perception of image quality. On the other hand, visual characteristics-based BIQAs primarily rely on visual sensitivity features to assess image quality, such as luminance <cit.>, structure <cit.>, texture <cit.>, and color <cit.>. By considering the specific HVS sensitivities, these methods provide a more accurate evaluation of image quality. However, one of the main challenges in HVS-guided BIQAs is the complexity of general-purpose distortions, which can be affected by various factors. Simple HVS models and visual sensitivities may struggle to adequately characterize and capture the intricacies of these distortions. Thus, further research is needed to develop more sophisticated models and visual features that can effectively address the complex distortions in HVS-guided BIQAs.

§.§.§ Databases

The performance evaluation of general-purpose BIQA methods depends heavily on the types of distortions contained in IQA databases. Several commonly used databases for general-purpose BIQAs include the Laboratory for Image & Video Engineering (LIVE) database <cit.>, the Tampere Image Database (TID) 2008/2013 <cit.>, the Categorical Subjective Image Quality (CSIQ) database <cit.>, the Subjective Quality Assessment IRCCyN/IVC Database <cit.>, the Waterloo Exploration Database (WED) <cit.>, the Konstanz Artificially Distorted Image quality Database (KADID-10k) <cit.>, KonIQ-10k <cit.>, and others. These IQA databases consider various types of distortions, with the most prevalent ones being compression distortion (e.g., JPEG, JPEG2K, H.26X), blur distortion (e.g., Gaussian blur, motion blur, out-of-focus blur), and different types of noise (e.g., white noise, impulse noise, multiplicative noise). Table <ref> provides a brief summary of some representative databases in terms of the distortion type (e.g., authentic and synthetic) and the application scenario (e.g., distortion-specific and general-purpose).
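To make the NSS paradigm concrete, the sketch below computes one classical pair of spatial-domain features: mean-subtracted, contrast-normalized (MSCN) luminance coefficients fitted with a GGD by moment matching, in the spirit of BRISQUE-style methods. The constants and the random placeholder image are assumptions for illustration, not a specific published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(image, sigma=7.0 / 6.0, C=1.0):
    """Mean-subtracted, contrast-normalized coefficients of a gray image."""
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.abs(var)) + C)

def fit_ggd(x):
    """Moment-matching GGD fit: returns the shape alpha and the std sigma."""
    x = x.ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)   # generalized ratio
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma(2.0 / alphas) ** 2 / (gamma(1.0 / alphas) * gamma(3.0 / alphas))
    return alphas[np.argmin((r - rho) ** 2)], np.sqrt(np.mean(x ** 2))

img = 255.0 * np.random.rand(128, 128)                # placeholder luminance
alpha, sigma = fit_ggd(mscn(img))
print(f"quality-aware features: shape={alpha:.2f}, spread={sigma:.2f}")
```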
Existing general-purpose BIQAs generally achieve satisfactory performance on synthetic databases, but their performance is relatively limited when faced with images captured in weak-illumination scenarios. As industrial products and services continue to evolve, the imaging quality of vision sensors in specific scenes (e.g., taking pictures at night) is more likely to be used as a product value indicator <cit.>. Therefore, there is practical significance in both industry and academia for the development of distortion-specific BIQA databases that feature authentic distortions, which can better reflect real-world scenarios and enable more accurate assessment of image quality in specific application domains.

§.§ Summary

While existing distortion-specific and general-purpose BIQAs have demonstrated reliable performance in their respective applications, it is highly desirable to develop specialized quality indicators for authentic image distortions. This necessity arises for the following reasons.

Firstly, it is important to highlight the fundamental difference between synthetic and authentic distortions. Synthetic distortions are typically straightforward and can be easily controlled and manipulated during the generation of IQA datasets. On the other hand, authentic distortions are far more complex and can arise during various stages of image acquisition, compression, and processing in real-world scenarios. Therefore, developing quality indicators specific to authentic distortions holds greater practical value in the context of quality assessment. Secondly, the characteristics of authentic distortions are dramatically different from their synthetic counterparts, as detailed in Table <ref>. Authentic image distortions often exhibit distinct features such as uneven brightness, variations in visibility, color impairment, and diverse types of noise. Consequently, existing BIQA methods struggle to achieve satisfactory performance when applied to authentic distortions. In light of these reasons, developing distortion-specific BIQAs for real-world distorted images is imperative to address the challenges posed by complex and authentic artifacts. Such advancements will pave the way for more accurate and comprehensive visual quality assessment in practical applications.

§ DEEP-LEARNED BIQA METHODS

Deep-learned BIQA methods <cit.> have emerged as a powerful approach that directly learns quality features from distorted images in an end-to-end manner. In contrast to hand-crafted BIQAs, these methods leverage deep neural networks to automatically optimize quality forecasting models, resulting in promising performance <cit.>. Deep-learned BIQAs can be generally divided into 1) supervised learning-based and 2) unsupervised learning-based methods. Supervised methods require annotated training data, whereas unsupervised learning-based methods do not rely on quality labels during training. It is worth noting that there are other learning types, such as reinforcement learning <cit.>, that can be employed in the deep-learned BIQA task. However, these learning types are relatively less commonly used compared to supervised and unsupervised BIQA methods.

§.§ Supervised Learning-based BIQAs

In supervised learning-based BIQA <cit.>, the primary learning objective is to minimize the discrepancy between the predicted score and the subjective mean opinion score (MOS) provided by human observers.
The development of supervised learning has significantly advanced the field of BIQA. Existing supervised learning-based BIQAs tackle the challenge of limited training samples by leveraging specific strategies, and they can be broadly categorized into two main types: 1) sample-based BIQAs and 2) constraint-based BIQAs.

§.§.§ Sample-based

Sample-based BIQAs mainly rely on expanding the capacity of training samples, and usually utilize patch-level quality features to predict an image-level score. Existing sample-based BIQAs mainly consist of 1) annotation allocation-based and 2) annotation generation-based methods.

Allocation-based. Allocation-based BIQAs directly assign the same image-level annotation to all patches within a given image <cit.>. These methods initially established a correlation between patch-level features and the overall image-level quality scores. Subsequent improvements have focused on refining this correlation by incorporating weighted features <cit.>, weighted decisions <cit.>, and voting decisions <cit.>. More recent attempts have extended the input image into multi-scale patches and learned a general feature representation <cit.>. However, the inherent uncertainty in image content poses challenges for these allocation-based methods to capture highly non-stationary feature representations <cit.>. By developing more robust feature representations that can adapt to the local variations and complicated correlations between content and distortion, allocation-based BIQAs can enhance their performance and provide more accurate quality predictions.

Generation-based. Generation-based BIQAs usually learn a supervised model via patch-level scores. In the early stages, these methods faced the challenge of lacking a region-level MOS database. To overcome this limitation, early methods employed full-reference models to generate quality scores for image patches <cit.>. However, the forecasting performance is highly dependent on the adaptability and correlation between the full-reference model and the target BIQA task <cit.>. Recent progress has been greatly facilitated by substantial human involvement in building new databases with image- and patch-level scoring labels <cit.>. This has allowed for better training of BIQA models by providing more accurate and reliable annotations. However, it is worth noting that establishing a large enough IQA database for practical quality inspection applications poses a significant cost challenge. The manual process of collecting and annotating large-scale datasets with quality labels can be time-consuming, labor-intensive, and expensive.

To sum up, existing sample-based BIQAs mainly expand training samples based on a single image modality. While this strategy aims to address the problem of limited sample annotations, a potential drawback is the introduction of 'sick' (ill-suited) label augmentation, which can introduce noise and ultimately reduce the prediction accuracy and generalization ability of a learned model <cit.>. In the future, the training data volume problem can be explored through the association between different modalities. The expansion of data modalities helps a deep-learned model to enrich low-level embedding features from different perspectives, thereby improving the robustness of the forecasting performance <cit.>.
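A minimal PyTorch sketch of the allocation-based paradigm described above is given below: every patch inherits the image-level MOS, a small CNN regresses patch quality, and patch predictions are average-pooled into an image score. The architecture, hyperparameters, and data are hypothetical placeholders, not a published model.

```python
import torch
import torch.nn as nn

class PatchQualityNet(nn.Module):
    """Tiny CNN that predicts one quality score per image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, patches):                  # patches: (B, 3, 32, 32)
        return self.head(self.features(patches).flatten(1)).squeeze(1)

model, loss_fn = PatchQualityNet(), nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

patches = torch.rand(16, 3, 32, 32)      # patches cropped from one image
image_mos = torch.full((16,), 3.7)       # the image-level MOS, shared by all

loss = loss_fn(model(patches), image_mos)   # allocation-based supervision
opt.zero_grad(); loss.backward(); opt.step()

image_score = model(patches).mean().item()  # average-pooled image score
print(f"predicted image quality: {image_score:.2f}")
```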
These methods can be roughly divided into 1) multi-task and 2) multi-objective BIQAs.

Multi-task-based. Multi-task-based BIQAs train additional objectives that are highly associated with image quality, in addition to the primary quality prediction task. These additional objectives can include the identification or estimation of specific distortion types within the image, the generation of error maps that highlight regions of degraded quality <cit.>, the classification of natural scene categories <cit.>, and the prediction of content attributes that contribute to image quality <cit.>. By incorporating these additional tasks into the training process, multi-task-based BIQAs can leverage the inherent correlations between the auxiliary tasks and image quality to enhance the overall prediction performance.

Multi-objective-based. Multi-objective-based BIQAs simultaneously optimize multiple constraints or regularization terms to improve model training <cit.>. These methods employ various techniques, such as employing multiple loss functions for multi-scale supervision <cit.>, introducing new normalization embeddings into the objective function <cit.>, and using additional constraints to adjust initialization parameters <cit.>. Recent developments have explored the integration of constraints learned from other databases. For example, incremental learning has been used to measure correlation across databases <cit.>, uncertainty prediction has been used to rank fidelity across databases <cit.>, and continual learning has been utilized to measure similarity across databases <cit.>. By optimizing multiple constraints at the same time, multi-objective-based BIQAs offer a more comprehensive and robust approach to visual quality assessment.

In summary, while pre-training can be effective in learning latent quality feature descriptions, it may not capture specific distortions that are not well represented in existing IQA databases. For instance, it is difficult to learn effective quality descriptions for low-light distortions because pre-training samples are mostly captured under normal lighting conditions. To address this difficulty, future exploration can leverage cross-modal information. By utilizing different modalities, we can benefit from the complementary information they provide, enabling a more comprehensive assessment of image quality. Furthermore, shared cross-modal knowledge <cit.> may help to extract more expressive quality representations.

§.§ Unsupervised Learning-based BIQAs

Unsupervised learning-based BIQAs <cit.> aim to extract latent embedding features without relying on ground-truth MOS labels. These unsupervised approaches measure the quality difference between training samples based on their latent features. Unlike supervised methods, unsupervised BIQAs do not require a large number of hand-crafted labels, making them more feasible in scenarios where explicit quality annotations are scarce. However, the absence of explicit objective functions in unsupervised learning poses challenges in designing effective training models. Existing unsupervised BIQAs can be broadly categorized into two main types: 1) metric-based and 2) domain-based methods.

§.§.§ Metric-based

Metric-based BIQAs employ widely used distance measurements, such as cosine similarity and the Wasserstein distance, to extract latent embedding features. These features are then employed to measure the difference between the current image sample and the other samples in the training database.
The difference can be related to various factors, including distortion type, distortion level, and content category. Recently, various distortion descriptors have been developed to extract global embedding features, such as distortion type-based discriminative learning <cit.>, distortion level-based contrastive learning <cit.>, and content category-based similarity learning <cit.>. In general, metric-based methods treat the training samples as mutual quality references and aim to maximize the differences in quality features between samples. By leveraging distance measurements and distortion descriptors, these methods enhance the discriminative ability of the learned embeddings and improve the accuracy of quality assessment.

§.§.§ Domain-based

Domain-based BIQAs commonly design domain alignment constraints and measure the quality difference between samples from different domains. These methods leverage error metrics defined in a target domain to assess the quality difference of each sample in a source domain. Early domain-based methods directly added different levels of synthetic distortion and learned to rank them <cit.>. Subsequent domain adversarial methods employed a generator to construct a target domain and then guided the source domain through adversarial learning <cit.>. More recently, domain adaptation methods have utilized additional large databases to construct the target domain and guide the source domain to learn the rules of quality description in the target domain <cit.>. It is important to note that domain-based methods usually require strict assumptions, which makes it difficult to meet the model requirements when the distortion type of a testing image is unknown.

§.§ Summary

Existing deep-learned BIQAs have primarily focused on a single image modality, while multimodal-driven BIQAs are still at an early stage. However, the incorporation of multimodal information provides a new and feasible solution to the problem of insufficient training samples. One advantage of multimodal BIQA is the homology of multimodal data: training information from different modalities can be complementary and shared, which facilitates the learning of highly descriptive quality features that may not be fully captured by a single modality alone. Furthermore, the heterogeneity of multimodal data expands the width and depth of the training information: each modality provides unique perspectives and cues related to image quality, allowing for a more comprehensive understanding of quality assessment. Given these advantages, it is both necessary and meaningful to embark on a new exploration of multimodal approaches to the BIQA task.

§ MULTIMODAL QUALITY ASSESSMENT

Multimodal quality assessment methods <cit.> are currently in their early stages of development. To our knowledge, two primary modality combinations have been explored: visual-audio and visual-text methods. These methods aim to leverage the complementary information from multiple sources, such as visual and auditory cues or visual and textual cues, to quantify the overall quality of multimedia content.

§.§ Visual-audio Quality Assessment

Visual-audio quality assessment <cit.> refers to the quantitative evaluation of user experience involving quality assessment across multiple media modalities. It can be further divided into two main types: degradation-based methods and perception-based methods.
By jointly considering the quality of both the visual and audio modalities, visual-audio quality assessment provides a comprehensive evaluation of the overall user experience in multimedia applications.

§.§.§ Degradation-based

Existing degradation-based methods commonly measure the user-perceived quality scores of the visual and audio streams independently, and then employ a combination rule to predict the overall score. This paradigm assumes that the degradation of video and audio is solely influenced by factors such as signal capture equipment <cit.>, compression, transmission, and terminals. It also assumes that the degradation of one modality does not directly affect that of the other (i.e., video degradation does not cause audio degradation). The combination rules used in these methods mainly involve addition <cit.>, multiplication <cit.>, voting <cit.>, weighted Minkowski pooling <cit.>, etc. However, these degradation-based approaches overlook the potential interactions and dependencies between cross-media information. As a result, simply combining individual quality scores may not accurately measure the user experience score.

§.§.§ Perception-based

Perception-based methods aim to learn a joint perceptual feature space, in which video and audio features can both be represented and combined to predict the overall quality score. This paradigm is motivated by psychophysical experiments, such as the McGurk effect <cit.>, in which cross-media perceptual interaction has a greater impact on the user-perceived quality. These experiments reveal that the user-perceived video quality is more favorable when paired with high-fidelity sound than when assessed alone. Conversely, the perception of audio quality is more favorable when assessed alone rather than when paired with high-quality videos <cit.>. Therefore, the user-perceived quality should be modeled on a combination of perceptual features in order to capture these cross-media biases. Existing explorations mainly focus on the combination of spatial perception <cit.>, temporal perception <cit.>, and spatio-temporal perception <cit.>. By incorporating these perceptual features and capturing their interactions, these methods enable more reliable assessments of the user experience in multimedia applications.

§.§ Visual-text Quality Assessment

Recently, a multimodal low-light image quality (MLIQ) database <cit.> has been constructed, including image and text modalities. The text modality mainly consists of semantic descriptions that capture the visual quality of each image sample. As discussed in <cit.>, several multimodal quality assessment databases have been established separately for video and audio. Among these visual-audio databases, the closest to MLIQ is the LIVE-SJTU audio and video quality assessment database (A/V-QA) <cit.>. MLIQ differs from A/V-QA in the following respects: 1) In addition to the visual modality, the text modality in MLIQ reflects image quality in terms of semantic visual descriptions, while the audio modality in A/V-QA can be silence, noise, pure music, pure speech, or speech over background sound. 2) MLIQ contains various types of authentic distortions, while A/V-QA only contains synthetic compression and sampling distortions. 3) MLIQ is specifically established to study the impact of auxiliary modalities on the multimodal IQA task, whereas A/V-QA is constructed to study the impact of both visual and audio distortions on the overall user experience.
This distinction implies that the MOS labels of MLIQ depend solely on the image modality, while those of A/V-QA depend on both the visual and audio modalities. Based on the MLIQ database, the authors have developed a blind multimodal quality assessment (BMQA) method to integrate cross-modal features. This method consists of several learning modules, including multimodal quality representation, latent feature alignment, and fusion prediction. To improve the efficiency of deep model training, they have also employed an effective BMQA method incorporating both multimodal self-supervision and supervision. The integration of information between the visual and text modalities is beneficial in expanding the breadth of the available quality information, allowing for a more robust and comprehensive evaluation of image quality. By incorporating cross-modal features, the BMQA method demonstrates superior generalization ability, enabling accurate prediction performance even on unseen or unfamiliar data.

§.§ Summary

It is worth noting that humans are better at describing image quality semantically than at assigning a quantitative score <cit.>, and hence text-based quality descriptions can be a very useful modality in the modeling of BMQA. By incorporating text-based information, BMQAs can simulate the inherent human ability to capture and represent visual quality in a more comprehensive manner. The relationship between unimodal and multimodal BIQA is illustrated in Fig. <ref>, which provides a visual representation of how the different modalities, such as image, audio, and text, interact and contribute to the overall quality assessment process.

In addition, the differences between existing visual-audio and visual-text quality indicators are clear. Firstly, the former are designed to forecast the joint quality score of the user experience across different media types, while the latter are focused on predicting the visual quality of a given image. Secondly, in visual-audio methods, the audio modality is usually unavailable in the BIQA task, rendering them inapplicable; in visual-text methods, the text modality can be easily generated by image captioning <cit.> or large language models (LLMs) <cit.>, so the lack of a text modality is no longer a problem. Finally, multimodality-driven BIQAs are currently in their nascent phase, but BMQA provides a new and promising perspective. On the one hand, the homogeneity of multimodal data means that training information can be supplementary or shared, which facilitates the learning of highly descriptive quality features. On the other hand, the heterogeneity of multimodal data can expand the width and depth of the training information, which is expected to improve the forecasting performance. To this end, it is highly desirable to develop multimodal quality indicators.

§ CONCLUSION AND FUTURE WORK

This survey has provided a detailed analysis and discussion of recent progress in BIQA. We have examined various aspects of BIQA, including hand-crafted methods, deep-learned approaches, multimodal quality assessment, and representative databases. Through this comprehensive overview, we have outlined the developments and highlighted the challenges that remain in the field of BIQA. Specifically, hand-crafted BIQA methods have evolved to address both distortion-specific and general-purpose images. Deep-learned BIQA methods, which utilize supervised and unsupervised learning techniques, have shown promising results in capturing complex image quality attributes.
The integration of multimodal information, such as visual-audio and visual-text interactions, has emerged as a valuable direction for improving the accuracy and robustness of BIQA models. Additionally, the use of databases containing synthetic and authentic distortions has played a crucial role in training and evaluating these models. Despite the promising progress made, several challenges still need to be addressed in the field of BIQA. These include handling diverse real-world conditions, such as low-light environments and immersive displays, as well as incorporating user preferences and subjective assessments. Future research should focus on developing more efficient and accurate BIQA models capable of addressing these challenges and providing reliable application-specific quality assessments. We believe that this survey serves as a valuable resource for academic researchers and industry professionals, offering insights into the development of more accurate and reliable BIQAs.
RUP-23-28

[email protected]
Department of Physics, Rikkyo University, Tokyo 171-8501, Japan

[email protected]
SYRTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, LNE, 61 avenue de l'Observatoire, 75014 Paris, France

[email protected]
Department of Physics, Faculty of Science, Okayama University of Science, 1-1 Ridaicho, Okayama, 700-0005, Japan

We study the collisions of elastic superconducting strings, also referred to as current-carrying strings, formed in a U_local(1) × U_global(1) field-theory model, using three-dimensional numerical field-theoretic simulations. The breaking of U_local(1) leads to string formation via the Higgs mechanism, while the scalar field of the second U_global(1) carries the current, which condenses onto the string. We construct straight and static superconducting string solutions numerically and identify the regions of the model parameter space in which they exist. We then perform dynamical simulations of colliding superconducting strings with various collision angles and collision velocities. We explore the kinematic parameter space for six sets of model parameters characterising the coupling between the two scalar fields and the current on the string. The final states of the strings (after the collision) are reported diagrammatically. We classify them into four categories: (i) regular intercommutation, (ii) double intercommutation, (iii) bound state, and (iv) expanding string solution. We find that the outcome of the collision process is the regular intercommutation of the colliding strings in most of the kinematic parameter space, while they form bound states for small velocities and small angles. We also find that the strings undergo two successive intercommutations, and therefore pass through one another, in a small region corresponding to relatively small angles and velocities of order c/2. The string structure breaks down when there is a relatively large coupling between the two scalar fields, even if each string is stable before the occurrence of the collision.

11.27.+d, 98.80.Cq, 98.80.-k

Dynamical simulations of colliding superconducting strings
Daisuke Yamauchi
January 14, 2024
==========================================================

§ INTRODUCTION

Cosmic strings can form in many models of grand-unification, see, e.g., <cit.>, and hence, determining their observational consequences — be it via gravitational waves, lensing, cosmic microwave background physics, or any other means — is of great interest <cit.>. Indeed, if any signature of cosmic strings were to be observed (so far, only bounds exist), it would provide invaluable information about the Universe during its phase transitions, and open up a window on fundamental physics at very high energies, such as axion physics <cit.>, grand-unification <cit.> and the seesaw mechanism <cit.>. Central to any calculation of the observational consequences of cosmic strings is the fact that the string network reaches a "scaling solution", in which the energy density in the strings is a small fixed fraction of the energy density in the Universe. Most of the work related to string networks has been based on the Abelian-Higgs model for cosmic strings; in this particular case, strings generally intercommute when colliding (see below). This crucial fact means that the string network will form closed loops during its evolution; these loops radiate energy in the form of gravitational waves (see, e.g., <cit.>), and it is for this reason that the network evolves towards an attractor scaling solution (see, e.g., <cit.>).
In this regime, there is a unique length scale, called the correlation length. The (velocity-dependent) one-scale model was developed with this particular property in mind <cit.>, and allows one to estimate the present power spectrum of the gravitational waves generated by cosmic strings as well as the contribution of the cosmic string network to the cosmic microwave background <cit.>. Generally, the outcome of a collision of two U(1) Abelian-Higgs cosmic strings depends on the relative velocity v of the colliding strings, their relative angle α, and of course, on whether the strings are in the Type-I or Type-II regime <cit.>. In the Type-II regime, no bound states can form (in other words, a string with winding 2 has higher energy per unit length than two strings with winding 1). In that case, extensive studies have shown that the strings intercommute – that is, they exchange partners – in most of the (α,v) plane, except for the v→1 case, where double reconnections occur <cit.>. In the Type-I regime or for cosmic superstrings, bound states with corresponding Y-junctions can form <cit.>, thus leading to a more diverse set of final states. Analytic studies of these collisions have shown that at low v and α, bound states are kinematically allowed to form, while numerical simulations have shown that they generally form dynamically, see <cit.>.

In this paper, we extend previous studies to the collisions of U(1)_local × U(1)_global strings, in which the second U(1)_global leads to the formation of currents on the strings. The motivation for this work is precisely that in GUT models, such current-carrying cosmic strings, also referred to as "superconducting strings", are thought to form generically <cit.>. Superconducting strings and their physical properties have been studied extensively in the literature <cit.>. As in the case of standard cosmic strings, semi-analytic models of superconducting strings have been developed <cit.>, and the radiation of gravitational waves <cit.> and the radiation of electromagnetic waves <cit.> by superconducting string networks have been studied. It is of interest to find out whether the network of such strings follows a scaling law, as Abelian-Higgs cosmic strings do. The key to obtaining such a scaling law is that the strings can efficiently reconnect with each other to create loops. Here, we focus on the collision of two current-carrying strings using field-theoretic numerical simulations. Collisions of current-carrying strings will depend not only on (α,v) but also on the current on each string. In a previous paper <cit.>, we studied these collisions analytically using the well-known elastic string model <cit.> applied to the case of electric and magnetic strings. In the elastic string model, for an infinite string in a stationary state, the energy per unit length of the string U is related to its tension T by a barotropic equation of state U(T). In <cit.> we showed that this description breaks down when electric or magnetic current-carrying strings collide to form a bound state, which we interpret as being due to the inability of the elastic string model to take into account non-conservative processes, such as the appearance of dissipative processes at the Y-junction. In <cit.> and <cit.>, the authors performed a similar analysis for strings carrying chiral currents and for transonic strings.
In both cases, by contrast to the general case, a wave-like solution exists, and the authors were able to show that a bound state with Y-junctions exists in the context of the elastic string model. Hence, unlike in the Nambu-Goto case (without currents), it would appear to be impossible – at least with simple models – to determine the outcome of a collision between either electric or magnetic current-carrying strings. For this reason, we present here, as an alternative, a numerical study of the same situation: the collision of two straight current-carrying elastic strings in a U_local(1) × U_global(1) field-theory model. We focus on the role of the current in the collision and study the outcome of collisions as a function of the couplings in the field theory and as a function of the collision velocity and angle.

This paper is organized as follows. In <ref>, we briefly introduce the field theory model used to describe current-carrying cosmic strings and outline how to determine the one-dimensional current-carrying vortex solution. We discuss the region of parameter space in which current-carrying strings can form and then numerically obtain straight and static superconducting string solutions. In <ref>, we discuss the setup used to perform the field theory simulations for colliding superconducting strings. In <ref>, we show the numerical results of the colliding superconducting strings and summarise the final states resulting from the collision. In <ref>, we classify the final states in the plane defined by the collision velocity and the collision angle. Finally, we conclude in <ref>.

§ SUPERCONDUCTING STRING MODEL AND VORTEX SOLUTION

§.§ Model

We study a U(1)_local × U(1)_global current-carrying string model <cit.> defined by the action

S = -∫ d⁴x √(-g) [ (D_μφ)*(D^μφ) + (∂_μσ)*(∂^μσ) + (1/4)F_μνF^μν + V(φ,σ) ],

where φ is a complex scalar field, and F_μν ≡ ∂_μA_ν - ∂_νA_μ is the field strength, with A_μ the U(1) gauge field. The covariant derivative is defined as D_μ ≡ ∂_μ - ieA_μ. The additional scalar field σ, which has a global U(1) symmetry, will be referred to as the current-carrier. In Ref. <cit.>, the authors consider a local U(1) symmetry for the current-carrier, while in Ref. <cit.>, the model considered is similar to the one studied here, but using global strings with a gauged current-carrier instead. We use the same potential V(φ,σ) as in Ref. <cit.>, namely

V(φ,σ) = (λ_φ/4)(|φ|²-η²)² + λ_φσ(|φ|²-η²)|σ|² + (m_σ²/2)|σ|² + (λ_σ/4)|σ|⁴.

The first term is the familiar Mexican-hat potential (with vacuum expectation value η), which gives rise to vortex solutions for φ. The second term couples the string-forming field to the current-carrier σ, and thus plays an important role in the current condensation on the string. The correspondence with the notations of Ref. <cit.> is summarised in <ref>. We consider the model in Minkowski space. In Lorenz gauge, the field equations read

φ̈ - ∂^i∂_iφ + 2ieA^μ∂_μφ + e²A^μA_μφ + dV/dφ* = 0,
Ä_ν - ∂^j∂_jA_ν = 2e Im(φ*∂_νφ) - 2e²A_ν|φ|²,
σ̈ - ∂^i∂_iσ + dV/dσ* = 0,

with the dot denoting a derivative with respect to time.

§.§ Vortex solutions

We use cylindrical coordinates and look for axially-symmetric Abrikosov-Nielsen-Olesen (ANO) vortex solutions[For the Abelian-Higgs model, see Refs. <cit.>]. Following Ref. <cit.>, we use the ansatz

φ(𝐱) = η f(r)e^inθ,
A_i(𝐱) = -ε_ij x_j (n/(er²)) α(r),
σ(t,𝐱) = η g(r)e^i(kz-ωt),

where f, g and α are dimensionless functions of the radial coordinate r. Note that the first two expressions are exactly equivalent to those in the Abelian-Higgs model.
Substituting this ansatz into the field equations (<ref>)-(<ref>), we obtain

d²f/dx² + (1/x)(df/dx) - (n²f/x²)(α-1)² = β_φ(f²-1)f + 2β_φσg²f,
d²α/dx² - (1/x)(dα/dx) = 2f²(α-1),
d²g/dx² + (1/x)(dg/dx) - γg = 2β_φσ(f²-1)g + β_σg³,

where

β_φ ≡ λ_φ/(2e²), β_φσ ≡ λ_φσ/(2e²), β_σ ≡ λ_σ/(2e²), μ_σ² = m_σ²/(2e²η²), K² = k²/(e²η²), Ω² = ω²/(e²η²), x = eηr,

and γ = K² - Ω² + μ_σ². Note that, in Ref. <cit.>, the author introduced a parameter w ≡ k² - ω², which becomes w = e²η²(γ - μ_σ²) in our notation. For later convenience, we define the conserved current of the current-carrier,

j^σ_μ = 2 Im(σ*∂_μσ).

In the setup considered here, the current only has a z-component, j^σ_z(r) = 2kη²g(r)² = 2eKη³g(r)². The parameter K thus characterises the strength of the current.

§.§ Conditions on model parameters for superconducting string formation

We can obtain vortex solutions on which the current-carrier σ condenses, i.e. superconducting strings, only when the model parameters are in a certain region of parameter space. Here we determine under which conditions superconducting strings exist. Substituting Eqs. (<ref>)-(<ref>) into <ref>, we obtain

-ℒ/(e²η⁴) = (df/dx)² + (n²/x²)f²(1-α)² + (dg/dx)² + (n²/(2x²))(dα/dx)² + (K²-Ω²)g² + V/(e²η⁴).

The effective potential reads

V_eff ≡ V/(e²η⁴) + (K²-Ω²)g² = (β_φ/2)(f²-1)² + 2β_φσ(f²-1)g² + (β_σ/2)g⁴ + γg²,

where again γ ≡ K² - Ω² + μ_σ². This effective potential determines whether a superconducting string can exist or not. The shape of this potential is shown in <ref> for three distinct cases with (β_φ,β_σ,γ) = (1.0,1.0,0.05) and β_φσ = 0.02, 0.49 and 0.60. The four critical points of the effective potential are given by

(f,g) = (0,0), (1,0), (0, √((2β_φσ-γ)/β_σ)), (√(1 - 2β_φσγ/(4β_φσ²-β_σβ_φ)), √(β_φγ/(4β_φσ²-β_σβ_φ))).

We label them [A] to [D] in <ref>. When g = 0, there is no condensate, and this occurs at locations [A] and [B]. The field profile f(r) for a string carrying no current connects [A] and [B]. The field g is non-zero at [C] and at the saddle point [D]. [C] corresponds to the desired vacuum state describing the string core on which the current-carrier σ condenses, i.e. φ = 0 and |σ| > 0. [D] is a saddle point, i.e. one eigenvalue of the Hessian matrix is positive while the other is negative. Note that [C] and [D] do not always exist. To obtain a superconducting string configuration, we first impose V_eff(A) > V_eff(B). This means that β_φ > 0, a necessary condition for spontaneous symmetry breaking. A stable superconducting string configuration also requires the existence of the extremum [C]. This condition yields (2β_φσ-γ)/β_σ > 0, see <ref>. This condition is satisfied in the upper-right and lower plots of Fig. <ref>, while it is not satisfied in the upper-left plot. In order for the condensate to form on the string, the potential energy at [C],

V_eff(C) = [β_φβ_σ - (2β_φσ-γ)²]/(2β_σ),

should be smaller than that at [A], i.e. V_eff(A) = β_φ/2 > V_eff(C). This means that β_σ > 0. Using this condition and the one required to guarantee the existence of [C], we obtain 2β_φσ > γ. On the other hand, the current-carrier σ should vanish far from the string core, requiring [B] to be a minimum. In other words, the Hessian matrix at [B] should be positive-definite, i.e. γ > 0. We thus immediately obtain β_φσ > 0. Moreover, we require V_eff(C) > V_eff(B) in order for [B] to be the global minimum, which yields the condition β_φβ_σ > (2β_φσ-γ)², since V_eff(B) = 0. Owing to this condition, the (true) vacuum manifold of V_eff is always equivalent to U(1) even when the current-carrier exists.
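These inequalities are easy to check numerically for a given parameter set. The following Python sketch evaluates V_eff at the critical points [A]-[C] and tests the conditions derived above; the parameter values in the example are illustrative only.

```python
import numpy as np

def v_eff(f, g, b_phi, b_ps, b_sig, gamma):
    """Dimensionless effective potential V_eff(f, g)."""
    return (0.5*b_phi*(f**2 - 1.0)**2 + 2.0*b_ps*(f**2 - 1.0)*g**2
            + 0.5*b_sig*g**4 + gamma*g**2)

def check_conditions(b_phi, b_ps, b_sig, gamma):
    """Evaluate V_eff at [A], [B], [C] and test the condensation
    conditions 2*b_ps > gamma > 0 and b_phi*b_sig > (2*b_ps - gamma)**2."""
    vA = v_eff(0.0, 0.0, b_phi, b_ps, b_sig, gamma)   # symmetric point [A]
    vB = v_eff(1.0, 0.0, b_phi, b_ps, b_sig, gamma)   # string exterior [B]
    gC2 = (2.0*b_ps - gamma)/b_sig                    # g^2 at [C], if positive
    vC = v_eff(0.0, np.sqrt(gC2), b_phi, b_ps, b_sig, gamma) if gC2 > 0 else None
    ok = (2.0*b_ps > gamma > 0.0) and (b_phi*b_sig > (2.0*b_ps - gamma)**2)
    return vA, vB, vC, ok

# example: the condensing case of the text, (b_phi, b_sig, gamma) = (1, 1, 0.05)
print(check_conditions(1.0, 0.49, 1.0, 0.05))   # vB < vC < vA and ok == True
```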
Note that if this last condition, β_φβ_σ > (2β_φσ-γ)², is not satisfied, namely if V_eff(A) > V_eff(B) > V_eff(C), the potential is similar to the one shown in the lower panel of Fig. <ref>, and the fields φ and σ tend to go into the global minimum [C]. Roughly speaking, in this case, the interior of the string continues to grow forever, since [C] is energetically favoured. As shown in the next subsection, a metastable static string configuration that does not satisfy this latter condition can indeed be constructed numerically. Of course, the configuration of such a metastable string is broken if it collides with another string. In summary, in order to obtain a superconducting string configuration, we require

2β_φσ > γ > 0, β_φβ_σ > (2β_φσ-γ)².

These conditions, which are determined only by considering the shape of the effective potential, are necessary but not sufficient for obtaining a superconducting string. In reality, the configuration is determined by the balance between the potential energy and the gradient energy of the string. In the next subsection, we compute the region of parameter space in which superconducting strings form and are viable by solving <ref> to <ref> numerically.

§.§ Viable parameter regions for static and stationary strings

In order to take the gradient energy of the string configuration into account, we solve <ref> to <ref> numerically. We impose the following boundary conditions for f, α and g:

f(0) = α(0) = dg/dx|_{x=0} = 0, f(∞) = α(∞) = 1, g(∞) = 0.

The conditions for f and α are the same as those in the Abelian-Higgs model. The condition for g stems from the regularity of the field profile at x = 0 and the requirement that the current vanishes far from the string core. In practice, we truncate the radial coordinate at finite x. The values of f, α and g at large x can be computed by considering the asymptotic behaviour of their field equations (see Ref. <cit.> for the Abelian-Higgs case and Ref. <cit.> for the case of superconducting strings). It is easy to find that δf(x) = 1-f(x), δα(x) = 1-α(x), and g(x) satisfy modified Bessel equations in the asymptotic region, with the decaying solutions given by modified Bessel functions of the second kind. More precisely, they behave as δf(x) ≈ K_0(√(2β_φ)x), δα(x) ≈ xK_1(√2 x) and g(x) ≈ K_0(√γ x) for γ > 0. Therefore we may impose these boundary values for f(x), α(x) and g(x) at x = x_max instead of the boundary conditions at infinity. The typical value of x_max in our study is x_max = 50, chosen so that g(x_max) is sufficiently close to zero, say g(x_max) ∼ 10^-10. As a result, the boundary conditions are

f(0) = 0, f(x_max) = 1 - K_0(√(2β_φ)x_max),
α(0) = 0, α(x_max) = 1 - x_max K_1(√2 x_max),
dg/dx(0) = 0, g(x_max) = K_0(√γ x_max).

We solve the field equations using an iterative method developed in Ref. <cit.> with spatial resolution Δx = 0.25. The desired field configurations of f(x), α(x) and g(x) are shown in the left-hand plot of <ref>, where (β_φ,β_σ,β_φσ,γ) = (1,1,0.49,0.05). This parameter set yields a vortex solution on which σ ∝ g takes a non-zero value in the string core. In contrast, when β_φσ is small, the current-carrier σ no longer condenses on the string, as shown in the right-hand plot of <ref>, where we set β_φσ = 0.30. This string behaves as an Abelian-Higgs string. In this case, the difference between V_eff(C) and V_eff(A) is too small to compensate for the increase in the gradient energy ∼ |∂_xσ|². The regions of viability for superconducting strings and non-superconducting strings, obtained by solving Eqs. (<ref>)-(<ref>) in the range 0 ≤ γ ≤ 0.3 and 0 ≤ β_φσ ≤ 0.9, with β_φ = β_σ = 1, are shown in Fig. <ref>.
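Before turning to the resulting phase structure, we note that the structure of this boundary-value problem can be illustrated with a simple damped relaxation (gradient-flow) iteration. The following is only a sketch of one possible scheme, not the production iterative method of the reference cited above, and all numerical parameters are indicative.

```python
import numpy as np
from scipy.special import k0, k1

def solve_profiles(b_phi, b_ps, b_sig, gamma, n=1,
                   x_max=50.0, dx=0.25, sweeps=40000):
    """Relax the radial equations for f, alpha, g towards a static
    vortex profile, using second-order central differences and the
    modified-Bessel boundary values at x = x_max."""
    x = np.arange(0.0, x_max + dx, dx)
    f, a, g = np.tanh(x), np.tanh(x)**2, 0.5/np.cosh(x)  # initial guesses
    eps = 0.2*dx**2                                      # damped pseudo-time step
    for _ in range(sweeps):
        xi, fi, ai, gi = x[1:-1], f[1:-1], a[1:-1], g[1:-1]
        lap_f = (f[2:] - 2*fi + f[:-2])/dx**2 + (f[2:] - f[:-2])/(2*dx*xi)
        lap_a = (a[2:] - 2*ai + a[:-2])/dx**2 - (a[2:] - a[:-2])/(2*dx*xi)
        lap_g = (g[2:] - 2*gi + g[:-2])/dx**2 + (g[2:] - g[:-2])/(2*dx*xi)
        f[1:-1] += eps*(lap_f - n**2*fi*(ai - 1)**2/xi**2
                        - b_phi*(fi**2 - 1)*fi - 2*b_ps*gi**2*fi)
        a[1:-1] += eps*(lap_a - 2*fi**2*(ai - 1))
        g[1:-1] += eps*(lap_g - gamma*gi - 2*b_ps*(fi**2 - 1)*gi - b_sig*gi**3)
        f[0] = a[0] = 0.0                            # regularity at the core
        g[0] = g[1]                                  # Neumann condition dg/dx = 0
        f[-1] = 1.0 - k0(np.sqrt(2*b_phi)*x_max)     # asymptotic Bessel values
        a[-1] = 1.0 - x_max*k1(np.sqrt(2.0)*x_max)
        g[-1] = k0(np.sqrt(gamma)*x_max)
    return x, f, a, g
```

Since the scheme is a gradient flow of the string energy at fixed winding, it relaxes to a condensed profile (g ≠ 0 in the core) only in the parameter region where the condensate is energetically favoured, and to g → 0 otherwise.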
In <ref> we report on the cases with β_σ = 0.5 and 1.0 and β_φ = 0.1, 0.5 and 1.0. Superconducting strings (with field profiles as in the left-hand plot of Fig. <ref>) are obtained in the region labelled "SC". This region is separated from a neighboring region labelled "MS", in which strings are in a meta-stable state. The line separating the two regions is given by γ = 2β_φσ - √(β_φβ_σ). As discussed in <ref>, to the right of the dashed line of Fig. <ref>, extremum [B] (see <ref>) is not the global minimum of the potential. This is why a superconducting string solution in the "MS" region is meta-stable. For small β_φσ or large γ, strings can form, but no condensate can be obtained (region "NC" in Fig. <ref>). The corresponding field profiles are shown in the right-hand plot of Fig. <ref>, with g(x) suppressed to a level of order 10^-10. For large β_φσ, strings do not form (region "NS" in Fig. <ref>). In this region, one cannot obtain field profiles satisfying the boundary conditions (<ref>) in the asymptotic region. In fact, in this region, the potential energy at extremum [C] in <ref> becomes large and negative. Note that the numerical study conducted in this section of the paper, which is restricted to the case of static string configurations, reveals that there exist sharp transitions between the "NC", "SC" and "NS" regions. These transitions are represented by solid red lines. Note also that our study is not able to resolve the transition between the "SC" and "MS" regions, which instead stems from the analysis of <ref>. The expression for the boundary of the region in which a string forms but no condensate forms (region "NC"), obtained by considering the gradient energy as well as the potential energy of a superconducting string, is derived in <ref>. While the analysis in this subsection was restricted to static strings, in the following section we perform dynamical simulations of colliding strings, starting from the static superconducting string solutions in the "SC" region.

§ SIMULATION SETUP

Let us now consider the collision of two moving strings. An individual axisymmetric static vortex solution is obtained by solving <ref> to <ref> with the boundary conditions given in <ref>. We then obtain a moving string with velocity v and angle α by performing a Lorentz transformation, as explained in <ref>. These moving strings are placed in the computational domain as a superposition of the field configurations given in <ref>. A diagram depicting the initial string configuration is shown in <ref>. The coordinate origin is at the centre of the domain; it is labelled O. The strings have the same initial speed, v, and move along the x-axis. For later convenience, we define hyper-surfaces Σ_I, for I = x, y and z, as the surfaces perpendicular to the I-axis. The collision angle α is defined as the angle between the two colliding strings projected onto the surface Σ_x at x = 0. As seen along the x-axis, towards positive values of x, the initial configuration and the angle α are shown in <ref>. We define parallel and anti-parallel strings as the vortex solutions given in <ref> to <ref> with sign(n) = sign(K) and sign(n) = -sign(K), respectively. In other words, a parallel (p) string is a string such that the direction of the magnetic flux, B := ∇×A, and the direction of the conserved current, j^σ, are parallel to each other, while for an anti-parallel (a) string they are anti-parallel to each other. We can consider three combinations: (p,p), (p,a) and (a,a), as shown in <ref>. The solid arrow indicates the direction of B, and the triple arrow that of j^σ, defined in <ref>.
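For concreteness, the following sketch shows how a single boosted, tilted string can be evaluated on the grid from the interpolated static radial profile. The precise superposition of the two strings used in the simulations is the one described in <ref>; here, `profile` and all variable names are illustrative assumptions.

```python
import numpy as np

def boosted_string(profile, v, alpha, t, X, Y, Z, sign=+1, n=1, K=0.2):
    """Static vortex profile boosted to speed v along x and rotated by
    sign*alpha/2 about the x-axis (units with c = 1, lengths in 1/(e*eta)).
    `profile(r)` is assumed to return the interpolated radial functions
    (f, alpha_gauge, g) of the static solution."""
    gam = 1.0/np.sqrt(1.0 - v**2)
    xb = gam*(X - v*t)                   # Lorentz-contracted coordinate
    c, s = np.cos(sign*alpha/2.0), np.sin(sign*alpha/2.0)
    yb = c*Y + s*Z                       # transverse coordinate after rotation
    zb = -s*Y + c*Z                      # coordinate along the string axis
    r = np.sqrt(xb**2 + yb**2)
    theta = np.arctan2(yb, xb)
    f, _, g = profile(r)
    phi = f*np.exp(1j*n*theta)           # string-forming field, in units of eta
    sigma = g*np.exp(1j*K*zb)            # current-carrier phase; Omega = 0
    # the gauge field A_i is boosted and rotated analogously (omitted here)
    return phi, sigma
```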
In all simulation runs, we assume β_φ = β_σ = 1 for simplicity. As mentioned in <ref>, stable superconducting strings can be obtained in a restricted region of the γ–β_φσ plane (recall that γ = K² - Ω² + μ_σ²). Hence we can fix Ω = 0 without loss of generality. We then vary the other model parameters, K, μ_σ² and β_φσ, as well as the kinematic parameters of the strings, namely the collision velocity v and the collision angle α. For each simulation, we solve the dynamical equations <ref> to <ref> in Cartesian coordinates. The strings are assumed to extend infinitely outside the computational domain. To realise this, we use <ref> as the boundary condition. We use the leap-frog method in the time domain and approximate the spatial derivatives using second-order central finite differences. For consistency with the boundary conditions, each simulation ends before the result of the collision reaches the boundaries of the computational domain. We compute the optimal size of the computational domain by considering the velocity of the strings and the collision angle; for details, see <ref>. Finally, we classify the final configuration of the strings after the collision based on the criteria introduced in <ref>.
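As an illustration of the time stepper, the following sketch advances the current-carrier field by one leap-frog step with a second-order central-difference Laplacian. The full code evolves φ and A_μ in Lorenz gauge in the same way, and the periodic edges used here stand in for the boundary condition of <ref>; fields and couplings are in the dimensionless units defined above.

```python
import numpy as np

def laplacian(F, dx):
    """Second-order central-difference Laplacian (periodic edges for brevity)."""
    return (np.roll(F, 1, 0) + np.roll(F, -1, 0)
            + np.roll(F, 1, 1) + np.roll(F, -1, 1)
            + np.roll(F, 1, 2) + np.roll(F, -1, 2) - 6.0*F)/dx**2

def leapfrog_step(sigma, pi, phi, dt, dx, b_ps, b_sig, mu2):
    """One leap-frog (kick-drift) update of sigma; pi is the conjugate
    momentum d(sigma)/dt, staggered by half a time step."""
    dV = (2.0*b_ps*(np.abs(phi)**2 - 1.0) + mu2
          + b_sig*np.abs(sigma)**2)*sigma          # dV/dsigma*
    pi = pi + dt*(laplacian(sigma, dx) - dV)       # kick
    sigma = sigma + dt*pi                          # drift
    return sigma, pi
```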
§ FINAL STATES OF COLLIDING STRINGS

§.§ Regular intercommutation

Let us first illustrate the familiar intercommutation process. We simulate a (p,p) pair with model parameters (β_φσ,μ_σ²,K²) = (0.45,0.01,0.04), and v/c = 0.5, α = 0.2π. In <ref>, the isosurface defined by |φ| = η/2 at t = 0, 17.5/(eη), 35/(eη) and 70/(eη) is shown in the first four 3D surface plots, while the isosurface defined by |j^σ_i| = 0.01η/e at t = 70/(eη) is shown in the fifth 3D surface plot. Note that, unless otherwise specified, in <ref> to <ref>, the z-axis points upward in the vertical direction, the x-axis is oriented outwards and the y-axis points from left to right. Intercommutation occurs at t ≈ 17.5/(eη) (in the second panel from the left), after which the strings move away from each other. This process is familiar from the Abelian-Higgs model in the absence of σ. The fifth plot of <ref> shows that the current j^σ_μ on the reconnected strings survives the collision process. We shall discuss the evolution of the current during the collision process in <ref>.

§.§ Bound states

An interesting property of colliding strings is the possible formation of bound states ending at string Y-junctions. This phenomenon is known to appear for Type-I Abelian-Higgs strings, where the gauge coupling is stronger than the self-coupling of the scalar field; in our setup, this corresponds to β_φ < 1 and β_φσ = 0 <cit.>. We find that superconducting strings, i.e. strings with β_φσ > 0, can form bound states even when β_φ = 1. Indeed, while in the Abelian-Higgs model with β_φ = 1 no interactions between strings exist, because the forces due to the gauge interaction and the scalar interaction balance each other out, in the case of a superconducting string, and in the presence of the extra field σ, the balance of forces is modified, and bound states will form even when β_φ = 1, as long as we set a relatively small velocity and a small collision angle. An example of bound-state formation for a (p,p) pair is shown in <ref>. While the model parameters are the same as those in the previous subsection, the kinematic parameters are v/c = 0.2 and α = 0.1π. The reconnection occurs soon after the collision, but the strong coupling between the strings leads to the formation of a bound state with Y-junctions at t ≈ 75/(eη). The length of the bound state is expected to grow with time, until the string tension interrupts the development of the two Y-junctions.

§.§ Double intercommutation

In a collision process, strings may intercommute once, as already discussed, or intercommute multiple times <cit.>. If an even number of intercommutations occurs, the strings consequently "pass" through each other. This phenomenon was first reported for Abelian-Higgs strings with large velocities (v/c ≳ 0.9) and large collision angles (α ∼ π/2) <cit.>. Here we discuss double intercommutation, as intercommutations occurring more than twice are rare. In <ref>, we display the case of a (p,p) pair with (β_φσ,μ_σ²,K²) = (0.49,0.01,0.04), v/c = 0.5 and α = 0.14π. In plots (a)-(d), the 3D surface plots are taken at t = 0, 32.5/(eη), 42.5/(eη) and 70/(eη), respectively. The strings collide and intercommute in plot (b), similarly to what is shown in plot (c) of <ref>. Then, they intercommute once more in plot (c) and "pass" through one another; see plot (d). By looking at the locations of the string endpoints at the top and bottom boundaries of the computational domain, one can confirm that the strings intercommute twice and "pass" through each other. Note that the currents survive the double intercommutation process, as shown in (e).

§.§ Expanding bubble

Finally, superconducting string collisions can result in the nucleation of a bubble. As mentioned in <ref> and <ref>, static strings for which the parameters of the effective potential are in the "MS" region of <ref> are unstable. Here, we consider a (p,p) pair with (β_φσ,μ_σ²,K²) = (0.51,0.01,0.04), i.e. γ = 0.05, v/c = 0.6 and α = 0.22π. Although this set of parameters belongs to the "SC" region of <ref>, the dynamical simulation reveals that the string configuration is broken after the collision, as shown in <ref>. The strings collide at t = 15/(eη) in plot (a) and intercommute in plot (b). The reconnected segments then come into contact once more at t = 32.5/(eη) in plot (c). The reconnected region then expands, see plot (d). In plot (e), we show the corresponding current amplitude at t = 32.5/(eη); note that the currents merge into a single current. As pointed out in <ref>, a stable vortex solution cannot be obtained if the vacuum [C] is energetically favoured (see <ref>). Indeed, in such a case, the volume in the interior of the string where the fields lie at [C] grows with time, so that the string configuration is broken, as shown in the right plot of <ref>. According to <ref>, the case with β_φσ = 0.51 belongs to the "NS" region. Our numerical result suggests that the colliding string has, locally, entered this region.

§.§ Current on strings after collision

To quantify the magnitude of the current of σ on the strings after the collision, we define the current evaluated on the hypersurface Σ_I, for I = x, y, z, defined in <ref>, as

J_I := ∫ j^σ_μ dΣ^μ_I,

where j^σ_μ is the conserved current defined in <ref> and dΣ^μ_I is the surface element of the hypersurface Σ_I. For a (p,p) pair with (β_φσ,μ_σ²,K²) = (0.49,0.01,0.04), the initial currents on the strings flow in the positive direction across Σ_z, with magnitude J_z(0) ≈ 2.4η/e. There is no current across Σ_x, and the currents of the two strings flow in opposite directions with the same magnitude across Σ_y, so that the net currents J_x = J_y = 0.
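On the lattice, J_I reduces to a sum of the discretized current over the corresponding mid-plane. A minimal sketch for Σ_z, with illustrative array names, is:

```python
import numpy as np

def current_Jz(sigma, dx, kz):
    """Net current through the plane z = z[kz]: the sum of
    j^sigma_z = 2 Im(sigma^* d_z sigma) over the slice, with a
    central difference for the z-derivative."""
    dsig_dz = (sigma[:, :, kz + 1] - sigma[:, :, kz - 1])/(2.0*dx)
    jz = 2.0*np.imag(np.conj(sigma[:, :, kz])*dsig_dz)
    return jz.sum()*dx*dx    # surface element dx*dy
```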
In <ref>, we show the time evolution of the current J_z(t) for α = 0.02π (red), 0.14π (green), 0.30π (blue) and 0.46π (magenta), with v/c = 0.1, for a (p,p) pair. The final configurations for α = 0.02π, 0.14π and 0.30π are bound states, while the choice α = 0.46π results in regular intercommutation. We find that, after the collision, the current on the strings fluctuates significantly and can sometimes take on negative values. We can confirm that finite box size effects do not affect these findings. However, we cannot confirm whether the current is sustained in the final state at large t in the present numerical setup. To confirm this, one would need a longer simulation with a larger simulation box.

§ CLASSIFICATION AND PHASE DIAGRAMS

§.§ Classification

There is a total of five possible outcomes for superconducting string collisions. A summary of these final states is shown in <ref>. The first four states, (a) to (d), were discussed in the previous sections. The last one, state (e), is one for which the outcome of the collision is an indeterminate final string state, which, for longer simulation times, may evolve into either (c) or (d). Let us now introduce criteria with which we can classify the final states of colliding strings, using the simulated end states φ(t_end,𝐱) and σ(t_end,𝐱) at time t_end. Our code computes, at t_end, the surface areas for which |φ| < η/2 on Σ_x, Σ_y and Σ_z, see <ref>. These surface areas are denoted by S_x, S_y and S_z, respectively[Note that the origin of the coordinate system is at the centre of the computational box and that we set up the simulation such that the collision occurs at that particular location.]. If the strings undergo regular intercommutation, S_x ≠ 0, S_y = 0 and S_z ≠ 0 for all α. Indeed, since the strings have velocities along the x-axis and momentum is conserved after the collision, the strings must cross Σ_x, while it is the intercommutation process that results in S_y = 0 and S_z ≠ 0 for this particular collision configuration. This is confirmed by Fig. <ref>. If the strings undergo double intercommutation, they "pass" through each other, so that the final string configuration is identical to the starting configuration; in this case, S_x = 0, see plot (b) in Fig. <ref>. If S_x, S_y and S_z are all non-zero, the strings either create a bound state, or become unstable with the volume for which |φ| < η/2 growing with time. If the strings form a bound state, the final configuration is parallel to the z-axis, and hence S_z should be much smaller than S_x and S_y (plot (c) in <ref>). Instead, if S_x, S_y and S_z are comparable, the final state is like plot (d) or (e) in <ref>, and thus corresponds to bubble nucleation or an indeterminate string state. To distinguish between (c) and (d) or (e), we introduce an ellipticity parameter defined as

e_z ≡ S_z^{-1}/(S_x^{-2} + S_y^{-2} + S_z^{-2})^{1/2}.

If the strings are bound along the z-axis, e_z ∼ 1. In what follows, we consider the final string state to be a bound state when e_z ≥ e_z,crit = 0.9. As mentioned before, because the simulation time is limited, some end configurations at t_end (see plot (e) of <ref>) are not in their definitive state and could evolve into either (c) or (d) for longer simulation times. These strings are in an indeterminate state at t_end. To decide whether a final string configuration is in state (d) or (e), we introduce the effective volume V = √(S_x S_y S_z), and consider final states with e_z < e_z,crit and V ≥ V_crit = 3×10⁴ η⁻³ to be expanding bubbles, while if V < V_crit, we consider them to be indeterminate string states. The classification of the final states is summarized in <ref>. In the following, we use, throughout, the critical values e_z,crit = 0.9 and V_crit = 3×10⁴ η⁻³ mentioned above. These criteria, while they have no rigorous physical motivation, are useful for the classification of string end states into types (a) to (e); note that slight changes in these parameters do not significantly affect our findings.
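The resulting decision rule is compact enough to state directly; a minimal sketch follows (in practice, "zero" means below a small numerical threshold on the lattice):

```python
import numpy as np

def classify_final_state(S_x, S_y, S_z, e_crit=0.9, V_crit=3.0e4):
    """Classify the final state from the areas S_I (in units of 1/eta^2)
    of the |phi| < eta/2 regions on the three mid-planes at t_end."""
    if S_x > 0.0 and S_y == 0.0:                  # strings cross Sigma_x only
        return "(a) regular intercommutation"
    if S_x == 0.0:                                # strings passed through
        return "(b) double intercommutation"
    e_z = (1.0/S_z)/np.sqrt(S_x**-2 + S_y**-2 + S_z**-2)
    if e_z >= e_crit:                             # segment aligned with z
        return "(c) bound state"
    V = np.sqrt(S_x*S_y*S_z)                      # effective volume, 1/eta^3
    return "(d) expanding bubble" if V >= V_crit else "(e) indeterminate"
```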
Note that we do not discuss whether currents remain on the strings after the collision. In the previous section, we saw cases for which the current disappears after the collision, but we believe this phenomenon to be transient, with a non-zero current re-established in a simulation of increased duration and with a larger computational domain. While in principle the final state of the current is of interest, the current on the strings after the collision continues to fluctuate significantly in our simulations, see <ref>. For this reason, in what follows, we do not report on the current's final configuration. Finally, we would like to point out that bound states may also be transient and may disappear under the effect of the string tension for longer simulation times, particularly in cases where the bound state is formed on the edge of its region of viability in parameter space.

§.§ Phase diagram

In this section, we report on the final states resulting from the collision of superconducting strings in the (v,α) plane. We consider six sets of model parameters, listed in <ref>. In Model I, the parameters are the same as the ones used in <ref>. In Model II, β_φσ is taken to be slightly larger than in Model I. In Models III and IV, we set K² to be respectively smaller and larger than in Models I and II. In Model V, β_φσ is smaller than in Model III. Finally, in Model VI, K² and μ_σ² are interchanged with respect to Model IV, while keeping γ = 0.08. We vary the collision velocity v/c from 0.1 to 0.9 in steps Δ(v/c) = 0.05, and the collision angle α (see <ref> for the definition of α) from 0.02π to 0.46π in steps Δα = 0.04π. In Models I and II, we also consider angles α in the range 0.58π ≤ α ≤ 0.98π, with Δα = 0.08π. We classify the final configurations according to <ref>.

§.§.§ Model I

The phase diagram for Model I is shown in <ref> for (p,p)- and (p,a)-type string pairs (see <ref> for details). Strings with v/c ≲ 0.5 and α ≲ 0.3π form bound states. This is similar to the outcome of collisions between Type-I Abelian-Higgs strings with e² > λ/2 <cit.>. Strings with v/c ∼ 0.5 and α ∼ 0.2π pass through each other via double intercommutation. This behaviour is observed for Type-I/II Abelian-Higgs strings only at large velocities <cit.>. Note that while bound states form in a wide range of velocities and angles, double intercommutation occurs only in a restricted region of the kinematic parameter space. As is well known, in a bound state, Y-junctions may form <cit.>. At such junctions, the string tension in the two outer strings may unbind the bound string state. Whether this unbinding is observed or not depends on the duration of the simulation.
For this reason, the border between the regions of regular intercommutation and bound states comprises some indeterminate string states. One also finds that collisions with large angles, α > π/2, result in regular intercommutation for all values of the velocity. If the collision angle is large, the curvature of the outgoing strings around the impact point is significant, and the string velocity near the impact point is large. For this reason, in such a configuration, no bound state can be formed and no double intercommutation can occur. Note that the phase diagram for (a,a)-type strings is identical to the one for (p,p)-type strings in <ref>. This means that while the relative orientation of the currents affects the outcome of superconducting string collisions, the relative orientation of the current and the winding is not essential. This also holds true for Models II to VI.

§.§.§ Model II

Let us now consider the collisions of strings whose parameters belong to the region of Fig. <ref> labelled "MS". In this region, the extremum [C] in <ref> becomes the global minimum (see the lower plot). Strings in this region of parameter space are dynamically unstable, which means that collisions of such strings can lead to expanding bubbles around the impact point (see <ref>). This is confirmed by the results of the dynamical simulations, see <ref>, which demonstrate that the collisions of strings belonging to Model II form bubbles in a large part of the kinematic phase space when α ≲ π/2, while they intercommute when α ≳ π/2. As can be seen from <ref>, strings with γ ≥ max(0, 2β_φσ - √(β_φβ_σ)) for values β_φσ ≤ 0.5 belong to region "SC". In a hypothetical bound state resulting from the collision of two Model II strings, the total current is given by 2K cos(α/2). Given the constraint on γ given above and its definition (γ = K² + μ_σ²), one can deduce that the collision of strings in region "MS" can result in stable bound states for small angles α. In addition, the interaction time will scale as the ratio of the string thickness, d, and the velocity of the incident strings, taken to be v. That is to say, the interaction time, and hence the probability of forming a bound state, will be greater for small velocities.
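This kinematic consistency check, namely whether a straight bound segment carrying the combined current would itself lie in the "SC" region, is simple to evaluate. A sketch follows, under the assumption that the bound segment can be treated as a static string with current parameter K_b = 2K cos(α/2); this is a rough check, not a dynamical statement.

```python
import numpy as np

def bound_state_in_SC(K, mu2, alpha, b_ps, b_phi=1.0, b_sig=1.0):
    """Test whether a hypothetical straight bound state with combined
    current K_b = 2 K cos(alpha/2) satisfies the static 'SC' conditions
    max(0, 2*b_ps - sqrt(b_phi*b_sig)) < gamma_b < 2*b_ps, with
    gamma_b = K_b^2 + mu_sigma^2 (Omega = 0 throughout)."""
    K_b = 2.0*K*np.cos(alpha/2.0)
    gamma_b = K_b**2 + mu2
    lower = max(0.0, 2.0*b_ps - np.sqrt(b_phi*b_sig))
    return lower < gamma_b < 2.0*b_ps

# e.g. Model I parameters: the current doubles on the segment as alpha -> 0
print(bound_state_in_SC(K=0.2, mu2=0.01, alpha=0.1*np.pi, b_ps=0.45))  # True
```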
It is worth noting that the region in the (v/c, α/π) plane in which the result of the collision of Model II strings is an unstable string state is larger for larger values of β_φσ. This can be seen in <ref> and from the constraint on γ given in the preceding paragraph, and it is confirmed by a set of additional simulations (not shown) with 0.50 ≤ β_φσ ≤ 0.51. In these simulations, the region of parameter space in which a collision results in the appearance of an unstable string state (in the form of an expanding bubble) is larger for large values of β_φσ, while it shrinks considerably for β_φσ ≲ 0.50.

§.§.§ Models III & IV

In going from Model III to Model IV, the squared amplitude of the current in the two colliding strings, K², is increased from 0.02 to 0.07. The corresponding phase diagrams are shown in <ref> and <ref>, respectively, and are similar to those of Model I, see <ref>. In particular, strings intercommute in the usual way for all α > π/2. For this reason, in <ref> and <ref>, the phase plots are shown for the reduced range 0 ≤ α ≤ π/2 rather than for the full range 0 to π. The similarity of these phase plots suggests that the initial current K does not play an essential role in the outcome of the collision as long as β_φσ < 0.5. This applies especially to the regions in which bound states form and regular intercommutation occurs. We refer the reader to the discussion in <ref> for the case β_φσ > 0.5. Note that the region of double intercommutation is smaller when the amplitude of the current in the colliding strings is larger; this is visible by comparing the results for Models I and IV with those of Model III. One can conclude that larger currents hinder the occurrence of double intercommutation and instead favour regular intercommutation and/or the formation of bound states.

§.§.§ Model V

Let us now study the influence of β_φσ, the coupling constant between φ and σ. To do so, we compare the results of Model III in <ref> with those of Model V in <ref>. In going from Model III to Model V, β_φσ is decreased from 0.49 to 0.40, while all other parameters are kept identical. We find that bound states form less frequently for the smaller β_φσ (Model V). This result is reasonable: if we consider the limit β_φσ → 0, the strings are similar to critical Abelian-Higgs strings, which do not form bound states. While in comparing Models I and III we found that the size of the region in which bound states form is largely insensitive to K, here we find, by comparing Models III and V, that its size has a clear dependence on β_φσ. We thus conclude that the parameter which determines whether bound states can form is the coupling β_φσ.

§.§.§ Model VI

Finally, let us consider a case with large μ_σ², the bare mass of σ. Note that Model IV (<ref>) and Model VI both have γ = 0.08, with the values of K and μ_σ interchanged. As explained in <ref>, the stability of a straight and static string is determined by two parameters, β_φσ and γ = K² - Ω² + μ_σ² (recall that Ω = 0 in our simulations). While the individual values of μ_σ² and K² do not play an essential role in the analysis of static strings, we find, by comparing the results of Model IV and Model VI, that the dynamical process does depend on μ_σ² itself. The phase diagram of Model VI is shown in <ref>. We find a significant decrease in the size of the region of kinematic parameter space in which bound states can form in Model VI, in comparison to Model IV. To explain this difference, we note, from <ref>, that the effective mass of σ in the core of the string is bounded from below by μ_σ. Correspondingly, the length scale over which two strings interact, which is roughly given by the inverse of the effective mass, is bounded from above by μ_σ^-1.

§ CONCLUSION

In this work, we studied the collision process of two superconducting strings using field-theoretic simulations. We considered a complex scalar field φ(𝐱,t) with its associated gauge field A_μ(𝐱,t) and another complex scalar field σ(𝐱,t). This model has U_local(1) × U_global(1) symmetry, and the breaking of the U_local(1) symmetry leads to the formation of cosmic strings. We first considered an Abrikosov-Nielsen-Olesen vortex solution on which the current-carrier σ condenses. We solved the field equations numerically in order to obtain the field configuration and determine the region of parameter space in which strings are viable. The parameter space considered is the (β_φσ,γ) plane, where β_φσ = λ_φσ/(2e²) is the coupling constant between φ and σ, and γ = K² - Ω² + μ_σ² is the effective mass squared of σ condensed on the string. We then considered two straight current-carrying strings in a collision process. From the static string solutions, we performed a Lorentz transformation in order to obtain moving strings with velocity v and relative angle α.
In a three-dimensional computational domain, we then performed dynamical simulations of the collision process. Using a set of criteria defined in <ref>, we classified the final string configuration after the collision. The results are summarised in the phase diagrams of <ref>–<ref> for the six sets of string parameters defined in <ref>. While most colliding strings do intercommute in the same manner as in the Abelian-Higgs model in parts of the string parameter space and collision configuration space, we also found a variety of other final states depending on the model parameters and on the kinematics of the simulations. With a relatively small velocity, v/c ≲ 0.5, and a small angle, α ≲ 0.3π, the colliding strings form a bound state; this is consistent with what can be observed in Type-I Abelian-Higgs strings. We also observe that two strings can pass through one another in a double intercommutation mechanism. This phenomenon is also observed in the collision of Type-I/II Abelian-Higgs strings at speeds close to c. Furthermore, two strings with a relatively large β_φσ form an expanding bubble after the collision, and the string configuration breaks down. In the bubble's interior, φ and σ lie at the extremum labelled [C], defined in <ref>. Once a small segment of string around the impact point enters the string instability region in the (β_φσ, γ) plane, it expands and tends to fill the entire 3D computational domain. This phenomenon can be observed only if β_φσ is larger than its critical value.

In summary, our numerical studies demonstrate that there exists a substantial variety of final string configurations in a dynamical current-carrying string collision process. This suggests that in a cosmological context, the evolution of a network of superconducting strings should be more complicated than that of Abelian-Higgs strings. It is therefore non-trivial that a superconducting string network would follow scaling laws similar to what is found in the Abelian-Higgs model <cit.>. This being said, should the superconducting string network indeed follow a scaling law, the number density of superconducting strings may differ from the one found in the Abelian-Higgs case due to the formation of bound states and double intercommutation. This could leave a distinct characteristic imprint on the cosmic microwave background and gravitational wave background. Furthermore, if standard model particles or dark matter particles condense on strings, the resulting superconducting strings could generate fast radio bursts <cit.> and could also be at the origin of gamma-ray bursts <cit.>.

Let us finally note that in the present study, the scalar field that condenses onto the string induces a long-range force, which enhances interactions with distant strings. The dynamics of the strings would change were the scalar field gauged, since its effect would then be confined around the strings <cit.>. We leave this type of superconducting string for a future study.

This project was launched based on discussions with Danièle A. Steer, and we thank her for the useful discussions that followed. This work was supported by JSPS KAKENHI Grant Numbers 16K17695, 21K03559, 23H00110 (T. H.) and 17K14304, 19H01891, 22K03627 (D. Y.).

§ NOTATIONS

We use the same model as in Peter's paper <cit.>, but follow a different notation.
The correspondence between the variables and parameters of that work and ours is as follows:

φ = Φ^Peter/√(2), σ = Σ^Peter/√(2), A_μ = -B_μ^Peter/√(4π), e = √(4π) q^Peter, η = η^Peter/√(2), m_σ = √(2) m_σ^Peter, λ_φ = 2λ_ϕ^Peter, λ_φσ = 4f^Peter, λ_σ = 4λ_σ^Peter.

The dimensionless parameters α_i (i=1,2,3) and w, q defined in Ref. <cit.> can be expressed in our notation as

α_1 = m_σ^2/(λ_σ η^2) = μ_σ^2/β_σ, α_2 = λ_φσ m_σ^2/(2λ_φ λ_σ η^2) = β_φσ μ_σ^2/(2β_φ β_σ), α_3 = m_σ^4/(2λ_φ λ_σ η^4) = μ_σ^4/(2β_φ β_σ), q^2 = 1/(4πβ_φ), w = μ_σ^2 W/(2β_φ β_σ),

and we can rewrite β_φ, β_φσ, β_σ and μ_σ^2 in terms of α_i and q. In Ref. <cit.>, the authors used two parameter sets,

1) α_1 = 1.68×10^-2, α_2 = 5.26×10^-3, α_3 = 5.26×10^-4, 4πq^2 = 0.1,
2) α_1 = 2×10^-3, α_2 = 5×10^-4, α_3 = 2×10^-6, 4πq^2 = 0.1.

These respectively correspond to

1) β_φ = 10.0, β_φσ = 6.26, β_σ = 37.3, μ_σ = 0.791,
2) β_φ = 10.0, β_φσ = 5.0, β_σ = 10.0, μ_σ = 0.141.

From Fig. 5 of Ref. <cit.>, we can read off w ∼ 10^-3 for the first parameter set, called the 'moderate' case in the main text. This corresponds to W ≈ 1.19 (w/10^-3).

§ VIABLE PARAMETER REGION FOR A SUPERCONDUCTING STRING

In <ref>, we explored the viable parameter region for a superconducting string in (β_φσ, γ) space. In the left panel of <ref>, we vary β_φ as β_φ = 1.0 (red), 0.5 (green) and 0.1 (blue). Basically, as β_φ decreases or β_σ decreases, the viable region ('SC' region in <ref>) moves toward the left to satisfy the positivity of Eq. (<ref>). We find that the boundary of the left-hand region (“NC” region in <ref>) is insensitive to β_σ. We discuss this point in <ref>.

§ DETERMINING THE BOUNDARY OF THE “NC” REGION

We shall analytically derive the boundary of the “NC” region in <ref>. In <ref>, we considered the condition for the current-carrier to condense on a string based on the shape of the potential V_eff. However, the actual string configuration also depends strongly on the gradient energy. We therefore derive the condition taking the gradient energy into account by approximating the field configuration.

In the case with β_φ = 1 and winding number n = 1, we find that

f(x) = tanh(x/d), α(x) = tanh^2(x/d), g(x) = g_0/cosh(x/d)

approximate the numerical solutions in the right panel of <ref> with d ≈ 1.2 and g_0 ≈ 0.5. For β_σ = 0.5, as in the right panel of <ref>, d changes by only 10%, so these ansätze also work in this case.

Plugging these ansätze with β_φ = n = 1 into <ref>, the Lagrangian becomes

-ℒ/(e^2η^4) = x/(2d^2) [ 2 + d^2 + d^2 β_σ g_0^4 + g_0^2(-1 + d^2(-4β_φσ + γ)) + g_0^2(1 + d^2 γ) cosh(2x/d) ] sech^4(x/d) + (1 + 2/d^2)(1/x) sech^4(x/d) tanh^2(x/d).

The tension of a superconducting string, i.e. the energy per unit length, is given by integrating the Lagrangian over x ∈ [0, ∞). The integral of the last term cannot be expressed in terms of elementary functions and gives only a constant contribution to μ. We then obtain the tension of a superconducting string,

μ = 2π ∫ ℒ x dx = 2π ( c + a_2 g_0^2 + a_4 g_0^4 ),

where c is a constant and

a_2 = (π/3) ( 1 + 2 ln 2 - 2d^2 ((4 ln 2 - 1)β_φσ - 3γ ln 2) ),
a_4 = (π/6) (4 ln 2 - 1) β_σ d^2 > 0.

Requiring a minimum of the tension to exist, we obtain the condition a_2 < 0, which yields

2d^2 β_φσ (4 ln 2 - 1) - (2 ln 2 + 1) > 6d^2 γ ln 2.

This condition with d = 1.2 reproduces the boundary of the 'NC' region in <ref> well. Too small a β_φσ cannot satisfy this condition, preventing the condensation of the current-carrier on the string, namely g_0 = 0. Clearly, the simplest analysis taking into account only the potential shape discussed in <ref> is not enough to explain the location of the boundary: the condensation is an outcome of the balance between the potential energy and the gradient energy. In addition, this condition is independent of β_σ, whose effect appears only through the weak dependence of d. This is consistent with the numerical result in <ref>, where the location of the boundary is insensitive to β_σ. A numerical evaluation of this condition is sketched below.
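For reference, the condition above can be evaluated numerically. The following sketch (our own illustration, not code from the original analysis) returns the critical γ below which the current-carrier can condense, for a given β_φσ and the fitted width d ≈ 1.2.

```python
import numpy as np

def critical_gamma(beta_phisigma: float, d: float = 1.2) -> float:
    """Critical gamma from the condition a_2 < 0, i.e.
    2 d^2 beta_phisigma (4 ln2 - 1) - (2 ln2 + 1) > 6 d^2 gamma ln2.
    Condensation (g_0 > 0) requires gamma below the returned value."""
    ln2 = np.log(2.0)
    return (2 * d**2 * beta_phisigma * (4 * ln2 - 1) - (2 * ln2 + 1)) / (6 * d**2 * ln2)

# Trace out the analytic 'NC' boundary for a few couplings; for small
# beta_phisigma the critical gamma is negative, i.e. no condensation.
for b in [0.3, 0.4, 0.49, 0.6]:
    print(f"beta_phisigma = {b:.2f}: condensation requires gamma < {critical_gamma(b):+.3f}")
```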
This analysis can be applied to the case β_φσ ≲ 1. From the numerical analysis with β_φσ ≳ 1, we find that the boundary deviates from the condition <ref>, implying that the details of the field configuration become important for the condensation.

§ MOVING STRINGS AND FIELD SUPERPOSITION

We construct a moving string by taking the Lorentz boost of the 1-vortex solution given in <ref>. We consider a string rotated by θ_x around the x-axis, where θ_x = 0 corresponds to a string along the z-axis. The string moves with velocity v along the x-axis. Suppose the scalar fields and the gauge field in the static frame of the string are given as {φ(x), A_μ(x), σ(x)}. Those in the observer frame, where the string is rotated around the x-axis and Lorentz-boosted, {φ'(x'), A'_μ(x'), σ'(x')}, are given by

φ'(x') = φ(x), A'_μ(x') = G^ν_μ(-β; -θ_x) A_ν(x), σ'(x') = σ(x),

with β = v/c and

x'^μ = G^μ_ν(β; θ_x) x^ν,

with the matrix G^μ_ν defined as

G(β; θ_x) ≡ Λ_x(β) R_x(θ_x),

where, with rows separated by semicolons,

Λ_x(β) = [ 1/√(1-β^2)  β/√(1-β^2)  0  0 ; β/√(1-β^2)  1/√(1-β^2)  0  0 ; 0  0  1  0 ; 0  0  0  1 ],

R_x(θ_x) = [ 1  0  0  0 ; 0  1  0  0 ; 0  0  cosθ_x  -sinθ_x ; 0  0  sinθ_x  cosθ_x ].

The field theory's computational domain is defined in the observer frame, x'. Let us now drop the prime and from here on denote x as the observer-frame coordinate. At time t = 0, the 2-vortex solution in the observer frame is given as the superposition of the two 1-vortex solutions <cit.>,

φ(x) = (1/η) φ^(1)(x - x^(1)_0) φ^(2)(x - x^(2)_0),
A_μ(x) = A_μ^(1)(x - x^(1)_0) + A_μ^(2)(x - x^(2)_0),
σ(x) = σ^(1)(x - x^(1)_0) + σ^(2)(x - x^(2)_0),

where the superscript (i) labels each moving vortex. The superposition of fields with different Lorentz transformations is justified by the fact that the Lorentz gauge condition is, of course, Lorentz invariant. At the initial time, vortex (1) and vortex (2) are sufficiently distant from one another, |x^(1)_0 - x^(2)_0| = 20/(eη), so that their interactions can be neglected. A numerical sketch of this construction is given below.
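To make the construction concrete, here is a minimal numpy sketch of the boost-and-superposition step. The matrix entries follow the definitions above; the helper names and the normalization argument are our own illustrative choices, with the numerically obtained 1-vortex profiles assumed as input.

```python
import numpy as np

def boost_rotation(beta: float, theta_x: float) -> np.ndarray:
    """G(beta; theta_x) = Lambda_x(beta) R_x(theta_x): rotation about the
    x-axis followed by a Lorentz boost along the x-axis."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    Lam = np.array([[g,        g * beta, 0.0, 0.0],
                    [g * beta, g,        0.0, 0.0],
                    [0.0,      0.0,      1.0, 0.0],
                    [0.0,      0.0,      0.0, 1.0]])
    c, s = np.cos(theta_x), np.sin(theta_x)
    R = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, c,  -s ],
                  [0.0, 0.0, s,   c ]])
    return Lam @ R

def initial_fields(phi1, phi2, A1, A2, sig1, sig2, eta):
    """Superposed 2-vortex initial data on a common grid:
    phi multiplicatively (normalized by eta), A_mu and sigma additively."""
    return phi1 * phi2 / eta, A1 + A2, sig1 + sig2

# Each 1-vortex profile is evaluated at the static-frame point
# x = G(beta; theta_x)^{-1} x' corresponding to the observer-frame x'.
```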
§ OPTIMAL GRID SIZE

We briefly explain how we determine the size of the computational domain for the colliding strings. The box size depends on the collision velocity v, the collision angle α, and how long we want to follow the time evolution of the strings after the collision. If we require that strings with a small angle α be sufficiently far apart from each other at the top and bottom of the computational domain, one can imagine that the box is elongated in the z-direction. If the velocity v is large, the strings also need sufficient volume in the x-direction, as they will travel a long distance in the x-direction. Also, after the collision, the aftermath travels along the strings away from the collision point. The simulation must be completed before this aftermath reaches the edge of the computational domain. Therefore, fixing the simulation time determines the size of the entire computational domain.

In <ref>, we show the 3D computational domain. The strings are placed on planes parallel to the Σ_x plane at x = ±d. The length of the box along the x-axis, L_x, is given as

L_x = max(2v t_end - 2d, 2d) + 2m_x,

where m_x is a constant margin and t_end is the simulation time. Here we assume that the aftermath of the impact propagates along the string at the speed of light. Then the simulation time t_end is related to the velocity v and the propagation distance of the aftermath ℓ_s0 as

t_end = d/v + ℓ_s0/c.

If the length of the box along the z-axis is L_z and the minimum separation of the strings at z = ±L_z/2 is s_min, we obtain the relation

L_z tan(α/2) > s_min.

Note that if s_min is too small, the collision between the strings can suddenly occur across the entire computational domain. The aftermath of the collision propagates along the strings. Its extent along the z-axis should be less than L_z,

L_z > 2ℓ_s0 cos(α/2).

Combining <ref> and <ref>, we determine L_z from

L_z = max{ s_min/tan(α/2), 2ℓ_s0 cos(α/2) }.

Finally, the length of the box along the y-axis is given by

L_y = L_z tan(α/2) + 2m_y,

where m_y is a constant margin. Throughout this paper, we set m_x = 15/(eη), m_y = 15/(eη), s_min = 10/(eη) and ℓ_s0 = 100/(eη). For cases whose final configuration cannot be clearly judged, we set ℓ_s0 = 300/(eη) to run the simulations for a longer time. From this procedure, the typical number of grid points is 202 ≤ N_x ≤ 482, 162 ≤ N_y ≤ 914, 302 ≤ N_z ≤ 1274, with spatial interval Δx = 0.25/(eη). These rules are collected into a single helper below.
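This is our own sketch, in units eη = 1, with the margins and length parameters set to the values quoted above.

```python
import numpy as np

def box_size(v, alpha, d, m_x=15.0, m_y=15.0, s_min=10.0, ell_s0=100.0, c=1.0):
    """Computational-domain dimensions (L_x, L_y, L_z) for a collision with
    velocity v, angle alpha, and string planes at x = +/- d, in units 1/(e*eta)."""
    t_end = d / v + ell_s0 / c                       # simulation time
    L_x = max(2 * v * t_end - 2 * d, 2 * d) + 2 * m_x
    L_z = max(s_min / np.tan(alpha / 2), 2 * ell_s0 * np.cos(alpha / 2))
    L_y = L_z * np.tan(alpha / 2) + 2 * m_y
    return L_x, L_y, L_z

# Example: a slow, small-angle collision needs a long box along z.
print(box_size(v=0.2, alpha=0.2 * np.pi, d=10.0))
```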
http://arxiv.org/abs/2312.16091v1
{ "authors": [ "Takashi Hiramatsu", "Marc Lilley", "Daisuke Yamauchi" ], "categories": [ "hep-ph", "astro-ph.CO", "hep-th" ], "primary_category": "hep-ph", "published": "20231226152934", "title": "Dynamical simulations of colliding superconducting strings" }
Towards Robust Multimodal Prompting With Missing Modalities

Jaehyuk Jang, Yooseung Wang and Changick Kim

Recently, multimodal prompting, which introduces learnable missing-aware prompts for all missing modality cases, has exhibited impressive performance. However, it encounters two critical issues: 1) The number of prompts grows exponentially as the number of modalities increases; and 2) It lacks robustness in scenarios with different missing modality settings between training and inference. In this paper, we propose a simple yet effective prompt design to address these challenges. Instead of using missing-aware prompts, we utilize prompts as modality-specific tokens, enabling them to capture the unique characteristics of each modality. Furthermore, our prompt design leverages orthogonality between prompts as a key element to learn distinct information across different modalities and promote diversity in the learned representations. Extensive experiments demonstrate that our prompt design enhances both performance and robustness while reducing the number of prompts.

Index Terms: Multimodal, Missing modality, Prompt learning

§ INTRODUCTION

Advances in sensing technology allow us to acquire a variety of signals, including images, text, heat maps, depth maps, and more. Various methods have emerged for multimodal downstream tasks, such as classification <cit.>, action recognition <cit.>, and emotion recognition <cit.>. Recently, several methods <cit.> have been proposed based on the transformer <cit.> to deal with missing modality scenarios during the inference phase. Nevertheless, these works assume that the training dataset is full-modality. In real-world scenarios, users may acquire modality-incomplete data due to device constraints, regardless of the training or inference phase. Moreover, expanding the number of modalities results in high computational and memory costs, making it more challenging to train the entire model. Inspired by prompt learning <cit.>, Lee et al.
<cit.> propose multimodal prompting, which adopts learnable tokens, missing-aware prompts (MAPs), to transfer knowledge from models pretrained on large-scale datasets to downstream tasks. This method focuses solely on training MAPs and parameters associated with downstream tasks while keeping the backbone network frozen. By assigning prompts to missing modality cases, the prompts are updated while learning how to deal with each missing modality case. As a result, it can handle general scenarios where modality-incomplete data exist during the training phase, requiring less than 1% of the total parameters to finetune the pretrained model.

While MAPs are a promising solution for addressing missing modality scenarios during both training and inference, there are two notable challenges. First, the number of prompts exhibits exponential growth as the number of modalities increases. For an M-modality task, the requisite number of MAPs is 2^M - 1 to treat all missing modality scenarios, as depicted in Figure <ref>. This predicament is critical, as the advancement of artificial intelligence ensures the inexorable integration of an increasing number of modalities <cit.>. Second, it lacks robustness under missing modality settings unseen in the training phase. Unless an ample number of samples is available for all missing modality cases, the model remains incapable of addressing unseen missing modality cases that were not encountered during the training phase. For example, if we can leverage only modality-complete cases and text-only cases in the training phase, the model cannot deal with image-only cases in the inference phase.

In this paper, we propose a simple yet effective prompt design to address these issues. Note that we focus on the image-text multimodal classification task. In contrast to the previous study <cit.>, which creates MAPs for each missing modality case (image-only, text-only, and modality-complete), we develop modality-specific prompts (MSPs) for each individual modality. MSPs acquire modality-specific knowledge from the backbone pretrained on large-scale datasets. We utilize the image-specific prompt and the text-specific prompt as inputs to the multimodal transformer for image-only and text-only cases, respectively. We adopt a combination of the image-specific prompt and the text-specific prompt for modality-complete cases so that both prompts can be updated simultaneously. In this way, we train the prompts assigned to the modalities present in each sample, disregarding the missing modality case. Furthermore, we incorporate an additional loss term to enforce orthogonality among MSPs, ensuring the diversity of information learned across different modalities. Extensive experiments show that the proposed straightforward prompt design boosts the performance and significantly enhances the robustness to missing modality settings unseen in the training phase. We expect that our prompt design will serve as a valuable guiding principle for a wider range of modality tasks, paving the way for further advancements in this field.

§ METHOD

§.§ Problem Definition

We utilize a multimodal dataset labeled as D, comprising two types of modality (i.e., text and image) denoted as m_1 and m_2. The multimodal dataset is symbolized as D = {D_c, D_m_1, D_m_2}. Here, D_c = {x_m_1^i, x_m_2^i, y^i} signifies the complete set containing both modality inputs and labels, while D_m_1 = {x_m_1^j, y^j} and D_m_2 = {x_m_2^k, y^k} represent subsets with a single modality.
In accordance with prior work <cit.>, we assign meaningless values to missing-modality instances, such as empty text or an image with all pixel values equal to one. To retain the multimodal input structure, we introduce placeholder inputs, x̃_m_1 and x̃_m_2 (e.g., empty strings or pixels for text/images), for the absent modality data. This leads to D̃_m_1 = {x_m_1^j, x̃_m_2^j, y^j} and D̃_m_2 = {x̃_m_1^k, x_m_2^k, y^k}. Consequently, the multimodal dataset with missing modalities is denoted as D̃ = {D_c, D̃_m_1, D̃_m_2}. This results in datasets covering different scenarios of missing data, including complete data D_c, text-only data D̃_m_1, and image-only data D̃_m_2. The overall dataset utilized in the experiments is represented as D̃. The objective of the task is to enhance the ability to generalize in cases where modalities are missing. To tackle this, we concentrate on MAPs, as proposed in the previous study <cit.>, to strengthen the resilience of prompts when dealing with missing modalities.

§.§ Revisiting multimodal prompting

For image-text multimodality, there are three missing modality cases (i.e., image-only, text-only, and modality-complete). Lee et al. <cit.> introduce MAPs to categorize input data. Depending on the missing modality case of the input data, the corresponding prompt is selected: P_i for an image-only sample, P_t for a text-only sample, and P_c for a modality-complete sample, as depicted in Fig. <ref>. The prompt is concatenated to the feature in the multimodal transformer and updated through backpropagation in the training phase. In this way, the prompt distinguishes between all missing modality cases, and the pretrained backbone transfers knowledge to the prompt accordingly.

§.§ Prompt design

Instead of using missing-aware prompts, we design MSPs that capture the distinct characteristics of each modality. Note that P_is and P_ts are our image-specific and text-specific prompts, respectively. Each prompt learns knowledge from the pretrained backbone specific to its corresponding modality. As shown in Fig. <ref>, we select P_is and P_ts for the image-only case and the text-only case, respectively. Instead of P_c in MAPs, we do not have a prompt specific to the modality-complete case. Rather, we simply combine P_is and P_ts to handle the modality-complete case. By utilizing the element-wise summation of the two prompts, both prompts P_is and P_ts are updated when the input is a modality-complete case. Generating differentiable complete-case prompts from the given text and image prompts, rather than the random initialization used in MAPs, allows each prompt to learn more frequently while reducing the number of prompts.

Furthermore, we incorporate an additional loss term L_ortho to enforce orthogonality among MSPs. By reducing the similarity between P_is and P_ts, we maximize the gap between the two prompts to ensure the diversity of information learned from image and text. Therefore, we minimize the cosine similarity between P_ts and P_is as follows:

L_ortho = |f(P_is) · f(P_ts)| / max(‖f(P_is)‖_2 · ‖f(P_ts)‖_2, ϵ),

where f is the flatten function, |·| is the absolute value function, ‖·‖_2 is the L2-norm, and ϵ is a small value to avoid division by zero. A sketch of this computation is given below.
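The following minimal PyTorch sketch shows the two ingredients just described: the prompt selected for each missing-modality case, and the orthogonality loss. The prompt length and embedding dimension are illustrative placeholders, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class ModalitySpecificPrompts(nn.Module):
    """Two learnable prompts: P_is (image-specific) and P_ts (text-specific)."""
    def __init__(self, prompt_len: int = 16, dim: int = 768):
        super().__init__()
        self.p_img = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.p_txt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def select(self, has_image: bool, has_text: bool) -> torch.Tensor:
        # Image-only -> P_is; text-only -> P_ts; complete -> element-wise sum,
        # so both prompts receive gradients on modality-complete samples.
        if has_image and has_text:
            return self.p_img + self.p_txt
        return self.p_img if has_image else self.p_txt

    def ortho_loss(self, eps: float = 1e-8) -> torch.Tensor:
        # |<f(P_is), f(P_ts)>| / max(||f(P_is)||_2 * ||f(P_ts)||_2, eps):
        # the absolute cosine similarity of the flattened prompts.
        a, b = self.p_img.flatten(), self.p_txt.flatten()
        return a.dot(b).abs() / torch.clamp(a.norm() * b.norm(), min=eps)

prompts = ModalitySpecificPrompts()
loss_ortho = prompts.ortho_loss()  # added to the task loss with a balancing weight
```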
§.§ Training objectives

For finetuning, we freeze all parameters in the text encoder, image encoder, and multimodal transformer, and train only the MSPs and task-specific layers (pooler and classifier). We note that L_cls is a task-specific objective function (e.g., binary cross-entropy loss for movie genre classification and cross-entropy loss for food classification). Finally, the overall objective function L_total for finetuning our model can be represented as follows:

L_total = L_cls + λ L_ortho,

where λ is the balancing hyperparameter.

§ EXPERIMENTS

§.§ Datasets and evaluation metrics

MM-IMDb <cit.> is a comprehensive movie genre multi-label classification dataset that incorporates both image and text modalities. The dataset offers various multi-label movie categories, such as action, comedy, drama, thriller, and more. UPMC Food-101 <cit.> is a food category multi-class classification dataset including image and text. All images were collected from the Google Image Search engine and not post-filtered by human labor, so that each category contains about 5% noisy images. For MM-IMDb, the multi-label classification performance is evaluated using F1-Macro. For UPMC Food-101, we employ classification accuracy as the evaluation metric.

§.§ Implementation details

For a fair comparison, we followed the experimental settings of previous work <cit.>. We adopted a multimodal transformer, ViLT <cit.>, pretrained on large-scale vision-language datasets as our backbone network. For MAPs and our MSPs, we used input-level prompting, which attaches prompts to the input of the multi-head self-attention layer. For comparison, we finetuned the entire pretrained ViLT on each target dataset. In contrast, for MAPs and our MSPs, we froze the pretrained ViLT and trained only the prompts and task-specific layers, as depicted in Fig. <ref>. We set the missing rate to 70% in all our experiments, i.e., the total percentage of samples with a removed image or text modality. To consider various missing modality scenarios, we set up a total of three scenarios: two scenarios in which 70% of images or texts were removed, and a scenario in which both modalities were equally missing at 35% each. The three scenarios were used for both training and inference, as shown in Tables <ref> and <ref>. We set the balancing hyperparameter λ to 0.15 and 0.1 for the MM-IMDb and UPMC Food-101 datasets, respectively. We conducted all experiments, including ViLT, MAPs, and our method, with the same initial learning rate of 1×10^-2, weight decay of 2×10^-2, AdamW optimizer <cit.>, and batch size of 6, utilizing an NVIDIA RTX 3090 GPU.

§.§ Comparisons with the previous method

We show quantitative results of ViLT, MAPs, and our MSPs on the MM-IMDb and UPMC Food-101 datasets in Table <ref>. Compared to ViLT, MAPs demonstrate superior performance when the missing modality cases encountered during inference align with those seen during training. However, MAPs cannot cope with unseen cases where the modality composition ratios of training and inference are unbalanced. For example, consider a training set consisting of 100% image-modality and 30% text-modality samples. In this scenario, MAPs successfully train the modality-complete prompt P_c and the image-only prompt P_i, while the text-only prompt P_t remains unlearned.
Consequently, the prediction for a text-only sample is likely inaccurate due to the absence of information in P_t. For the reasons mentioned above, MAPs lack robustness when faced with missing modality cases that were not encountered during the training phase, and they even demonstrate inferior performance compared to ViLT. On the other hand, the proposed MSPs achieve robustly uniform performance improvements across various missing modality scenarios. Regardless of the missing modality case, our approach trains prompts to learn information from each modality present in the input sample, as mentioned in Section <ref>. In summary, MSPs require fewer prompts than MAPs while outperforming both ViLT and MAPs in terms of both performance and robustness.

§.§ Ablation study

As shown in Table <ref>, we conducted an ablation study on MM-IMDb to verify that our MSPs and orthogonal loss L_ortho are effective. MSPs not only yield improvements in performance but also enhance the robustness for unseen missing modality cases. Furthermore, L_ortho guides the prompts P_ts and P_is to be orthogonal and to acquire distinct information from each modality. This strategy significantly contributes to enhancing performance. In conclusion, our study demonstrates that leveraging our novel prompt design and orthogonal loss for extracting unique information from each modality improves both performance and robustness.

§ CONCLUSION

In this paper, we have proposed a simple yet effective prompt design for robustness in real-world missing modality scenarios. We introduce modality-specific prompts that learn distinct information from each modality distribution, so that they are missing-modality-case-agnostic. Prompt-based learning with an orthogonal loss function enforces orthogonality among prompts and enlarges the diversity between different modalities. Evaluations conducted on two benchmark datasets show that our method outperforms previous work in terms of performance and robustness. Given the integration of increasingly diverse multimodal signals, we firmly believe that our prompt design will open the way for multimodal finetuning with missing modalities. In the future, we will apply our technique to tasks with more modality settings, e.g., action recognition with RGB, depth, and IR as inputs.

§ ACKNOWLEDGEMENT

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00193, Development of photorealistic digital human creation and 30fps realistic rendering technology).
http://arxiv.org/abs/2312.15890v2
{ "authors": [ "Jaehyuk Jang", "Yooseung Wang", "Changick Kim" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226054355", "title": "Towards Robust Multimodal Prompting With Missing Modalities" }
Incentive-Aware Synthetic Control: Accurate Counterfactual Estimation via Incentivized Exploration

Daniel Ngo^* (University of Minnesota), Keegan Harris^* (Carnegie Mellon University), Anish Agarwal (Columbia University), Vasilis Syrgkanis (Stanford University), Zhiwei Steven Wu (Carnegie Mellon University)
^* Denotes equal contribution.

We consider the classic panel data setting in which one observes measurements of units over time, under different interventions. Our focus is on the canonical family of synthetic control methods (SCMs) which, after a pre-intervention time period when all units are under control, estimate counterfactual outcomes for test units in the post-intervention time period under control by using data from donor units who have remained under control for the entire post-intervention period. In order for the counterfactual estimate produced by synthetic control for a test unit to be accurate, there must be sufficient overlap between the outcomes of the donor units and the outcomes of the test unit. As a result, a canonical assumption in the literature on SCMs is that the outcomes for the test units lie within either the convex hull or the linear span of the outcomes for the donor units. However despite their ubiquity, such overlap assumptions may not always hold, as is the case when e.g. units select their own interventions and different subpopulations of units prefer different interventions a priori. We shed light on this typically overlooked assumption, and we address this issue by incentivizing units with different preferences to take interventions they would not normally consider. Specifically, we provide a SCM for incentivizing exploration in panel data settings which provides incentive-compatible intervention recommendations to units by leveraging tools from information design and online learning. Using our algorithm, we show how to obtain valid counterfactual estimates using SCMs without the need for an explicit overlap assumption on the unit outcomes.

§ INTRODUCTION

A ubiquitous task in statistics, machine learning, and econometrics is to estimate counterfactual outcomes for a group of units (e.g. people, geographic regions, subpopulations) under different interventions (e.g. medical treatments, weather patterns, legal regulations) over time. Such multi-dimensional data are often referred to as panel data (or longitudinal data), where the different units may be thought of as rows of a matrix, and the time-steps as columns. A prominent framework for counterfactual inference using panel data is that of synthetic control <cit.>. Synthetic control methods (SCMs) assume access to a pre-intervention time period, during which all units are under control (i.e. no treatment). After the pre-intervention time period, every unit is given exactly one intervention from a set of possible interventions (which can include the control) and remains under the intervention for the remaining time-steps (i.e. the post-intervention time period). In order to estimate unit-specific counterfactuals under control, SCMs use the pre-intervention time period to learn a model to predict the outcomes for the test unit from the outcomes of the units who remained under control (i.e. the donor units). Once the model is learned, it is then extrapolated to the post-intervention time period in order to predict the counterfactual outcome for the test unit, had they remained under control.
Since first being introduced in the field of economics over two decades ago, SCMs have become a popular tool for counterfactual inference and are routinely used across a variety of domains ranging from public policy <cit.> to big tech <cit.>. Additionally, the synthetic control framework has been extended to estimate counterfactual outcomes under different treatments, in addition to control, via the synthetic interventions framework <cit.>. In order for the counterfactual estimate produced by SCMs to be valid, it should be the case that the test unit's potential outcomes may be expressed “reasonably well” by the observed outcomes of the donor units. When providing statistical guarantees on the performance of SCMs, such intuition has traditionally been made formal by making an overlap assumption on the relationship between the donor units and the test unit, for instance of the following form:

[Unit Overlap Assumption] Denote the outcome for unit i under intervention d at time t by y_i,t^(d) ∈ ℝ. For a given unit i and intervention d, there exists a set of weights ω^(i,d) ∈ ℝ^N_d such that

𝔼[y_i,t^(d)] = ∑_j ∈ [N_d] ω^(i,d)[j] · 𝔼[y_j,t^(d)]

for all t ∈ [T], where T is the number of time-steps, N_d is the number of donor units who have received intervention d, and the expectation is taken with respect to the randomness in the unit outcomes.

More broadly, previous work which provides finite sample guarantees for SCMs assumes that there exists some underlying mapping ω^(i,d) (e.g. linear or convex) through which the outcomes of the test unit (unit i) may be expressed via the outcomes of the N_d donor units. Since such a condition appears to be necessary in order to do valid counterfactual inference, assumptions of this nature have become ubiquitous when proving statistical guarantees about SCMs (see, e.g. <cit.>).[Similar assumptions are also prevalent in the literature on other matching-style estimators typically used in panel data settings, such as difference-in-differences <cit.> or clustering-based methods <cit.>.]

However despite their ubiquity, such overlap assumptions may not hold in all domains in which one would like to apply SCMs. For example, consider a streaming service with two service plans: a yearly subscription and a pay-as-you-go model, and suppose that the streaming service wants to determine the effectiveness of its subscription program (the treatment) on user engagement. Under this setting, the subpopulation of streamers who self-select the subscription plan are most likely those who believe they will consume large amounts of content on the platform. In contrast, those who pay-as-they-go most likely believe they will consume less content. This makes drawing conclusions about the counterfactual user engagement levels of the two subpopulations under different business plans difficult, as they may have very different experiences under the two business plans due to their differing tastes. While the streaming service would ideally like to run a randomized controlled trial (RCT) in order to estimate counterfactual engagement levels across different groups, participation in RCTs is voluntary for ethical and legal reasons, and so ensuring compliance is generally not possible. In this work, our goal is to leverage tools from information design to incentivize the exploration of different treatments by non-overlapping unit subpopulations, in order to obtain valid counterfactual estimates using synthetic control methods.
This is possible because the principal (i.e. the person/platform running the synthetic control method) will have access to more information about counterfactual unit outcomes than any one unit in isolation, due to the fact that they get to observe the outcome trajectories of all units using the platform. Specifically, we adopt tools and techniques from the literature on incentivizing exploration in multi-armed bandits (e.g. <cit.>) to show how the principal can leverage knowledge gained from previous interactions with similar units to persuade the current unit to take an intervention they would not normally take. The principal does this by sending a signal (or recommendation) to the unit with information about which intervention is best for them. The principal's recommendation policy is designed so that it is incentive-compatible, i.e. it is in the unit's best interests to follow the intervention recommended to them by the policy. In our streaming service example, incentive-compatible signaling may correspond to recommending a service plan for each user based on their usage of the platform, in a way that guarantees that units are better off in expectation when purchasing the recommended service plan. Our procedure ensures that the <ref> becomes satisfied over time, which enables the principal to do valid counterfactual inference using off-the-shelf SCMs after they have interacted with sufficiently many units.

Overview of Our Results: We overview related work on learning from panel data, synthetic control methods, incentivized exploration in bandits, and algorithmic (Bayesian) persuasion in <Ref>. In <Ref> we introduce our model and provide relevant technical background on synthetic control methods. We introduce our algorithm for incentivizing exploration for synthetic control when there are two interventions (treatment and control) in <Ref> (<Ref>). At a high level, we adapt the “hidden exploration” paradigm from incentivized exploration in bandits to the panel data setting: First, we randomly divide units into “exploit” units and “explore” units. For all exploit units, we recommend the intervention which we estimate maximizes their expected utility, given the data we have seen so far. For every explore unit, we recommend the intervention for which we would like the <ref> to be satisfied. Units are not aware of their exploit/explore designation by the algorithm, and the explore probability is chosen to be low enough such that the units have an incentive to follow the principal's recommendations. After <Ref> has been used to provide recommendations to sufficiently many units, we can guarantee that with high probability, the <ref> is satisfied for all units for either intervention of our choosing. This enables us to use existing synthetic control methods off-the-shelf in order to obtain finite sample guarantees for counterfactual estimation for all units under control after running <Ref>.

Along the way, we draw a conceptual connection between decision-making using panel data and linear contextual bandits, which has not been previously observed in the literature to the best of our knowledge. In particular, we show that under the setting of robust synthetic control <cit.>, the problem of assigning interventions to units based on their pre-intervention outcomes may be thought of as a generalization of the linear contextual bandit problem in which there is noise in the observed context.
While we leverage this conceptual connection to design algorithms for incentivizing exploration in panel data settings, we hope that it will also aid in the design of contextual bandit algorithms which are more robust to measurement error in the observed context. We show that the <ref> is indeed a necessary condition to obtain valid counterfactual estimates in our setting in <Ref>. In <Ref>, we extend <Ref> to the setting of synthetic interventions <cit.>, in which the principal may wish to estimate counterfactual outcomes under different treatments, in addition to control. We provide a hypothesis test for checking whether the <ref> holds for a given test unit and set of donor units in <Ref>, and we empirically evaluate the performance of our incentive-aware synthetic control estimator in <Ref>. When the <Ref> does not hold a priori, we show that our methods enable consistent counterfactual outcome estimation, while existing SCMs generally do not.

§.§ Related Work

Causal Inference and Synthetic Control Methods: Popular methods for counterfactual inference using panel data include synthetic control methods <cit.>, difference-in-differences <cit.>, and clustering-based methods <cit.>. Within the literature on synthetic control, our work builds off of the line of work on robust synthetic control <cit.>, which assumes outcomes are generated via a latent factor model (e.g. <cit.>) and leverages principal component regression (PCR) <cit.> to estimate unit counterfactual outcomes. Our work falls in the small-but-growing line of work at the intersection of synthetic control methods and online learning <cit.>, although we are the first to consider unit incentives in this setting. Particularly relevant to our work is the model used by <cit.>, which extends the finite sample guarantees for PCR in panel data settings to online settings. While <cit.> also consider incentives in synthetic control methods (albeit in an offline setting), they consider a principal who can assign interventions to units (i.e. who can force compliance). As a result, the strategizing they consider is that of units who modify their pre-intervention outcomes in order to be assigned a more desirable intervention. In contrast, we consider a principal who cannot assign interventions to units, but who instead must persuade units to take different interventions by providing them with incentive-compatible recommendations.

More broadly, there has been recent interest in the causal inference literature in characterizing optimal treatment policies which must satisfy some additional constraint(s). For example, <cit.> study a setting in which the treatment supply is limited. Much like us, <cit.> consider a setting in which intervening on the treatment is not possible, but encouraging treatment is feasible. However, their focus is on the setting in which the treatment is a limited resource, so their goal is to encourage treatment for those who would benefit from it the most. Since they do not consider panel data and place different behavioral assumptions on the individuals under intervention, the tools and techniques we use to persuade units in our setting differ significantly from theirs. Finally, <cit.> consider settings in which there is some uncertain cost associated with treatment.

Bayesian Persuasion (BP) and Incentivized Exploration (IE): Bayesian persuasion <cit.> is a popular model of information design <cit.>, a branch of theoretical economics which aims to characterize optimal communication between strategic individuals under information asymmetry.
In its simplest form, BP is a game between two players: an informed sender and an uninformed receiver. After observing a payoff-relevant state, the sender sends a signal to the receiver, who then takes a payoff-relevant action. The goal of the sender is to design a signaling policy which (probabilistically) reveals information about the state in order to incentivize the receiver to take desirable actions. Our work draws on techniques from the growing literature on incentivizing exploration <cit.>, which falls at the intersection of BP and multi-armed bandits <cit.>. In IE, a principal interacts with a sequence of myopic agents over time. Each agent takes an action, and the principal can observe the outcome of each action. While each individual agent would prefer to take the action which appears to be utility-maximizing (i.e. to exploit), the principal would like to incentivize agents to explore different actions in order to benefit the population in aggregate. Motivation for IE includes online rating platforms and recommendation systems which rely on information collected by users to provide informed recommendations about various products and services. Incentivized exploration may be viewed as a generalization of BP, as each round of IE is an instance of a one-shot BP game between the principal and a myopic agent. Following the framework of <cit.>, we consider the recommendations given by the principal to be the only incentive for participating units. More precisely, in our model the principal does not offer monetary payments for units to choose an intervention, as such payments have known disadvantages such as potential selection bias and ethical concerns <cit.>.

Within the literature on IE, the work most related to ours on a conceptual level is that of <cit.>, who consider the problem of incentivizing exploration in clinical trial settings. Like us, <cit.> study a setting in which the principal would like to incentivize agents to explore different treatments. However, their goal is to estimate population-level statistics about each intervention, while we are interested in estimating unit-specific counterfactuals under different interventions using panel data. As a result, while our high-level motivations are somewhat similar, the tools and techniques we use to obtain our results differ significantly. On a technical level, our mechanisms are somewhat similar to the initial exploration phase in <cit.>, although we consider a more general setting where unit outcomes may vary over time, in contrast to the simpler multi-armed bandit setting they consider.

§ SETTING AND BACKGROUND

Notation: Subscripts are used to index the unit and time-step, while superscripts are reserved for interventions. We use i to index units, t to index time-steps, and d to index interventions. For x ∈ ℕ, we use the shorthand [x] := {1, 2, …, x} and [x]_0 := {0, 1, …, x-1}. For a vector v, v[j] denotes its j-th component, where indexing starts at 1. We sometimes use the shorthand T_1 := T - T_0 for T, T_0 ∈ ℕ_>0 such that T > T_0. Δ(𝒳) denotes the set of probability distributions over 𝒳. 0_d is shorthand for the vector [0, 0, …, 0] ∈ ℝ^d. Finally, we use the notation a ∧ b := min{a,b} and a ∨ b := max{a,b}.

§.§ Our Panel Data Setting

We consider a setting in which the principal interacts with a sequence of n units for T time-steps each. We assume that there is a pre-intervention period of T_0 time-steps, during which each unit is under the same intervention, i.e., under control.
After the pre-intervention period, the principal either recommends the treatment to the unit or suggests that they remain under control. We denote the intervention recommended to unit i by d̃_i ∈ {0, 1}, where 1 denotes the treatment and 0 the control. After receiving the recommendation, unit i chooses an intervention d_i ∈ {0,1} and remains under intervention d_i for the remaining T - T_0 time-steps. We use

y_i,pre := [y_i,1^(0), …, y_i,T_0^(0)]^⊤ ∈ ℝ^T_0

to denote unit i's pre-treatment outcomes under control, and

y_i,post^(d) := [y_i,T_0+1^(d), …, y_i,T^(d)]^⊤ ∈ ℝ^T-T_0

to refer to unit i's post-intervention outcomes under intervention d. We denote the set of possible pre-treatment outcomes by 𝒴_pre. In order to impose structure on how unit outcomes are related across different units and time-steps, we assume that outcomes are generated via the following latent factor model, a popular assumption in the literature (see references in <Ref>).

[Latent Factor Model] Suppose the outcome for unit i at time t under intervention d ∈ {0,1} takes the following factorized form:

𝔼[y_i,t^(d)] = ⟨u_t^(d), v_i⟩, y_i,t^(d) = 𝔼[y_i,t^(d)] + ε_i,t^(d),

where u_t^(d) ∈ ℝ^r is a latent vector which depends only on the time-step t and intervention d, v_i ∈ ℝ^r is a latent vector which depends only on unit i, and ε_i,t^(d) is zero-mean sub-Gaussian random noise with variance at most σ^2. For simplicity, we assume that |𝔼[y_i,t^(d)]| ≤ 1 for all i ∈ [n], t ∈ [T], d ∈ {0,1}.

While we assume the existence of such structure in the data, we do not assume that the principal knows or gets to observe u_t^(d), v_i, or ε_i,t^(d). In the literature on synthetic control, the goal of the principal is to estimate unit-specific counterfactual outcomes under different interventions. Specifically, our target causal parameter is the counterfactual average expected post-intervention outcome.

(Average expected post-intervention outcome) The average expected post-intervention outcome of unit i under intervention d is

𝔼[y̅_i,post^(d)] := (1/T_1) ∑_t=T_0+1^T 𝔼[y_i,t^(d)],

where the expectation is taken with respect to (ε_i,t^(d))_T_0 < t ≤ T.

In order to infer something about unit outcomes in the post-intervention time period from outcomes in the pre-intervention time period, we require the following linear span inclusion assumption on the latent factors in the post-intervention time period.[Consider the limiting case in which u_t^(0) = 0_r for all t ≤ T_0. Under such a setting, all expected unit outcomes in the pre-intervention time period will be 0, regardless of the underlying unit latent factors.]

[Linear Span Inclusion] For each intervention d ∈ {0,1} and time t > T_0, we assume that

u_t^(d) ∈ span{u_1^(0), u_2^(0), …, u_T_0^(0)}.

<Ref> is ubiquitous in the literature on robust synthetic control (see, e.g. <cit.>). Under Assumptions <ref> and <ref>, previous work (e.g. <cit.>) observes that the average expected post-treatment outcome for any unit i may be written as a linear combination of the expected pre-intervention outcomes of unit i. The following is an important structural result for our main algorithm, as it allows us to transform the problem of estimating unit counterfactual outcomes from a regression over donor units to a regression over pre-intervention outcomes.

Under <Ref> and <ref>, there exists a slope vector θ^(d) ∈ ℝ^T_0 such that the average expected post-intervention outcome of unit i under intervention d is given by

𝔼[y̅_i,post^(d)] = (1/T_1) ⟨θ^(d), 𝔼[y_i,pre]⟩.

We assume that the principal has knowledge of a valid upper bound Γ on the ℓ_2-norm of θ^(d), i.e. ‖θ^(d)‖_2 ≤ Γ for d ∈ {0, 1}. A simulation of this outcome model is sketched below.
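To fix ideas, the following sketch simulates outcomes from the factor model above. The latent dimension, factor distributions, and noise level are placeholder choices of our own, and the linear span inclusion assumption is enforced explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, T0, r, sigma = 200, 10, 8, 4, 0.1

V = rng.normal(size=(n, r))                        # unit latent factors v_i
U = {d: rng.normal(size=(T, r)) for d in (0, 1)}   # time factors u_t^(d)

# Linear span inclusion: each post-intervention u_t^(d) is a linear
# combination of the pre-intervention control factors u_1^(0), ..., u_T0^(0).
for d in (0, 1):
    U[d][T0:] = rng.normal(size=(T - T0, T0)) @ U[0][:T0]

def outcomes(i: int, d: int) -> np.ndarray:
    """Noisy outcome trajectory y_{i,t}^{(d)} for unit i under intervention d."""
    return U[d] @ V[i] + sigma * rng.normal(size=T)
```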
While we are not the first to make this observation, we are the first to leverage it to design algorithms for assigning interventions based on ideas from the contextual bandit literature.[For the reader unfamiliar with the linear contextual bandit setting, the remainder of this paragraph may be skipped without any loss in continuity.] In particular, it is sometimes helpful to think of 𝔼[y_i,pre] as the “context” of unit i, and 𝔼[y̅_i,post^(d)] as the principal's expected reward of assigning intervention d given 𝔼[y_i,pre]. Since we observe y_i,pre instead of 𝔼[y_i,pre], we cannot apply linear contextual bandit algorithms to our panel data setting out-of-the-box; however, as we will show in <Ref>, one can combine ideas from incentivizing exploration in contextual bandits with principled ways of handling the noise in the pre-intervention outcomes to incentivize exploration in panel data settings.

§.§ Background on PCR

Given historical data of the form {y_j,pre, y_j,post^(d_j)}_j=1^i-1, our goal is to estimate θ^(d) as θ̂_i^(d) for d ∈ {0, 1} via PCR <cit.>. Let ℐ_i^(d) be the set of units who have received intervention d before unit i arrives, and let n_i^(d) be the number of such units. We use

Y_pre,i^(d) := [y_j,pre^⊤ : j ∈ ℐ_i^(d)] ∈ ℝ^n_i^(d) × T_0

to denote the matrix of pre-treatment outcomes for the subset of units who have undergone intervention d before unit i arrives, and

Y_post,i^(d) := [∑_t=T_0+1^T y_j,t^(d) : j ∈ ℐ_i^(d)] ∈ ℝ^n_i^(d) × 1

the vector of sums of post-intervention outcomes for the same subset of units. We denote the singular value decomposition of Y_pre,i^(d) as

Y_pre,i^(d) = ∑_ℓ=1^n_i^(d) ∧ T_0 s_ℓ^(d) û_ℓ^(d) (v̂_ℓ^(d))^⊤,

where {s_ℓ^(d)}_ℓ=1^n_i^(d) ∧ T_0 are the singular values of Y_pre,i^(d), and û_ℓ^(d) and v̂_ℓ^(d) are orthonormal left and right singular vectors. We assume that the singular values are ordered such that s_1^(d) ≥ ⋯ ≥ s_n_i^(d) ∧ T_0^(d) ≥ 0. For some threshold value r, we use

Ŷ_pre,i^(d) := ∑_ℓ=1^r s_ℓ^(d) û_ℓ^(d) (v̂_ℓ^(d))^⊤

to refer to the truncation of Y_pre,i^(d) to its top r singular values. We define the projection matrix onto the subspace spanned by the top r right singular vectors as P̂_i,r^(d) := ∑_ℓ=1^r v̂_ℓ^(d) (v̂_ℓ^(d))^⊤ ∈ ℝ^T_0 × T_0. Equipped with this notation, we are now ready to define the procedure for estimating θ^(d) using (regularized) principal component regression.

Given regularization parameter ρ ≥ 0 and truncation level r ∈ ℕ, for d ∈ {0,1} and i ≥ 1, let

𝒱_i^(d) := (Ŷ_pre,i^(d))^⊤ Ŷ_pre,i^(d) + ρ P̂_i,r^(d).

Then, regularized PCR estimates θ^(d) as

θ̂_i^(d) := (𝒱_i^(d))^-1 (Ŷ_pre,i^(d))^⊤ Y_post,i^(d),

where θ^(d) is defined as in <Ref>. The average post-intervention outcome for unit i under intervention d may then be estimated as

ŷ̅_i,post^(d) := (1/T_1) ⟨θ̂_i^(d), y_i,pre⟩.

Under the <ref> and <Ref>, work on robust synthetic control (see <Ref>) uses (regularized) PCR to obtain consistent estimates of counterfactual post-intervention outcomes under different interventions. A sketch of this estimator is given below.
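For concreteness, here is our own implementation sketch of the displayed formulas; the truncation level r and regularization ρ are unspecified tuning parameters, and a pseudo-inverse is used so that the ρ = 0 case remains well-defined.

```python
import numpy as np

def pcr_estimate(Y_pre: np.ndarray, Y_post_sum: np.ndarray,
                 r: int, rho: float = 0.0) -> np.ndarray:
    """Regularized PCR: returns the slope estimate theta-hat^(d).

    Y_pre:      (n_d, T0) pre-intervention outcomes of donors under d.
    Y_post_sum: (n_d,) per-donor sums of post-intervention outcomes.
    """
    U, s, Vt = np.linalg.svd(Y_pre, full_matrices=False)
    Y_trunc = (U[:, :r] * s[:r]) @ Vt[:r]      # rank-r truncation of Y_pre
    P = Vt[:r].T @ Vt[:r]                      # projection onto top-r right svecs
    V_mat = Y_trunc.T @ Y_trunc + rho * P
    # Pseudo-inverse handles the rank-deficient case when rho = 0.
    return np.linalg.pinv(V_mat) @ (Y_trunc.T @ Y_post_sum)

# Counterfactual estimate for a test unit with pre-period outcomes y_pre:
# ybar_post_hat = theta_hat @ y_pre / T1
```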
The regret of the principal after n units is

R(n) := ∑_i=1^n ( r_i^(d_i^*) - r_i^(d_i) ),

where r_i^(d) := 𝔼[y̅_i,post^(d)] denotes unit i's expected average post-intervention outcome under intervention d, d_i^* is the intervention maximizing r_i^(d), and d_i is the intervention taken by unit i. We say that the principal has no regret if R(n) → 0 as n → ∞.

§.§ Recommendations and Beliefs

The interaction between the principal and a unit i may be characterized by the tuple (y_i,pre, d̃_i, d_i, y_i,post^(d_i)), where y_i,pre are unit i's pre-treatment outcomes, d̃_i is the intervention recommended to unit i, d_i is the intervention taken by unit i, and y_i,post^(d_i) are unit i's post-intervention outcomes under intervention d_i. The interaction history at unit i is the sequence of outcomes, recommendations, and interventions for all units j ∈ [i-1]. Formally,

H_i := {(y_j,pre, d̃_j, d_j, y_j,post^(d_j))}_j=1^i-1.

We denote the set of all possible histories at unit i as ℋ_i. A recommendation policy π_i : ℋ_i × 𝒴_pre → Δ({0,1}) is a (possibly stochastic) mapping from histories and pre-treatment outcomes to interventions. We assume that before the first unit arrives, the principal commits to a sequence of recommendation policies {π_i}_i=1^n which are fully known to all units. Whenever π is clear from the context, we use the shorthand d̃_i = π_i(y_i,pre) to denote the recommendation of policy π_i to unit i.

In addition to having a corresponding latent factor, each unit has an associated belief over the effectiveness of each intervention. This is made formal through the following definitions.

Unit i has prior belief 𝒫_v_i, which is a joint distribution over potential post-intervention outcomes {𝔼[y_i,post^(0)], 𝔼[y_i,post^(1)]}. We use the shorthand μ_v_i^(d) := 𝔼_𝒫_v_i[y̅_i,post^(d)] to refer to a unit's expected average post-intervention outcome with respect to their prior 𝒫_v_i.

Unit i is of type τ ∈ {0, 1} if τ = arg max_d ∈ {0,1} μ_v_i^(d). We denote the set of all units of type τ as 𝒰^(τ), and the dimension of the latent subspace spanned by type τ units as r_τ := dim(span{v_j : j ∈ 𝒰^(τ)}). In other words, a unit's type is the intervention they would take according to their prior, without any additional information about the effectiveness of each intervention. We consider the setting in which the possible latent factors associated with each type lie in mutually orthogonal subspaces (i.e. the <ref> is not satisfied).[If the <ref> is known to be satisfied, units do not need to be incentivized to explore, and thus existing synthetic control methods may be used off-the-shelf.][Note that ∑_τ r_τ ≤ r.]

As is standard in the literature on IE, we assume that units are Bayesian-rational and each unit knows its place in the sequence of n units (i.e., their index i ∈ [n]). Thus, given recommendation d̃_i, unit i selects their intervention d_i such that

d_i ∈ arg max_d ∈ {0,1} 𝔼_𝒫_v_i[y̅_i,post^(d) | d̃_i],

i.e. they select the intervention d_i which maximizes their utility in expectation over their prior 𝒫_v_i, conditioned on receiving recommendation d̃_i from the principal.

We say that a recommendation d is Bayesian incentive-compatible (BIC) for unit i if, conditional on receiving intervention recommendation d̃_i = d, unit i's average expected post-intervention outcome under intervention d is at least as large as their average expected post-intervention outcome under any other intervention:

𝔼_𝒫_v_i[y̅_i,post^(d) - y̅_i,post^(d') | d̃_i = d] ≥ 0 for every d' ∈ {0,1}.

A recommendation policy π is BIC if the above condition holds for every intervention d which is recommended by π with positive probability.

§ INCENTIVIZED EXPLORATION FOR SYNTHETIC CONTROL

In this section, we turn our focus to incentivizing type 1 units to remain under control in the post-intervention period. The methods we present may also be applied to incentivize type 0 units to take the treatment. We focus on incentivizing control amongst type 1 units in order to be in line with the literature on synthetic control, which aims to estimate counterfactual unit outcomes under control. The goal of the principal is to design a recommendation policy that convinces enough units of type 1 to select the control in the post-treatment period, such that the <ref> is satisfied for all units of type 1.
Our algorithm (<Ref>) is inspired by the “detail free” algorithm for incentivizing exploration in multi-armed bandits in <cit.>. Furthermore, through our discussion in <Ref>, one may view <Ref> as an algorithm for incentivizing exploration in linear contextual bandit settings with measurement error in the context. The recommendation policy in <Ref> is split into two stages. In the first stage, the principal provides no recommendations to the first N_0 units. Left to their own devices, these units take their preferred intervention according to their prior beliefs: type 0 units take control, and type 1 units take the treatment. We choose N_0 such that it is large enough for the <ref> to be satisfied for all units of type 1 under the treatment with high probability. In the second stage, we use the initial data collected during the first stage to construct an estimator of the average expected post-intervention outcome for type 1 units under treatment using PCR, as described in <Ref>. At a high level, in order to incentivize a type 1 unit i to try the control, we assume that (1) there is a non-zero chance under unit i's prior 𝒫_v_i that y̅_i,post^(0)≥y̅_i,post^(1), and (2) the principal will be able to infer this given the set of observed outcomes for units in the first phase. (This is made formal in <Ref> below.) Such “fighting chance” assumptions are common (and oftentimes necessary) in the literature on IE. By dividing the total number of units in the second stage into phases of L rounds each, the principal can randomly “hide” one explore recommendation amongst L-1 exploit recommendations. When the principal sends an exploit recommendation to unit i, they recommend the intervention which would result in the highest expected average post-intervention outcome for unit i, conditional on the observed outcomes collected during the first stage. On the other hand, the principal recommends that unit i take the control whenever they send an explore recommendation. Thus, under <Ref>, if a type 1 unit receives a recommendation to take the treatment, they will always follow the recommendation, since they can infer that they must have received an exploit recommendation. However, if a type 1 unit receives a recommendation to take the control, they will be unsure whether they have received an explore or an exploit recommendation, and can therefore be incentivized to follow the recommendation as long as L is set to be large enough (i.e., the probability of the recommendation being an explore recommendation is sufficiently low). Our algorithm requires the following assumption on the units' knowledge about the interaction:[Unit Knowledge for Synthetic Control] We assume that the following are common knowledge among all units and the principal: * Valid upper- and lower-bounds on the fraction of type 1 units in the population, i.e., if the true proportion of type 1 units is p_1 ∈ (0,1), the principal and units know p_L, p_H ∈ (0, 1) such that p_L ≤ p_1 ≤ p_H. * Valid upper- and lower-bounds on the prior mean for each receiver type and intervention, i.e.,
values μ^(d,τ, h), μ^(d,τ, l)∈ℝ such that μ^(d,τ, l)≤μ_v_i^(d)≤μ^(d, τ, h) for all units i ∈ [n], receiver types τ∈{0,1}, and interventions d ∈{0, 1}. * For some C ∈ (0, 1), a lower bound on the smallest probability of the event ξ_C,i over the priors and latent factors of type 1 units, denoted by min_i ∈𝒰^(1)ℙ_𝒫_v_i[ξ_C,i] ≥ζ_C > 0, where ξ_C,i := {μ_v_i^(0)≥y̅_i, post^(1) + C} and y̅_i, post^(1) is the average post-intervention outcome for unit i under intervention 1. * A sufficient number of observations N_0 needed such that the <ref> is satisfied for type τ units under intervention τ with probability at least 1 - δ, for τ∈{0, 1} and some δ∈ (0, 1).<Ref> posits that bounds on various aggregate statistics about the underlying unit population are common knowledge across all units. We are now ready to present our main result: a way to initialize <Ref> to ensure that <ref> is satisfied with high probability. Suppose that <Ref> holds for some constant gap C ∈ (0, 1). If the number of initial units N_0 is chosen to be large enough such that <ref> is satisfied for all units of type 1 under treatment with high probability, then <Ref> is BIC if the batch size L is chosen to be sufficiently large. Moreover, if the number of batches B is chosen to be sufficiently large, then the <ref> will be satisfied simultaneously for all type 1 units under control with high probability. See <Ref> for complete proof details. At a high level, the proof follows by expressing the compliance condition for type 1 units as different cases, depending on the principal's recommendation. In particular, a type 1 unit could receive recommendation d̃_i = 0 for two reasons: (1) under event ξ̂_C,i, when control is indeed the better intervention according to the unit's prior and the observed outcomes of previous units, or (2) when the unit is randomly selected as an explore unit. Using the probabilities of these two events occurring, we can derive a condition on the minimum phase length L such that the expected gain from exploiting (when the event ξ̂_C,i happens) exceeds the expected loss from exploring. We then further simplify the condition on the phase length L so that it is computable by the principal, by leveraging existing finite sample guarantees for principal component regression using the samples collected in the first stage when no recommendations are given. <Ref> says that after running <Ref> with optimally-chosen parameters, the <ref> will be satisfied for all type 1 units under control with probability at least 1 - 3δ. Therefore, after running <Ref> for a sufficiently long time, the principal can use off-the-shelf synthetic control methods (e.g. <cit.>) to obtain counterfactual estimates with valid confidence intervals for all type 1 units under control with high probability. Consider the following concrete instantiation of <Ref>. Consider a setting with two receiver types, each occurring with equal probability: type 1 units prefer the treatment and type 0 units prefer control. Let v_i ∼Unif[0.25, 0.75]. If unit i is a type 1 unit, its latent factor is [v_i 0]; otherwise its latent factor is [0 v_i]. Suppose that T_0=2 and T_1 = 1. We leave u_1^(0), u_2^(0), u_3^(0), u_3^(1) unspecified. Since the dimension of the type 1 subpopulation in this setting is 1, we only need to incentivize a single type 1 unit to take the control in order for the <ref> to be satisfied. Therefore, for a given confidence level δ, it suffices to set N_0 = B = log(1/δ)/log(2).
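A minimal sketch of the two-stage schedule described above, instantiated with the example's choice N_0 = B = log(1/δ)/log 2; for simplicity it takes the stage-one-estimated exploit recommendation as a fixed input, and all names are illustrative:

import math, random

def recommendation_schedule(delta, L, exploit_rec, n_units):
    """Stage 1: the first N0 units receive no recommendation and follow their priors.
    Stage 2: B batches of L units; each batch hides one explore recommendation
    (control, d = 0) uniformly at random among L - 1 exploit recommendations."""
    N0 = B = math.ceil(math.log(1 / delta) / math.log(2))
    recs = [None] * N0                      # stage 1: no recommendations
    for _ in range(B):
        batch = [exploit_rec] * L           # exploit_rec comes from stage-1 PCR estimates
        batch[random.randrange(L)] = 0      # the hidden explore slot
        recs.extend(batch)
    return recs[:n_units]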
After the first N_0 time-steps, the principal will know all of the parameters necessary to compute L. Since T_1 = 1, the units' priors are only over {𝔼[y_i,3^(0)], 𝔼[y_i,3^(1)]}. For simplicity, suppose that (1) all type 1 units believe that 𝔼[y_i,3^(0)] ∼Unif[0, 0.5] and 𝔼[y_i,3^(1)] ∼Unif[0, 1], and (2) there is no noise in the post-intervention outcome. Under this setting, event ξ_C,i simplifies to ξ_C,i = {0.25 ≥𝔼[y_i, 3^(1)] + C} for any unit i and constant C ∈ (0, 1). Therefore ζ_C = ℙ_𝒫_v_i[𝔼[y_i, 3^(1)] ≤ 0.25 - C] = max{0, 0.25-C}.§.§ On the Necessity of the <ref> We conclude this section by showing that the <ref> must be satisfied in order to obtain consistent counterfactual estimates under the setting described in <Ref>. Fix a time horizon T, pre-intervention time period T_0, and number of donor units n^(0). For any algorithm used to estimate the average post-intervention outcome under control for a test unit, there exists a problem instance such that the produced estimate has constant error whenever the <ref> is not satisfied for the test unit under control, even as T, T_0, n^(0)→∞. Consider the setting in which all type 0 units have latent factor v_0 = [0 1]^⊤ and all type 1 units have latent factor v_1 = [1 0]^⊤. Suppose that u_t^(0) = [1 0]^⊤ if t mod 2 = 0 and u_t^(0) = [0 1]^⊤ if t mod 2 = 1, for t ≤ T_0. For all t > T_0, let u_t^(0) = [H 1]^⊤ for some H in the range [-c, c], where c > 0. Furthermore, suppose that there is no noise in the unit outcomes. Under this setting, the (expected) outcomes for type 0 units under control are𝔼[y_0,t^(0)] = 0 if t mod 2 = 0, t ≤ T_0; 1 if t mod 2 = 1, t ≤ T_0; 1 if t > T_0,and the outcomes for type 1 units under control are𝔼[y_1,t^(0)] = 1 if t mod 2 = 0, t ≤ T_0; 0 if t mod 2 = 1, t ≤ T_0; H if t > T_0. Suppose that H ∼Unif[-c,c] and that the principal wants to estimate 𝔼[y̅_1,post^(0)] using just the set of outcomes 𝔼[y_0,pre], 𝔼[y_0,post^(0)], and 𝔼[y_1,pre]. Since these outcomes do not contain any information about H, any estimator 𝔼̂[y̅_1, post^(0)] cannot be a function of H, and thus will be at least a constant distance away from the true average post-intervention outcome 𝔼[y̅_1,post^(0)] in expectation over H. That is, 𝔼_H ∼Unif[-c,c]|𝔼[y̅_1,post^(0)] - 𝔼̂[y̅_1, post^(0)]| = 𝔼_H ∼Unif[-c,c]|H - 𝔼̂[y̅_1,post^(0)]| = 1/2c(c^2 + (𝔼̂[y̅_1, post^(0)])^2) ≥ c/2. Note that our choice of T and T_0 was arbitrary, and that estimation does not improve as the number of donor units increases, since (1) there is no noise in the outcomes and (2) all type 0 units have the same latent factor. Therefore, any estimator for y̅_1,post^(0) is inconsistent in expectation over H as n^(0), T, T_0 →∞. This implies our desired result, because if the estimation error is constant in expectation over problem instances, there must exist at least one problem instance with constant estimation error. §.§ Accurate Counterfactual Estimation After collecting sufficient samples via <Ref>, we can obtain an estimate of our target causal parameter, which is the average expected potential outcome over the post-intervention period 𝒯_post. Formally, for any unit i and intervention d, we aim to estimate: θ_i^(d) = 1/T_1∑_t ∈𝒯_post𝔼[y_i,t^(d) | {u_t^(d), v_i: t ∈𝒯_post}]. Let 𝓛 be the collection of latent factors {u_t^(d), v_i: (t, i, d)} and D = {(t, i, d): y_i,t^(d) is observed} the intervention assignment. Then, after running <Ref>, the <ref> is satisfied for type 1 units under control.
Hence, given a type 1 unit i and intervention d, after running <Ref> we can write𝔼[y_i,t^(d) | u_t^(d), v_i] = ∑_j ∈𝒰^(d) w_i^(d)[j] ·𝔼[y_j,t^(d) | 𝓛] for t ∈ [T],and therefore the target causal parameter can be written asθ_i^(d) = 1/T_1∑_t ∈𝒯_post∑_j ∈𝒰^(d) w_i^(d)[j] ·𝔼[y_j,t^(d) | 𝓛]. Let N^(d) denote the number of units under intervention d in the second stage of <Ref>, and let 𝔼[Y^(0)_pre] := ([𝔼[y_1, pre], ⋯, 𝔼[y_N^(0), pre]])^⊤ denote the matrix of expected pre-treatment outcomes for all units under control. Furthermore, we define the following quantity:(Signal-to-noise Ratio) We define the signal-to-noise ratio associated with an intervention d and unit i as: snr_i(d) := σ_r(𝔼[Y_pre,i(d)])/U_i, where (U_i)_i ≥ 1 is a noise-dependent sequence growing as U_i = O( √(i) + √(T_0) + √(log( log(i)/δ))). Then, we can use the following lemma to obtain a finite-sample guarantee for the estimated causal parameter θ̂_i^(d). Let δ∈ (0,1) be an arbitrary confidence parameter and let ρ > 0 be chosen to be sufficiently small. Further, assume that the linear span inclusion assumption is satisfied, that there is some unit index i_0 ≥ 1 such that rank(𝔼[Y_pre,i_0(d)]) = r, and that snr_i(d) ≥ 2 for all i ≥ i_0. If T_0 ≤T/2, then under <Ref>, with probability at least 1 - O(2δ), simultaneously for all interventions d ∈ [2]_0,|𝔼̂[y̅_i, post^(d)] - 𝔼[y̅_i, post^(d)]| ≤ O ( ‖θ_i^(d)‖ r /(T_0 ∧ N^(d)) + ‖θ_i^(d)‖√(r)/√(T_0 ∧ N^(d)) + ‖θ_i^(d)‖√( rlog(k/δ))/√((T - T_0)(T_0 ∧ N^(d)))),where 𝔼̂[y̅_i, post^(d)] := 1/(T - T_0)·⟨θ̂_i(d) , y_i,pre⟩ is the estimated average post-intervention outcome for unit i under intervention d.§ EXTENSION TO SYNTHETIC INTERVENTIONS Our focus so far has been to incentivize units who a priori prefer the (single) treatment to take the control, in order to obtain accurate counterfactual estimates for all units under control. In this section, we extend <Ref> to the setting where there are multiple treatments. Now our goal is to incentivize enough exploration across all interventions such that the <ref> is satisfied, and thus our counterfactual estimates are valid, simultaneously for all units in the population under every intervention. In order to do so, we use tools from the literature on synthetic interventions, a generalization of the synthetic control framework which allows for counterfactual estimation under different treatments, in addition to control <cit.>. Our setup is the same as in <Ref>, with the only difference being that each unit may choose one of k ≥ 2 interventions after the pre-treatment period. We assume that <Ref>, <Ref>, <Ref>, and <Ref> are all extended to hold under k interventions. Under this setting, a unit's beliefs induce a preference ordering over the different interventions. We capture this through the notion of a unit subtype, which is a generalization of our definition of unit type (<Ref>) to the setting with k interventions. A unit subtype τ∈ [k]_0^k is a preference ordering over interventions. Unit i is of subtype τ if for every κ≤ k,τ[κ] = arg max_d ∈ [k]_0 \{τ[κ']}_κ' < κμ^(d)_v_i. Unit i's type is τ[1].
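The subtype definition translates directly into code: a unit's subtype is just the ordering of its prior means (0-indexed below, whereas the text indexes from τ[1]):

import numpy as np

def unit_subtype(prior_means):
    """prior_means: length-k array of mu_{v_i}^{(d)} for d = 0, ..., k-1.
    Returns the preference ordering tau; tau[0] is the unit's type."""
    return np.argsort(-np.asarray(prior_means), kind="stable")

# e.g. unit_subtype([0.2, 0.7, 0.5]) -> array([1, 2, 0]): the unit is of type 1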
We denote the set of all units of subtype τ as 𝒰^(τ) and the dimension of the latent subspace spanned by subtype τ units as r_τ := dim(span{v_j : j ∈𝒰^(τ)}). Since different unit subtypes have different preferences over interventions, we design our algorithm in this section (<Ref>) to be BIC for all units of a given subtype (in contrast to <Ref>, which is BIC for all units of a given type). While there are at most k! subtypes, the principal only has to run <Ref> on at most r different subtypes in order to ensure that the <ref> is satisfied for all units under all interventions with high probability. This is because if the <ref> is satisfied for a specific unit subtype whose latent factors lie within a particular subspace, it is also satisfied for all other units whose latent factors lie within the same subspace. For a given subtype τ, the goal of <Ref> is to incentivize sufficient exploration for the <ref> to be satisfied for all interventions for subtype τ units with high probability. As was the case in <Ref>, the algorithm is still split into two phases, and units are not given any recommendation in the first phase. The main difference compared to <Ref> is how the principal chooses the exploit intervention in the second phase. When there are only two interventions, the principal can compare their estimate of the average counterfactual outcome under treatment to the lower bound on the prior-mean average counterfactual outcome under control for type 1 units (and vice versa for type 0 units). However, a more complicated procedure is required to maintain Bayesian incentive-compatibility when there are more than two interventions. Instead, in <Ref> the principal sets the exploit intervention as follows: for receiver subtype τ, <Ref> sets intervention ℓ to be the exploit intervention for unit i only if the sample average of each intervention τ[j] with 1 < j < ℓ is (1) larger than the sample average of intervention τ[1] by some constant gap C and (2) less than the lower bound on the prior-mean average counterfactual outcome μ^(ℓ)_v_i by C. If no intervention satisfies both conditions, intervention τ[1] is chosen to be the exploit intervention. We require the following “common knowledge” assumption for <Ref>, which is analogous to <Ref>: We assume that the following are common knowledge among all units and the principal: * Valid upper- and lower-bounds on the fraction of subtype τ units in the population, i.e., if the true proportion of subtype τ units is p_τ∈ (0,1), the principal knows p_τ,L, p_τ,H∈ (0, 1) such that p_τ,L≤ p_τ≤ p_τ,H. * Valid upper- and lower-bounds on the prior mean for each receiver subtype and intervention, i.e., values μ^(d,τ, h), μ^(d,τ, l)∈ℝ such that μ^(d,τ, l)≤μ_v_i^(d)≤μ^(d, τ, h) for all receiver subtypes τ, interventions d ∈ [k]_0, and all units i ∈𝒰^(τ). * For some C ∈ (0,1), a lower bound on the smallest probability of the event ξ_C,i over the priors and latent factors of subtype τ units, denoted by min_i ∈𝒰^(τ)ℙ_𝒫_v_i[ξ_C,i] ≥ζ_C > 0, where ξ_C, i := {∀ j ∈ [k]_0 \{τ[1], τ[k]}: y̅_i, post^(τ[1]) + C ≤y̅_i,post^(j)≤μ_v_i^(τ[k]) - C},and y̅_i,post^(ℓ) is the average post-intervention outcome for unit i under intervention ℓ∈ [k]_0. * A sufficient number of observations N_0 needed such that the <ref> is satisfied for subtype τ units under intervention τ[1] with probability at least 1 - δ, for some δ∈ (0, 1). Suppose that <Ref> holds for some constant gap C ∈ (0, 1).
If the number of initial units N_0 is chosen to be large enough such that <ref> is satisfied for all units of subtype τ under intervention τ[1] with high probability, then <Ref> is BIC if the batch size L is chosen to be sufficiently large. Moreover, if the number of batches B is chosen to be sufficiently large, then the <ref> will be satisfied for all units of subtype τ under all interventions with high probability. See <Ref> for the proof, which follows similarly to the analysis of <Ref>. The main difference in the analysis compared to that of <Ref> lies in the definition of the “exploit” intervention. First, when there are k interventions, an intervention ℓ is chosen as the exploit intervention only when the sample average of each intervention τ[j] with 1 < j < ℓ is (i) larger than the sample average of intervention τ[1], and (ii) smaller than the prior-mean average outcome of intervention ℓ (with some margin C). Second, instead of choosing between a “previously best” intervention (an intervention τ[j], j < ℓ, with the highest sample average) and intervention ℓ to be the “exploit” intervention, <Ref> always chooses between τ[1] and ℓ. This is because, in the former case, conditional on any given intervention being chosen as the exploit intervention, there is no known concentration-based analysis that can show the exploit intervention will have a higher average expected outcome compared to all other interventions.[This observation is in line with work on frequentist-based BIC algorithms for incentivizing exploration in bandits. See Section 6.2 of <cit.> for more details.] § TESTING WHETHER THE <REF> HOLDS So far, our focus has been on designing algorithms to incentivize subpopulations of units to take interventions for which the <ref> does not initially hold. In this section, we study the complementary problem of designing a procedure for determining whether this assumption holds for a given test unit and set of donor units. In particular, we provide two simple hypothesis tests for deciding whether the <ref> holds, using existing finite sample guarantees for principal component regression and an additional assumption on the underlying latent factors associated with the pre-intervention time-steps.§.§ A Non-Asymptotic Hypothesis Test Our hypothesis test proceeds as follows: using the PCR techniques described in <Ref>, we learn a linear relationship between the first T_0/2 time-steps in the pre-intervention time period and the average outcome in the second half of the pre-intervention time period, using data from the donor units. Using this learned relationship, we then compare our estimate for the test unit with their true average outcome in the second half of the pre-intervention time period. If the difference between the two is larger than the confidence interval given by PCR (plus an additional term to account for the noise), we can conclude that the <ref> is not satisfied with high probability. In order for our hypothesis test to be valid, we require the following assumption, which is analogous to <Ref>. [Linear Span Inclusion, revisited] For every time-step t such that T_0 / 2 < t ≤ T_0, we assume thatu_t^(0)∈span{u_1^(0), …, u_T_0 / 2^(0)}.While <Ref> requires that the post-intervention latent factors for any intervention fall within the linear span of the pre-intervention latent factors, <Ref> requires that the second half of the pre-intervention latent factors fall within the linear span of the first half.
This assumption allows us to obtain valid confidence bounds when regressing the second half of the pre-intervention outcomes on the first half. Lety_i,pre' := [y_i,1^(0), …, y_i,T_0/2^(0)], y_i,pre” := [y_i,T_0/2 + 1^(0), …, y_i,T_0^(0)], andy̅_i,pre” := 2/T_0∑_t=T_0/2 + 1^T_0 y_i,t^(0).Under <Ref>, we can estimate 𝔼[y̅_i,pre”] as 𝔼̂[y̅_i,pre”] := ⟨θ̂_i, y_i,pre'⟩, where θ̂_i is computed using (regularized) PCR. We are now ready to introduce our hypothesis test. In what follows, we leverage the high-probability confidence interval of <cit.> (<Ref>). Under <Ref>, we can apply <Ref> out-of-the-box to estimate 𝔼[y̅_i,pre”], using the first T_0/2 time-steps as the pre-intervention period and the next T_0/2 time-steps as the post-intervention period. Non-Asymptotic Hypothesis Test: Consider the hypothesis that the <ref> is not satisfied for a test unit n. * Using the donor units and test unit n, compute 𝔼̂[y̅_n,pre”] via (regularized) PCR. * Compute α(δ) + 2σ√(log(1/δ)/T_0) for confidence level δ using <Ref>, with the first T_0/2 time-steps as the pre-intervention period and the next T_0/2 time-steps as the post-intervention period. * If |𝔼̂[y̅_n,pre”] - y̅_n,pre”| > α(δ) + 2σ√(log(1/δ)/T_0), then accept the hypothesis. Otherwise, reject. Under <Ref>, if <ref> is satisfied for unit n, then|𝔼̂[y̅_n,pre”] - y̅_n,pre”| ≤α(δ) + 2σ√(log(1/δ)/T_0) with probability 1 - 𝒪(δ), where α(δ) is the high-probability confidence interval defined in <Ref> when using the first T_0/2 time-steps as the pre-intervention period and the next T_0/2 time-steps as the post-intervention period. Indeed, |𝔼̂[y̅_i,pre”] - y̅_i,pre”| ≤ |𝔼̂[y̅_i,pre”] - 𝔼[y̅_i,pre”]| + |𝔼[y̅_i,pre”] - y̅_i,pre”| ≤α(δ) + |2/T_0∑_t=T_0/2 + 1^T_0ϵ_i,t^(0)| with probability at least 1 - 𝒪(δ), where the first inequality is the triangle inequality and the second follows from <Ref>. The result then follows from a Hoeffding bound on the average of the noise terms. Therefore, we can conclude that if |𝔼̂[y̅_i,pre”] - y̅_i,pre”| > α(δ) + 2σ√(log(1/δ)/T_0), then the <ref> is not satisfied for unit i, conditioned on the high-probability event in <Ref> holding. §.§ An Asymptotic Hypothesis Test Next, we present a hypothesis test which, while only valid in the limit, may be of more practical use than the non-asymptotic hypothesis test of <Ref>. At a high level, our asymptotic hypothesis test leverages <Ref> and the guarantees for PCR in <cit.> to determine whether the <ref> holds for a given test unit n. In particular, the results of <cit.> imply that under <Ref> and the <ref>, a rescaled version of 𝔼̂[y̅_n,pre”] - 𝔼[y̅_n,pre”] converges in distribution to the standard normal distribution as |𝒟|, T_0, T_1 →∞, where 𝒟 denotes the set of donor units. While the rescaling depends on quantities which are not computable, a relatively simple application of the continuous mapping theorem and Slutsky's theorem implies that we can compute a valid test statistic which converges in distribution to the standard normal. Notation: In what follows, let ω^(n,0)∈ℝ^|𝒟| be the linear relationship between the test unit n and the donor units 𝒟 which is known to exist under the <ref>, i.e., 𝔼[y_n,t^(0)] = ∑_i ∈𝒟ω^(n,0)[i] ·𝔼[y_i,t^(0)]. Let ω̃^(n,0) be the projection of ω^(n,0) onto the subspace spanned by the latent factors of the donor units. Let ω̂^(n,0) be the estimate of ω̃^(n,0) given by the synthetic interventions procedure of <cit.>. Let σ̂ be the estimate of the standard deviation of the noise terms {ϵ_i,t: i ∈𝒟, t ∈ [T]} given by <cit.>.[Recall that we only assume knowledge of an upper bound σ^2 on the variance of ϵ_i,t.] Finally, let z_α be the z-score at significance level α∈ (0,1), i.e., α = ℙ(x ≤ z_α), where x ∼𝒩(0,1).
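Before turning to the asymptotic variant, here is a sketch of the non-asymptotic test above, reusing the regularized_pcr helper from the earlier sketch and treating α(δ) as a given input:

import numpy as np

def non_asymptotic_test(Y_donor_pre, y_test_pre, r, rho, sigma, delta, alpha_delta):
    """Regress the second-half average on the first half of the pre-period via PCR
    on donor units; flag the overlap assumption as violated if the test unit's
    prediction error exceeds alpha(delta) plus a noise allowance."""
    T0 = Y_donor_pre.shape[1]
    half = T0 // 2
    theta_hat, estimate = regularized_pcr(Y_donor_pre[:, :half],
                                          Y_donor_pre[:, half:].sum(axis=1),
                                          r, rho, T1=T0 - half)
    gap = abs(estimate(y_test_pre[:half]) - y_test_pre[half:].mean())
    return gap > alpha_delta + 2 * sigma * np.sqrt(np.log(1 / delta) / T0)  # True: accept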
Asymptotic Hypothesis Test: Consider the hypothesis that the <ref> is not satisfied for a test unit n. * Using the donor units 𝒟 and test unit n, compute 𝔼̂[y̅_n,pre”] and ω̂^(n,0) using the synthetic interventions procedure of <cit.>. * If (√(T_1)/(σ̂‖ω̂^(n,0)‖_2)) |𝔼̂[y̅_n,pre”] - y̅_n,pre”| > z_0.975, then accept the hypothesis. Otherwise, reject. Under <Ref>, if ‖ω̃^(n,0)‖_2 is sufficiently large and ω̂^(n,0)→ω̃^(n,0) at a sufficiently fast rate, then as |𝒟|, T_0, T_1 →∞, the Asymptotic Hypothesis Test falsely accepts the hypothesis with probability at most 5%.§ NUMERICAL SIMULATIONS In this section, we complement our theoretical results with a numerical comparison to standard methods for counterfactual estimation which do not take incentives into consideration. Experiment setup: We consider the setting of <Ref>: there are two interventions and two types of units. Type 1 units prefer the treatment and type 0 units prefer the control. If unit i is of type 1 (resp. type 0), we generate a latent factor v_i = [0⋯0 v_i[r/2 + 1]⋯v_i[r]] (resp. v_i = [v_i[0]⋯v_i[r/2] 0⋯0]), where ∀ j ∈ [r]: v_i[j] ∼𝒩(0, 1). We consider a setting where 500 units of alternating types arrive sequentially.[This is done only for convenience. The order of agent arrival is unknown to the algorithms.] Our goal is to incentivize type 1 units to take the control, in order to obtain accurate counterfactual estimates under control over time. We consider a pre-intervention time period of length T_0 = 100, with latent factors as follows (the outcome-generation process is sketched in code below): * If t mod 2 = 0, generate latent factor u_t^(0) = [0 ⋯ 0 u_t^(0)[r/2 + 1] ⋯ u_t^(0)[r]], where ∀ℓ∈ [r/2+1, r]: u_t^(0)[ℓ] ∼Unif[0.25, 0.75]. * If t mod 2 = 1, generate latent factor u_t^(0) = [u_t^(0)[0] ⋯ u_t^(0)[r/2] 0 ⋯ 0], where ∀ℓ∈ [0, r/2]: u_t^(0)[ℓ] ∼Unif[0.25, 0.75]. We set T_1 = 100 and generate post-intervention latent factors as follows: * If d=0, we set u_t^(d) = [u_t^(d)[0]⋯u_t^(d)[r]] for t ∈ [T_0 + 1, T], where ∀ℓ∈ [r]: u_t^(d)[ℓ] ∼Unif[0,1]. * If d=1, we set u_t^(d) = [u_t^(d)[0]⋯u_t^(d)[r]] for t ∈ [T_0 + 1, T], where ∀ℓ∈ [r]: u_t^(d)[ℓ] ∼Unif[-1,0]. Finally, outcomes are generated by adding independent Gaussian noise ϵ_i,t^(d)∼𝒩(0, 0.01) to each inner product of latent factors. Using <Ref>, we can calculate a lower bound on the phase length L of <Ref> such that the BIC condition is satisfied for units of type 1 who are recommended the control. We run three different sets of simulations, and compare the estimation error |𝔼̂[y̅_n,post^(0)] - 𝔼[y̅_n,post^(0)]| for a new type 1 unit n under control when incentivizing exploration using <Ref> (blue) to the estimation error without incentivizing exploration (orange). For both our method and the incentive-unaware ablation, we use the adaptive PCR method of <cit.> to estimate counterfactuals. All experiments are repeated 50 times, and we report both the average prediction error and the standard deviation for estimating the post-treatment outcome of type 1 units under control. Varying latent factor size: In <Ref>, we vary r ∈{2, 4, 6, 8} while keeping all other parameters of the problem instance constant. Increasing r increases the batch size L, so we see a modest decrease in performance as r increases. Varying strength of belief: In <Ref>, we explore the effects of changing the units' “strength of beliefs”. Specifically, we vary the gap between the prior mean outcome under control and treatment for type 1 units by setting μ_v_i^(1) - μ_v_i^(0)∈{ 0.2, 0.4, 0.6, 0.8 }.
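The outcome-generation process of the experiment setup above can be sketched as follows (parameter defaults mirror the setup; helper names are ours):

import numpy as np

def generate_panel(n=500, T0=100, T1=100, r=4, noise_sd=0.1, rng=None):
    """Alternating unit types with block-sparse latent factors; outcomes are
    inner products of latent factors plus N(0, 0.01) noise (sd = 0.1)."""
    rng = rng or np.random.default_rng(0)
    types = np.arange(n) % 2                 # units of alternating types
    V = rng.standard_normal((n, r))
    V[types == 0, r // 2:] = 0.0             # type 0 units live in the first block
    V[types == 1, :r // 2] = 0.0             # type 1 units live in the second block
    U_pre = np.zeros((T0, r))
    U_pre[0::2, r // 2:] = rng.uniform(0.25, 0.75, (U_pre[0::2].shape[0], r - r // 2))
    U_pre[1::2, :r // 2] = rng.uniform(0.25, 0.75, (U_pre[1::2].shape[0], r // 2))
    U_post = {0: rng.uniform(0.0, 1.0, (T1, r)), 1: rng.uniform(-1.0, 0.0, (T1, r))}
    Y_pre = V @ U_pre.T + noise_sd * rng.standard_normal((n, T0))
    Y_post = {d: V @ U_post[d].T + noise_sd * rng.standard_normal((n, T1)) for d in (0, 1)}
    return types, Y_pre, Y_post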
As the gap decreases, it becomes easier to persuade units to explore, and therefore our counterfactual estimates improve.§ CONCLUSION We study the problem of non-compliance when performing counterfactual estimation using panel data. Our focus is on synthetic control methods, which canonically require a unit overlap assumption on the donor units in order to provide valid finite sample guarantees. We shed light on this often-overlooked assumption, and provide an algorithm for incentivizing units to explore different interventions using tools from information design and online learning. After running our algorithm, the <ref> will be satisfied with high probability, which allows the principal to obtain valid finite sample guarantees for all units when using off-the-shelf synthetic control methods to estimate counterfactuals under control. We also extend our algorithm to satisfy the <ref> when there are more than two interventions, and provide hypothesis tests for determining whether the <ref> holds for a given test unit and set of donor units. Finally, we complement our theoretical findings with numerical simulations, and observe that our procedure for estimating counterfactual outcomes produces significantly more accurate counterfactual estimates when compared to existing methods which do not take unit incentives into consideration. There are several exciting directions for future research. Recent work (<cit.>) provides lower bounds on the sample complexity of incentivizing exploration in various multi-armed bandit settings. It would be interesting to provide analogous bounds on the number of units required to incentivize sufficient exploration in our panel data setting. Another avenue for future research is to design difference-in-differences or clustering-based algorithms for incentivizing exploration in other causal inference settings. Finally, it would be interesting to relax other assumptions typically needed for synthetic control, such as the analogous linear span inclusion assumption on the time-step latent factors (<Ref>). § APPENDIX FOR <REF>: INCENTIVIZED EXPLORATION FOR SYNTHETIC CONTROL §.§ Causal Parameter Recovery Let δ∈ (0,1) be an arbitrary confidence parameter and ρ > 0 be chosen to be sufficiently small. Further, assume that <Ref> and <ref> are satisfied, that there is some i_0 ≥ 1 such that rank(𝔼[Y_pre,i_0(d)]) = r, and that snr_i(d) ≥ 2 for all i ≥ i_0. Then, with probability at least 1 - 𝒪(kδ), simultaneously for all interventions d ∈ [k]_0, |𝔼̂[y̅_i, post^(d)] - 𝔼[y̅_i, post^(d)]| ≤α(δ), where α(δ) := (3√(T_0)/snr_i(d)) · ( L(√(74) + 12√(6)κ(𝔼[Y_pre,i(d)]))/((T - T_0) ·snr_i(d)) + √(err_i(d))/(√(T - T_0)·σ_r(𝔼[Y_pre,i(d)])) ) + 2L√(24T_0)/((T - T_0) ·snr_i(d)) + 12Lκ(𝔼[Y_pre,i(d)])√(3T_0)/((T - T_0) ·snr_i(d)) + 2√(err_i(d))/(√(T - T_0)·σ_r(𝔼[Y_pre,i(d)])) + Lσ√(log(k/δ))/√(T - T_0) + Lσ√(74log(k/δ))/(snr_i(d)√(T - T_0)) + 12σκ(𝔼[Y_pre,i(d)])√(6log(k/δ))/(snr_i(d)√(T - T_0)) + σ√(2 ·err_i(d) ·log(k/δ))/σ_r(𝔼[Y_pre,i(d)]),where 𝔼̂[y̅_i, post^(d)] := 1/(T - T_0)·⟨θ̂_i(d), y_i,pre⟩ is the estimated average post-intervention outcome for unit i under intervention d and ‖θ̂_i^(d)‖≤ L.
Given a gap ϵ and the same assumptions as in <Ref>, the probability that |𝔼̂[y̅_i, post^(d)] - 𝔼[y̅_i, post^(d)]| ≤ϵ is at least 1 - δ, whereδ≤ (log(n^(d)) ∨ k)/exp( ( √((σ_r_d(𝔼[Y_pre,i(d)])ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D))^2 ),with A = 3√(T_0)( ‖θ_i^(d)‖ (√(74) + 12√(6)κ(𝔼[Y_pre,i(d)]))/(T - T_0) + 1/√(T - T_0)), F = 2‖θ_i^(d)‖√(24T_0)/(T - T_0) + 12‖θ_i^(d)‖κ(𝔼[Y_pre,i(d)]) √(3T_0)/(T - T_0) + 2/√(T - T_0), D = ‖θ_i^(d)‖σ√(74)/√(T - T_0) + 12σκ(𝔼[Y_pre,i(d)]) √(6)/√(T - T_0) + σ√(2), E = ‖θ_i^(d)‖σ/√(T - T_0), and α = A + F + D(√(n^(d)) + √(T_0)) + σ_r_d(𝔼[Y_pre,i(d)]) E. We begin by setting the right-hand side of <Ref> to ϵ. The goal is to write the failure probability δ as a function of ϵ. For brevity, write s_r := σ_r_d(𝔼[Y_pre,i(d)]) and Λ := log((log(n^(d)) ∨ k)/δ). Using the notation above, we can write ϵ = A/(snr_i(d))^2 + F/snr_i(d) + D√(log(k/δ))/snr_i(d) + E√(log(k/δ)). First, we take a look at the signal-to-noise ratio snr_i(d). By definition, we have snr_i(d) = s_r/U_i = s_r/(√(n^(d)) + √(T_0) + √(log(log(n^(d))/δ))). Hence, 1/snr_i(d) = (√(n^(d)) + √(T_0) + √(log(log(n^(d))/δ)))/s_r. Observe that since snr_i(d) ≥ 2, we have 1/(snr_i(d))^2 ≤ 1/snr_i(d). Hence, we can upper bound ϵ as:ϵ≤ (A + F)/snr_i(d) + D√(log(k/δ))/snr_i(d) + E√(log(k/δ))≤ (A + F)(√(n^(d)) + √(T_0))/s_r + (A + F)√(log(log(n^(d))/δ))/s_r + D(√(n^(d)) + √(T_0) + √(log(log(n^(d))/δ)))√(log(k/δ))/s_r + E√(log(k/δ)).Since log(x) is a strictly increasing function for x > 0, both logarithms are at most Λ, and we can simplify the above expression as:ϵ≤ (A + F)(√(n^(d)) + √(T_0))/s_r + (A + F)√(Λ)/s_r + D(√(n^(d)) + √(T_0))√(Λ)/s_r + DΛ/s_r + E√(Λ).Subtracting the first term from both sides and multiplying by s_r, we have:s_r ϵ - (A + F)(√(n^(d)) + √(T_0)) ≤ (A + F)√(Λ) + D(√(n^(d)) + √(T_0))√(Λ) + DΛ + E s_r √(Λ) = (A + F + D(√(n^(d)) + √(T_0)) + E s_r)√(Λ) + DΛ.Letting α = A + F + D(√(n^(d)) + √(T_0)) + E s_r, we can rewrite the inequality above as:s_r ϵ - (A + F)(√(n^(d)) + √(T_0)) ≤α√(Λ) + DΛ.Then, we can complete the square and obtain:Λ + (α/D)√(Λ) + α^2/(4D^2) ≥ (s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)(√(Λ) + α/(2D))^2 ≥ (s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)√(Λ) ≥√((s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D)Λ≥ ( √((s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D))^2(log(n^(d)) ∨ k)/δ≥exp( ( √((s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D))^2 )δ≤ (log(n^(d)) ∨ k)/exp( ( √((s_r ϵ - (A + F)(√(n^(d)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D))^2 ). §.§ Proof of <Ref> Validity of <Ref>. First, we note that in the first stage of <Ref>, the principal does not provide any recommendations to the units and instead lets them pick their preferred intervention. The goal of this first stage is to ensure the linear span inclusion assumption (<ref>) is satisfied for type 1 units and intervention 1. This condition is equivalent to having enough samples of type 1 units such that the set of latent vectors {v_i}_i ∈𝒰^(1) spans the latent vector space S_1.
We invoke the following theorem from <cit.>, which shows that span({v_i}_i ∈𝒰^(1)) = S_1 with high probability: Let A be an m × n matrix whose rows A_i are independent, mean-zero, sub-gaussian isotropic random vectors in ℝ^n. Then for any t ≥ 0, with probability at least 1 - 2 exp(-t^2), we have√(m) - c_0 K^2 (√(n) + t) ≤ s_n(A) ≤ s_1(A) ≤√(m) + c_0 K^2 (√(n) + t),where K = max_i ‖A_i‖_ψ_2 and c_0 is an absolute constant. Hence, after observing N_0^(1) samples of type 1 units taking intervention 1, the linear span inclusion assumption is satisfied with probability at least 1 - 2exp( - (√(N_0^(1))/(c_0 K^2) - √(r_1))^2 ). Suppose there are two interventions, and assume that <Ref> holds for some constant gap C ∈ (0, 1). If N_0 is chosen to be large enough such that <ref> is satisfied for all units of type 1 under treatment with probability at least 1 - δ_0, then <Ref> with parameters δ, B, C is BIC for all units of type 1 ifL ≥ 1 + max_i ∈𝒰^(1){(μ_v_i^(1) - μ_v_i^(0))/(( C - α(δ_PCR) - σ√(2log(1/δ_ϵ_i)/T_1)) ℙ[ μ_v_i^(0) - y̅_i,post^(1)≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1)] - 2 δ)},where α(δ_PCR) is the high-probability confidence bound defined in <Ref>, δ = δ_ϵ_i + δ_0 + δ_PCR, δ_ϵ_i∈ (0,1) is the failure probability of the Chernoff-Hoeffding bound on the average of the sub-Gaussian noise {ϵ_i,t^(1)}_t=T_0+1^T, andδ_PCR≤ (log(N_0^(1)) ∨ k)/exp( ( √((σ_r(Y_pre, N_0^(1))(C/2) - (A + F)(√(N_0^(1)) + √(T_0)))/D + α^2/(4D^2)) - α/(2D))^2 ),and κ(Y_pre, N_0^(1)) = σ_1 (Y_pre, N_0^(1))/σ_r_1(Y_pre, N_0^(1)) is the condition number of the matrix of observed pre-intervention outcomes for the subset of the first N_0 units who have taken the treatment. The remaining variables in δ_PCR are defined as A = 3√(T_0)( Γ (√(74) + 12√(6)κ(Y_pre, N_0^(1)))/(T - T_0) + 1/√(T - T_0)), F = 2Γ√(24T_0)/(T - T_0) + 12Γκ(Y_pre, N_0^(1)) √(3T_0)/(T - T_0) + 2/√(T - T_0), D = Γσ√(74)/√(T - T_0) + 12 σκ(Y_pre, N_0^(1)) √(6)/√(T - T_0) + σ√(2), E = Γσ/√(T - T_0), and α = A + F + D(√(N_0^(1)) + √(T_0)) + σ_r_1(Y_pre, N_0^(1)) E. Moreover, if the number of batches B is chosen to be large enough such that, with probability at least 1 - δ, rank([𝔼[y_i,pre]^⊤ : d_i = 0, i ∈𝒰^(1)]_i=1^N_0 + B · L) = rank([𝔼[y_j,pre]^⊤]_j ∈𝒰^(1)), then the <ref> will be satisfied for all type 1 units under control with probability at least 1 - 2 δ. At a particular time step, unit i of type 1 can be convinced to pick control if 𝔼_v_i[y̅_i,post^(0) - y̅_i,post^(1) | d̃ = 0] ℙ[d̃ = 0] ≥ 0. There are two possible disjoint events under which unit i is recommended intervention 0: either intervention 0 is better empirically, i.e., μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C, or intervention 1 is better and unit i is chosen for exploration. Hence, we have𝔼_v_i[y̅_i,post^(0) - y̅_i,post^(1) | d̃ = 0] ℙ[d̃ = 0] = 𝔼[y̅_i,post^(0) - y̅_i,post^(1) | μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C] ℙ[μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C] ( 1 - 1/L) + (1/L)𝔼_v_i[y̅_i,post^(0) - y̅_i,post^(1)] = ( μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C] )ℙ[μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C] (1 - 1/L) + (1/L)(μ_v_i^(0) - μ_v_i^(1)).Rearranging the terms and taking the maximum over all units of type 1 gives the lower bound on phase length L:L ≥ 1 + (μ_v_i^(1) - μ_v_i^(0))/((μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i]) ℙ_v_i[ξ_C, i]).To complete the analysis, we want to find a lower bound for the denominator. That is, we want to lower bound( μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C] ) ℙ_v_i[μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C]. Let ξ_0 denote the event that <ref> is satisfied for type 1 units under treatment.
From <Ref>, we know that event ξ_0 occurs with probability at least 1-δ_0 for some δ_0 ∈ (0,1). Let ξ_PCR denote the event that the high-probability concentration bound holds for units of type 1 and intervention 1. Finally, let ξ_ϵ_i denote the event that the Chernoff-Hoeffding bound holds on the average of the sub-Gaussian noise ϵ_i,t^(1); that is, with probability at least 1 - δ_ϵ_i, we have:|1/T_1∑_t=T_0 + 1^T ϵ_i,t^(1)| ≤σ√(2 log(1/δ_ϵ_i)/T_1). Define the clean event ξ as the event in which ξ_0, ξ_ϵ_i and ξ_PCR happen simultaneously; ξ occurs with probability at least 1 - δ, where δ = δ_0 + δ_PCR + δ_ϵ_i. Then, we have:(μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i]) ℙ[ξ_C, i] = (μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i, ξ]) ℙ[ξ_C, i, ξ] + (μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i, ξ^c]) ℙ[ξ_C, i, ξ^c] ≥ (μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i, ξ]) ℙ[ξ_C, i, ξ] - 2δ,since ℙ[ξ^c] < δ and μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1)] ≥ -2. We proceed to lower bound the expression above as follows:μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i, ξ] = μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C, |𝔼̂[y̅_i,post^(1)] - 𝔼[y̅_i,post^(1)]| ≤α(δ_PCR), |(1/T_1)∑_t=T_0 + 1^T ϵ_i,t^(1)| ≤σ√(2 log(1/δ_ϵ_i)/T_1)] = μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | μ_v_i^(0) - 𝔼[y̅_i,post^(1)] ≥ C - α(δ_PCR), |(1/T_1)∑_t=T_0 + 1^T ϵ_i,t^(1)| ≤σ√(2 log(1/δ_ϵ_i)/T_1)] = μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | μ_v_i^(0) - y̅_i,post^(1)≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1)] ≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1).Furthermore, we can write the probability of the joint event ξ_C, i, ξ as:ℙ[ξ_C, i, ξ] = ℙ[μ_v_i^(0)≥𝔼̂[y̅_i,post^(1)] + C, |𝔼̂[y̅_i,post^(1)] - 𝔼[y̅_i,post^(1)]| ≤α(δ_PCR), |(1/T_1)∑_t=T_0 + 1^T ϵ_i,t^(1)| ≤σ√(2 log(1/δ_ϵ_i)/T_1)] = ℙ[ μ_v_i^(0) - y̅_i,post^(1)≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1)].Hence, we can derive the following lower bound on the denominator of L:(μ_v_i^(0) - 𝔼_v_i[y̅_i,post^(1) | ξ_C, i]) ℙ[ξ_C, i] ≥( C - α(δ_PCR) - σ√(2log(1/δ_ϵ_i)/T_1)) ℙ[ μ_v_i^(0) - y̅_i,post^(1)≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1)] - 2 δ. Applying this lower bound to the expression for L and taking the maximum over type 1 units, we have:L ≥ 1 + max_i ∈𝒰^(1){(μ_v_i^(1) - μ_v_i^(0))/(( C - α(δ_PCR) - σ√(2log(1/δ_ϵ_i)/T_1)) ℙ[ μ_v_i^(0) - y̅_i,post^(1)≥ C - α(δ_PCR) - σ√(2 log(1/δ_ϵ_i)/T_1)] - 2 δ)}.§ APPENDIX FOR <REF>: EXTENSION TO SYNTHETIC INTERVENTIONS Suppose that <Ref> holds for some constant gap C ∈ (0,1). If the number of initial units N_0 is chosen to be large enough such that <ref> is satisfied for all units of subtype τ under intervention τ[1] with probability 1 - δ, then <Ref> with parameters δ, L, B, C is BIC for all units of subtype τ if:L ≥ 1 + max_i ∈𝒰^(τ){(μ_v_i^(τ[1]) - μ_v_i^(τ[k]))/(( C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)) (1 - δ)ℙ[ξ_C,i] - 2 δ)}. Moreover, if the number of batches B is chosen to be large enough such that, with probability at least 1-δ, we have rank([𝔼[y_i,pre]^⊤ : d_i = ℓ≠τ[1], i ∈𝒰^(τ)]_i=1^N_0 + B · L) = rank([𝔼[y_j,pre]^⊤]_j ∈𝒰^(τ)), then the <ref> will be satisfied for all units of subtype τ under all interventions with probability at least 1 - 𝒪(k δ). In each round k ∈ [K], <Ref> is Bayesian incentive-compatible for at least a 1/(K-1) fraction of the subspace unit population. In order to prove <Ref>, we will show something slightly stronger; namely, that in round k, <Ref> is BIC for at least a 1/(K-k) fraction of the subspace unit population. The process for selecting each exploit intervention d_k may be thought of as a game between the algorithm and nature.
For each round k ∈ [K], nature moves first by selecting the relevant probabilities over subtypes in the algorithm's optimization in (<ref>), given the parts of the joint distribution they have already fixed in previous rounds. Note that this is without loss of generality, via the principle of deferred decisions. The key observation is that each probability p(τ[k”] ∈𝒟_k ∀ k” < k') for k' ≤ k is fixed before round k. In round k, nature is deciding how to split the probability mass p(τ[k”] ∈𝒟_k ∀ k” < k') amongst the K-k remaining interventions, for every probability indexed by k' ≤ k. Since the algorithm will pick the intervention with the highest cumulative probability mass, the best that nature can do, if they are trying to harm the algorithm, is to split the probability mass evenly amongst the K-k remaining interventions. Having established that ∑_k'=1^k p(τ[k'] = d_k, τ[k”] ∈𝒟_k ∀ k” < k') ≥ 1/(K-1) for all k ∈ [K], it suffices to show that <Ref> is BIC for this subset of the unit population in round k. According to our recommendation policy, unit i is either recommended intervention τ[1] or intervention ℓ. We will prove that in either case, unit i will comply with the principal's recommendation. Let ξ̂_C,i^(ℓ) denote the event that the exploit intervention for unit i is intervention ℓ. Formally, we haveξ̂_C,i^(ℓ) = {𝔼̂[y̅_i,post^(τ[1])] ≤min_1 < j < ℓ𝔼̂[y̅_i,post^(τ[j])] - C and max_1 ≤ j < ℓ𝔼̂[y̅_i,post^(τ[j])] ≤μ_v_i^(ℓ) - C }. When unit i is recommended intervention ℓ: When a unit i ∈𝒰^(τ) is recommended intervention ℓ, we argue that this unit will not switch to any other intervention j ≠ℓ. Because of our ordering of the prior mean rewards, any intervention j > ℓ has had no samples collected by a subtype τ unit, and μ_v_i^(τ[j])≤μ_v_i^(τ[ℓ]). Hence, we only need to focus on the cases where j < ℓ. For the recommendation policy to be BIC, we need to show that 𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | d̃ = ℓ] ℙ[d̃ = ℓ] ≥ 0.There are two possible disjoint events under which unit i is recommended intervention ℓ: either ℓ is determined to be the “exploit” intervention, or unit i is chosen as an “explore” unit. Since being chosen as an explore unit does not convey any information about the rewards, conditional on being an explore unit the expected gain for unit i from switching to intervention j is simply μ_v_i^(ℓ) - μ_v_i^(τ[j]). On the other hand, if intervention ℓ is the exploit intervention, then event ξ̂_C,i^(ℓ) has happened. Hence, we can rewrite the left-hand side of the BIC condition above as:𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | d̃ = ℓ] ℙ[d̃ = ℓ] = 𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] ( 1 - 1/L) + (1/L)(μ_v_i^(ℓ) - μ_v_i^(τ[j])).Rearranging the terms, we have the following lower bound on the phase length L for the algorithm to be BIC:L ≥ 1 + (μ_v_i^(τ[j]) - μ_v_i^(ℓ))/(𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ)]ℙ[ξ̂_C,i^(ℓ)]).To complete the analysis, we need to lower bound the denominator of the expression above. Consider the following event: ξ̃_PCR := {∀ j ∈ [k]: |𝔼̂[y̅_i,post^(τ[j])] - 𝔼[y̅_i,post^(τ[j])]| ≤α(δ_PCR) }. Let ξ̃_0 denote the event that <ref> is satisfied for units of subtype τ under intervention τ[1]. From <Ref>, we know that event ξ̃_0 occurs with probability at least 1 - δ_0 for some δ_0 ∈ (0,1).
Furthermore, let ξ̃_ϵ_i denote the event that the Chernoff-Hoeffding bound on the noise sequences {ϵ_i,t^(τ[j])}_t=T_0+1^T holds; that is, with probability at least 1 - δ_ϵ_i, we have: |1/T_1∑_t = T_0 + 1^T ϵ_i,t^(τ[j])| ≤σ√(2 log(1 / δ_ϵ_i)/T_1). Then, we can define a clean event ξ̃ in which the events ξ̃_PCR, ξ̃_0 and ξ̃_ϵ_i happen simultaneously; ξ̃ occurs with probability at least 1 - δ, where δ = δ_PCR + δ_ϵ_i + δ_0. Note that since y̅_i,post^(τ[j])≤ 1, we can rewrite the denominator as:𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] = 𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ), ξ̃] ℙ[ξ̂_C,i^(ℓ), ξ̃] + 𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ), ξ̃^c] ℙ[ξ̂_C,i^(ℓ), ξ̃^c] ≥𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ), ξ̃] ℙ[ξ̂_C,i^(ℓ), ξ̃] - 2δ. Define an event 𝒢_i^(ℓ) on the true average expected post-intervention outcomes of the interventions as follows:𝒢_i^(ℓ) := { y̅_i,post^(τ[1])≤min_1 < j < ℓy̅_i,post^(τ[j]) - C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1) and max_1 ≤ j < ℓy̅_i,post^(τ[j])≤μ_v_i^(ℓ) - C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)}.We can observe that under event ξ̃, event ξ̂_C,i^(ℓ) is implied by event 𝒢_i^(ℓ). Hence, we have: ℙ[ξ̃, 𝒢_i^(ℓ)] ≤ℙ[ξ̃, ξ̂_C,i^(ℓ)]. We can rewrite the left-hand side as: ℙ[ξ̃, 𝒢_i^(ℓ)] = ℙ[ξ̃ | 𝒢_i^(ℓ)] ℙ[𝒢_i^(ℓ)] ≥ (1 - δ) ℙ[𝒢_i^(ℓ)]. Substituting these expressions using 𝒢_i^(ℓ) for the ones using ξ̂_C,i^(ℓ) in the denominator gives:𝔼[μ_v_i^(ℓ) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] ≥ (C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)) (1 - δ) ℙ[𝒢_i^(ℓ)] - 2δ.Applying this lower bound to the expression for L and taking the maximum over subtype τ units, we have:L ≥ 1 + max_i ∈𝒰^(τ){(μ_v_i^(τ[1]) - μ_v_i^(τ[k]))/(( C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)) (1 - δ)ℙ[𝒢_i^(ℓ)] - 2 δ)}. When unit i is recommended intervention τ[1]: When unit i is recommended intervention τ[1], they know that they are not in the explore group. Hence, the event ξ̂_C,i^(ℓ) did not happen, and the BIC condition in this case can be written as: for any intervention τ[j] ≠τ[1],𝔼[y̅_i,post^(τ[1]) - y̅_i,post^(τ[j]) | (ξ̂_C,i^(ℓ))^c] ℙ[(ξ̂_C,i^(ℓ))^c] ≥ 0.Similar to the previous analysis on the recommendation of intervention ℓ, it suffices to only consider interventions τ[j] with j < ℓ. We have:𝔼[y̅_i,post^(τ[1]) - y̅_i,post^(τ[j]) | (ξ̂_C,i^(ℓ))^c] ℙ[(ξ̂_C,i^(ℓ))^c] = 𝔼[y̅_i,post^(τ[1]) - y̅_i,post^(τ[j])] - 𝔼[y̅_i,post^(τ[1]) - y̅_i,post^(τ[j]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] = μ_v_i^(τ[1]) - μ_v_i^(τ[j]) + 𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)].By definition, we have μ_v_i^(τ[1])≥μ_v_i^(τ[j]). Hence, it suffices to show that for any intervention 1 < j < ℓ, we have:𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] ≥ 0. Observe that with the event ξ̃ defined above, we can write:𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] = 𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C, i^(ℓ), ξ̃] ℙ[ξ̂_C,i^(ℓ), ξ̃] + 𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ), ξ̃^c] ℙ[ξ̂_C,i^(ℓ), ξ̃^c] ≥𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ), ξ̃] ℙ[ξ̂_C,i^(ℓ), ξ̃] - 2δ.When event ξ̂_C,i^(ℓ) happens, we know that 𝔼̂[y̅_i,post^(τ[j])] ≥𝔼̂[y̅_i,post^(τ[1])] + C. Furthermore, when event ξ̃ happens, we know that y̅_i,post^(τ[j])≥𝔼̂[y̅_i,post^(τ[j])] - α(δ_PCR) - σ√(2log(1/δ_ϵ_i)/T_1) and 𝔼̂[y̅_i,post^(τ[1])] ≥y̅_i,post^(τ[1]) - α(δ_PCR) - σ√(2log(1/δ_ϵ_i)/T_1). Hence, when these two events ξ̂_C,i^(ℓ) and ξ̃ happen simultaneously, we have y̅_i,post^(τ[j])≥y̅_i,post^(τ[1]) + C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1).
Therefore, the lower bound can be written as:𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] ≥ (C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)) ℙ[ξ̂_C,i^(ℓ), ξ̃] - 2δ.Similar to the previous analysis for the case when unit i is recommended intervention ℓ, we have:𝔼[y̅_i,post^(τ[j]) - y̅_i,post^(τ[1]) | ξ̂_C,i^(ℓ)] ℙ[ξ̂_C,i^(ℓ)] ≥ (C - 2α(δ_PCR) - 2σ√(2log(1/δ_ϵ_i)/T_1)) (1 - δ) ℙ[𝒢_i^(ℓ)] - 2δ.Choosing C large enough that the right-hand side is non-negative, we conclude the proof.§ APPENDIX FOR <REF>: TESTING WHETHER THE <REF> HOLDS Under <Ref>, ifr^3/2√(log(T_0 |𝒟|))/(‖ω̃^(n,0)‖_2 ·min{T_0, |𝒟|, T_0^1/4|𝒟|^1/2}) = o(1) and(1/(√(T_1)‖ω̃^(n,0)‖_2))∑_t=T_0+1^T ∑_i ∈𝒟𝔼[y_i,t^(0)] · (ω̂^(n,0)[i] - ω̃^(n,0)[i]) = o_p(1),then as |𝒟|, T_0, T_1 →∞, the Asymptotic Hypothesis Test falsely accepts the hypothesis with probability at most 5%, where σ̂^2 is defined as in Equation (4) and ω̂^(n,0) as in Equation (5) of <cit.>. Condition (<ref>) requires that the ℓ_2 norm of ω̃^(n,0) is sufficiently large, and condition (<ref>) requires that the estimation error of ω̂^(n,0) decreases sufficiently fast. See Section 5.3 of <cit.> for more details. We write x →^p y (resp. x →^d y) if x converges in probability (resp. distribution) to y. Let σ' denote the true standard deviation of ϵ_i,t. By Theorem 3 of <cit.>, we know that as |𝒟|, T_0, T_1 →∞: * (√(T_1)/(σ' ‖ω̃^(n,0)‖_2)) (𝔼̂[y̅_n,pre”] - 𝔼[y̅_n,pre”]) →^d 𝒩(0,1), * σ̂→^p σ', and * ω̂^(n,0)→^p ω̃^(n,0). By the continuous mapping theorem, we know that 1/σ̂→^p 1/σ' and 1/‖ω̂^(n,0)‖_2 →^p 1/‖ω̃^(n,0)‖_2. We can rewrite 𝔼̂[y̅_n,pre”] - y̅_n,pre” as (𝔼̂[y̅_n,pre”] - 𝔼[y̅_n,pre”]) minus the average of the second-half noise terms, which converges in probability to 0. Applying Slutsky's theorem several times obtains the desired result.
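To make the final step explicit (this is our own expansion of the terse application of Slutsky's theorem, in LaTeX notation):

\[
\frac{\sqrt{T_1}}{\hat\sigma\,\|\hat\omega^{(n,0)}\|_2}
\bigl(\hat{\mathbb{E}}[\bar y''_{n,\mathrm{pre}}]-\mathbb{E}[\bar y''_{n,\mathrm{pre}}]\bigr)
=\underbrace{\frac{\sigma'}{\hat\sigma}\cdot
\frac{\|\tilde\omega^{(n,0)}\|_2}{\|\hat\omega^{(n,0)}\|_2}}_{\overset{p}{\to}\,1}
\cdot
\underbrace{\frac{\sqrt{T_1}}{\sigma'\,\|\tilde\omega^{(n,0)}\|_2}
\bigl(\hat{\mathbb{E}}[\bar y''_{n,\mathrm{pre}}]-\mathbb{E}[\bar y''_{n,\mathrm{pre}}]\bigr)}_{\overset{d}{\to}\,\mathcal{N}(0,1)}
\overset{d}{\to}\mathcal{N}(0,1),
\]

so replacing the unknown σ' and ω̃^(n,0) by their estimates leaves the limiting distribution unchanged, and one further application of Slutsky's theorem absorbs the vanishing average noise term when the statistic is evaluated at y̅_n,pre” instead of 𝔼[y̅_n,pre”].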
Group Multi-View Transformer for 3D Shape Analysis with Spatial Encoding

Lixiang Xu, Member, IEEE, Qingzhe Cui, Richang Hong, Senior Member, IEEE, Wei Xu, Enhong Chen, Senior Member, IEEE, Xin Yuan, Member, IEEE, Chenglong Li and Yuanyan Tang, Life Fellow, IEEE

This work was financially supported by the National Natural Science Foundation of China (62176085, 62172458), the Scientific Research Innovation Team in Colleges and Universities of Anhui Province (2022AH010095) and the Industry-University-Research Cooperation Project (GP/026/2020 and HF-010-2021) of Zhuhai City, Guangdong Province, China. (Corresponding authors: Richang Hong and Enhong Chen. Equal contribution: Lixiang Xu and Qingzhe Cui.)

Lixiang Xu, Qingzhe Cui and Wei Xu are with the College of Artificial Intelligence and Big Data, Hefei University, Hefei 230027, China (e-mail: [email protected]; [email protected]; [email protected]).

Richang Hong is with the School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China (e-mail: [email protected]).

Enhong Chen is with the Anhui Province Key Laboratory of Big Data Analysis and Application, School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui 230000, China (e-mail: [email protected]).

Xin Yuan is with the School of Electrical and Mechanical Engineering, The University of Adelaide, Adelaide, SA 5005, Australia (e-mail: [email protected]).

Chenglong Li is with the School of Artificial Intelligence, Anhui University, Hefei 230601, China (e-mail: [email protected]).

Yuanyan Tang is with the Zhuhai UM Science and Technology Research Institute, FST University of Macau, Macau (e-mail: [email protected]).

In recent years, the results of view-based 3D shape recognition methods
have saturated, and models with excellent performance cannot be deployed on memory-limited devices due to their huge parameter sizes. To address this problem, we introduce a compression method based on knowledge distillation for this field, which largely reduces the number of parameters while preserving model performance as much as possible. Specifically, to enhance the capabilities of smaller models, we design a high-performing large model called the Group Multi-view Vision Transformer (GMViT). In GMViT, the view-level ViT first establishes relationships between view-level features. Additionally, to capture deeper features, we employ a grouping module to enhance view-level features into group-level features. Finally, the group-level ViT aggregates group-level features into complete, well-formed 3D shape descriptors. Notably, in both ViTs, we introduce spatial encoding of camera coordinates as novel position embeddings. Furthermore, we propose two compressed versions based on GMViT, namely GMViT-simple and GMViT-mini. To enhance the training effectiveness of the small models, we introduce a knowledge distillation method throughout the GMViT pipeline, where the key outputs of each GMViT component serve as distillation targets. Extensive experiments demonstrate the efficacy of the proposed method. The large model GMViT achieves excellent 3D classification and retrieval results on the benchmark datasets ModelNet, ShapeNetCore55, and MCB. The smaller models, GMViT-simple and GMViT-mini, reduce the parameter size by 8 and 17.6 times, respectively, and improve shape recognition speed by 1.5 times on average, while preserving at least 90% of the classification and retrieval performance. 3D object recognition, Multi-view ViT, View grouping, 3D position embedding, Knowledge distillation. § INTRODUCTION With the popularity of various 3D acquisition devices, the volume of 3D data has surged, which in turn has facilitated a shift from theoretical research on 3D data to experimental, deep-learning-based research. The main deep learning methods for 3D shape analysis are voxel-based methods <cit.>, point-based methods <cit.> and view-based methods <cit.>. All of the above methods have been widely applied in various fields such as autonomous driving, virtual/augmented reality, and medical diagnosis. Voxel-based methods extend 2D pixels to 3D space and extract their features through convolutional neural networks (CNNs) equipped with 3D convolutional kernels. Although this type of approach can achieve satisfactory performance, the memory footprint and computational cost caused by increasing voxel resolution are significant. Point-based methods generate point clouds by scanning the surface of 3D objects with devices such as LiDAR, then learn geometric features on the point cloud surface through deep learning methods, and finally aggregate the extracted local information into global features using symmetric functions. View-based methods render the 3D target from different angles to obtain multiple views, then extract information from the individual views separately, and finally aggregate all the view features into a 3D shape descriptor. How to efficiently fuse multiple view features while avoiding feature redundancy has always been the most important issue for this class of methods, because a single viewing angle captures an object only partially, while views rendered from adjacent angles are highly similar.
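To make the abstract's spatial encoding concrete, the sketch below maps each view's camera rendering coordinates to a learned position embedding that is added to its view token before self-attention; this is a minimal illustrative design of ours, not the paper's exact module.

import torch
import torch.nn as nn

class CameraPositionEmbedding(nn.Module):
    """Map 3D camera coordinates to position embeddings for view tokens,
    replacing the 1D index embeddings of a standard ViT."""
    def __init__(self, embed_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, view_tokens, cam_xyz):
        # view_tokens: (batch, n_views, embed_dim); cam_xyz: (batch, n_views, 3)
        return view_tokens + self.mlp(cam_xyz)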
To solve the above problem, a number of view feature fusion methods <cit.> have been proposed. Initially, using the symmetry of pooling functions was the most direct means to aggregate multiple view features into a 3D shape descriptor, but such simple pooling operations ignore the complementary relationships between views, which inevitably leads to loss of information. Thus, a number of approaches attempting to fully fuse multiple view features have since been introduced, such as using group pooling to capture the relationships of similar views <cit.>, treating multiple views as an ordered sequence and capturing the sequential relationship between them via recurrent neural networks <cit.> (RNNs) <cit.>, trying to learn the optimal rendering positions of the camera to obtain more expressive images <cit.>, employing the self-attention mechanism of the Vision Transformer <cit.> (ViT) to obtain global information between views <cit.>, and considering the spatial structure of the views as a graph and utilizing a graph convolutional neural network <cit.> (GCN) to aggregate information between views <cit.>. Despite the advancements made by the aforementioned methods in addressing the issue of view feature fusion, some limitations still persist. For instance, the group pooling approach <cit.> incorporates group feature pooling before global pooling, yet this intermediary step merely reduces the pooling scale, resulting in some information loss. To compensate for the inevitable loss, it becomes crucial to allow all features to interact fully before pooling. Consequently, our study introduces a novel approach that establishes relationships between view-level features and group-level features before applying group and global pooling independently. Furthermore, the RNN-based methods <cit.> primarily consider 1D sequential relationships among views, while the self-attention-based approach <cit.> uses traditional position embeddings to establish view relationships, inadvertently overlooking the spatial relationships among views. Given that multi-views are generated by placing the camera at various coordinates around the 3D object, and these coordinates inherently carry vital positional information, we propose to map the rendering coordinates of the views to potential position embeddings when establishing view relationships through ViT. Additionally, various 3D shape recognition methods <cit.> have demonstrated exceptional performance, reaching a saturation point on certain 3D shape recognition datasets. Despite their improved performance, these methods tend to increase model parameters and reduce computation speed, restricting deployment to high-capability machines and limiting their application on mobile devices. Thus, it becomes necessary to compress the models while maintaining their excellent performance. Recent research has focused on knowledge distillation (KD) methods <cit.> for model compression. The concept was initially introduced by Hinton et al. <cit.> and has since evolved, with KD involving the use of a high-performance teacher network's output as soft labels for a low-performance student network. While most KD advancements were designed for CNN models, several KD methods <cit.> tailored for the ViT model have recently emerged, demonstrating their efficacy in feature or class token distillation through extensive experimentation. While extensive research has focused on KD in the field of 2D image recognition, its application in 3D shape recognition remains unexplored.
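For reference, the soft-label distillation objective introduced by Hinton et al. can be sketched as follows (the temperature and weighting are illustrative hyperparameter choices, not values from this paper):

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation: the student matches the teacher's
    temperature-softened class distribution in addition to the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)   # T^2 keeps the gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard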
3D data comprises complex but more comprehensive information compared to 2D data, necessitating additional computational steps for effective information extraction. For instance, in the 2D domain, the network model only needs to extract information from a single image to recognize an object. However, in the 3D multi-view domain, the network model must process individual images and integrate valuable information from multiple images while discarding redundant information. Therefore, it is necessary to compress the multi-view processing model. In multi-view knowledge distillation, the choice of intermediate outputs from the teacher model as distillation targets should consider several factors. First, selecting outputs from structurally complex modules, like the self-attention mechanism in ViT, can transfer more sophisticated feature information that is difficult for the weaker student model to learn. Second, outputs from information-rich modules, like fully-connected layers, contain more global features and can also be beneficial distillation targets. Additionally, combining outputs from different abstraction levels, covering both low-level and high-level semantics, can enable more comprehensive feature distillation. Analyzing each module's impact on the downstream task and selecting influential outputs is another strategy. Overall, choosing intermediate outputs with high information content and significance to guide the student model in learning the teacher's core knowledge enables effective distillation. Specifically, this paper performs feature distillation from the CNN, view-level ViT, and group-level ViT modules to transfer multi-scale information. The group tokens are also distilled to align the grouping. Logit distillation further provides holistic guidance. This multifaceted approach allows comprehensive knowledge transfer from teacher to student. The main contributions of this paper are as follows:

* Proposing the Group Multi-view Vision Transformer (GMViT), a 3D shape recognition model that utilizes the rendering coordinates of views as position embeddings for the first time. This approach achieves state-of-the-art classification and retrieval results on benchmark datasets.

* Designing compressed versions of GMViT, namely GMViT-simple and GMViT-mini, which significantly reduce the size of model parameters and computational complexity while improving the speed of 3D object recognition.

* Pioneering the application of the knowledge distillation method in the field of 3D shape recognition. GMViT serves as the teacher model, while GMViT-simple and GMViT-mini are utilized as student models. The student models preserve the majority of the teacher model's performance through feature-based, group token-based, and logit-based distillation methods.

The rest of the paper is organized as follows. Section <ref> presents the related work. Section <ref> details the proposed method. Section <ref> presents the experimental results and analysis. Section <ref> summarizes the full paper.

§ RELATED WORK

This section provides a review of voxel-based, point-based, and view-based 3D shape analysis methods. In addition, existing works on knowledge distillation are also reviewed.

§.§ Voxel-Based Methods

Voxel-based methods divide the 3D space into voxel units and construct a shape representation of the 3D object on them. The initial volume processing method is 3D ShapeNets <cit.>, where the probability distribution of binary variables on a 3D voxel grid is obtained by learning a convolutional deep belief network.
VoxNet <cit.> utilizes CNNs equipped with 3D convolutions to process voxel occupancy grids. 3D convolution has higher complexity than 2D convolution, which leads to an exponential increase in the time complexity and computational cost of such methods when the depth of the network or the resolution of the voxels increases. Therefore, some methods featuring low consumption and high efficiency have been proposed. O-CNN <cit.> is an octree-based CNN that uses octrees to partition 3D shapes at different scales, which greatly improves the efficiency of voxel processing.

§.§ Point-Based Methods

Point clouds, compared to other modalities, have a simple representation comprising the coordinates of points on a 3D shape's surface. PointNet <cit.> processes point clouds directly using deep networks, extracting features with MLPs and obtaining global features through pooling, effectively addressing permutation invariance and disorder. PointNet++ <cit.> improves segmentation by incorporating neighborhood information, overcoming PointNet's limitation. Wang et al. <cit.> proposed the EdgeConv module, establishing edges between points and their neighbors using KNN. Lin et al. <cit.> used a deformable kernel with a 3D graph convolutional neural network. AdaptConv <cit.> developed an adaptive kernel considering central points and their neighbors. With the success of the self-attention mechanism, subsequent models like PCT <cit.> and Point Transformer <cit.> aim to establish global relationships among all points.

§.§ View-Based Methods

View-based methods represent 3D objects through a set of 2D views rendered at different angles. MVCNN <cit.>, the earliest deep learning study of this kind, uses a set of CNNs with shared weights to extract the features of all views, and then feeds these features to a pooling function to obtain shape descriptors. Although the process is simple, it provides a very valuable reference for subsequent studies. GVCNN <cit.> incorporated a hierarchical structure that divides similar view features into groups and applies pooling functions within each group and layer. This approach aims to mitigate the feature loss resulting from the direct employment of global pooling. In contrast to GVCNN, we introduce the Vision Transformer before the group pooling and global pooling stages. This facilitates the establishment of global relationships between view-level features and between group-level features, respectively, and consequently mitigates the information loss resulting from pooling. Wei et al. <cit.> considered a set of views as a graph, aggregated the neighboring features of each view node through a GCN, and combined view features at different scales using a hierarchical structure. The MVTN <cit.> proposed by Hamdi et al. improves the representation of 3D objects by learning the optimal rendering positions of the views.

Some methods utilize the order of view arrangement to enhance the learning of shape descriptors. These methods organize a set of views into a specific sequence based on predefined rules and subsequently utilize RNNs to capture temporal features among the views. Ma et al. <cit.> assigned weights to and aggregated view features from each time step of the Long Short-Term Memory network <cit.> (LSTM) to derive global features. Xu et al. <cit.> captured the bi-directional dependency of view sequences by employing a Bi-directional Long Short-Term Memory network <cit.> (Bi-LSTM). Jin et al.
<cit.> introduced a part-based recurrent feature aggregation module, which utilizes an LSTM to accumulate features from specific regions within each view over time. The SeqViews2SeqLabels <cit.> model primarily comprises an Encoder RNN and a Decoder RNN. The Encoder RNN is responsible for aggregating global features from a sequence of views, while the Decoder RNN is utilized for predicting the label of a 3D shape. In contrast, the 3D2SeqViews <cit.> model does not rely on an RNN structure to acquire sequence features. Instead, it employs hierarchical attention modules to aggregate view features into global features.

Additionally, there exist methods that leverage the self-attention mechanism of ViT to capture the global relationships among views. Chen et al. proposed MVT <cit.>, a method that initially employs a Local Transformer Encoder to capture relationships between patches within each view individually. Subsequently, a Global Transformer Encoder is utilized to enable comprehensive interaction among patches from all views. MVDAN <cit.> combines the two features produced by the view space attention block and the channel attention block to generate compact shape descriptors. Nie et al. <cit.> departed from the conventional parallel multi-head self-attention approach and facilitated the fusion of multi-view features through stacked deep self-attention. Lin et al. <cit.> highlighted that aggregating neighboring views could result in feature redundancy. Therefore, they introduced Mid-Range and Long-Range views to complement the Short-Range view features. This approach involved aggregating view features at each scale using the ViT Encoder. The aforementioned methods employ regular position embeddings, such as <cit.>, during the aggregation of view-level features using ViT. Views are generated by cameras that are discretely positioned in 3D space, and unlike patches of 2D images, they do not exhibit fixed front-to-back dependencies. Consequently, our GMViT maps the rendering coordinates of each view to novel position embeddings.

§.§ Knowledge Distillation

KD, a highly effective method for enhancing the performance of small models, has attracted significant attention in recent years. Hinton et al. <cit.> pioneered the usage of soft labels derived from the teacher model's output to enhance the training of the student model. This approach not only significantly compressed the small model but also yielded remarkable performance improvements. Initially, KD was predominantly employed for compressing CNN-based models. However, Touvron et al. <cit.> extended the application of KD to ViT-based models and demonstrated its viability. The recently proposed MiniViT <cit.> by Zhang et al. employs self-attention distillation and hidden-state distillation, which are forms of feature-based distillation. Yang et al. <cit.> proposed a novel approach for feature-based ViT distillation, which distills three distinct components of the teacher model. All the aforementioned methods are applied to 2D image recognition, while the performance of traditional 3D shape recognition methods based on feature aggregation has reached a saturation point in recent years. Therefore, this paper aims to introduce knowledge distillation into the domain of multi-view recognition for the first time.

§ PROPOSED METHOD

§.§ Group Multi-view Vision Transformer

§.§.§ Overview

The overall framework of GMViT is shown in Fig. <ref>.
Initially, we utilize N cameras positioned at locations pos={pos_1, pos_2, …, pos_N}∈ℝ^N×3 to render the 3D objects, generating a corresponding set of views, VIEW={view_1, view_2, …, view_N}. Then, we employ a set of CNNs with shared weights to extract the features F_v={f_1, f_2, …, f_N}∈ℝ^N×D from all the views. Subsequently, the position information is embedded into the view features F_v together with a class token, and the result is fed into the view-level ViT. Within the view-level ViT, the position embeddings of the views are derived from their respective camera positions pos. Next, we dynamically group and pool the view features obtained from the view-level ViT together with pos. Lastly, the view features of each group are sequentially aggregated to generate the final 3D shape descriptor. This aggregation process involves the group-level ViT, max pooling, and an MLP head.

§.§.§ View-level ViT

Before inputting the CNN-extracted view features F_v into the ViT, it is necessary to perform a position embedding of these features and the class token f_cls. In contrast to existing multi-view approaches that employ ViT, we introduce a novel position embedding method, which utilizes an MLP to map the camera positions pos of the captured views to the position embeddings p_v={p_1, p_2, ..., p_N}∈ℝ^N×D of the view features: p_v=mlp(pos), where mlp denotes the MLP. Then the process of embedding position information into the view features is: p_V=[p_cls, p_1, p_2, ..., p_N]∈ℝ^(N+1)×D, F_V=[f_cls, f_1, f_2, …, f_N]∈ℝ^(N+1)×D, F_V^∗=p_V+F_V, where [·, ·] denotes the concatenation operation, F_V^∗ represents the input feature of the view-level ViT, and the class token f_cls along with its corresponding position embedding p_cls are acquired through a learning process. The position information from the cameras, distributed in 3D space, is thus incorporated into the view features, thereby enhancing the spatial information in the 3D shape descriptors. Subsequently, the features F_V^∗ are input into the view-level ViT, resulting in the interacted features F_ViT_V={f_ViT_cls, f_ViT_1, f_ViT_2, …, f_ViT_N}∈ℝ^(N+1)×D. The view-level ViT, denoted as ViT_V={ViT_v_1, ViT_v_2, …, ViT_v_L}, comprises L ViT layers arranged in series. The process is: F_ViT_V=ViT_v_L(…(ViT_v_1(F_V^∗)))

§.§.§ View grouping

To obtain 3D information at different scales, inspired by <cit.>, we group the view features F_ViT_view={f_ViT_1, f_ViT_2, …, f_ViT_N} of the view-level ViT output. First, we define a feature set G_F={G_F_1, G_F_2, ..., G_F_M} and a position set G_P={G_P_1, G_P_2, ..., G_P_M}. Subsequently, we utilize an MLP along with a sigmoid activation function to map the view features F_ViT_view to the group token set Token={t_1, t_2, ..., t_i, ..., t_N}∈ℝ^N×1: Token=sigmoid(mlp(F_ViT_view)) If the i-th view's group token satisfies: (m-1)/M≤ t_i<m/M then the feature F_ViT_i corresponding to the i-th view, along with the position pos_i of its camera, is assigned to the m-th feature group G_F_m and the position group G_P_m, respectively (1≤ m≤ M, m∈ Z). Ultimately, the feature information and position information of the views are fused independently within their respective groups. The group-level view features F_g={F_1, F_2, ..., F_M}∈ℝ^M×D are acquired by employing the maximum pooling function for aggregation, as follows: F_g={max(G_F_1), max(G_F_2), ..., max(G_F_M)} where max denotes maximum pooling.
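To make the dynamic grouping above concrete, the following is a minimal PyTorch sketch of the group-token computation, bin assignment, and group-level max pooling (our illustrative reconstruction of the preceding equations, not the authors' released code; the tensor shapes and the single-linear token MLP are assumptions):

[language=Python, caption=Sketch of the view grouping module]
import torch
import torch.nn as nn

class ViewGrouping(nn.Module):
    """Assigns the N view features output by the view-level ViT to M groups
    via learned group tokens, then max-pools the features within each group."""

    def __init__(self, dim: int, num_groups: int):
        super().__init__()
        self.num_groups = num_groups
        self.token_mlp = nn.Linear(dim, 1)  # maps each view feature to a scalar

    def forward(self, view_feats):  # view_feats: (N, D)
        # Group token t_i in (0, 1) per view: Token = sigmoid(mlp(F_ViT_view))
        tokens = torch.sigmoid(self.token_mlp(view_feats)).squeeze(-1)  # (N,)
        # Bin index m - 1 such that (m-1)/M <= t_i < m/M
        bins = torch.clamp((tokens * self.num_groups).long(), max=self.num_groups - 1)
        group_feats = []
        for m in range(self.num_groups):
            members = view_feats[bins == m]  # views assigned to group m
            if members.shape[0] > 0:  # empty groups are simply skipped here
                group_feats.append(members.max(dim=0).values)  # element-wise max pooling
        return torch.stack(group_feats), bins  # group features F_g, per-view assignments

The same bin assignment also partitions the corresponding camera positions; their per-group centroids, computed next, supply the group-level position information.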
Regarding the position coordinates within each group, we compute their center-of-mass positions, which serve as the updated position information POS_G={POS_1, POS_2, ..., POS_m, ..., POS_M}. Suppose that G_P_m={(x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_u, y_u, z_u)} represents the position set of the m-th group. In this case, POS_m=(x_m, y_m, z_m)∈ℝ^3 is computed as follows: x_m=(x_1+x_2+...+x_u)/u, y_m=(y_1+y_2+...+y_u)/u, z_m=(z_1+z_2+...+z_u)/u.

§.§.§ Group-level ViT

The group-level ViT VIT_G={VIT_g_1, VIT_g_2, ..., VIT_g_K} consists of K ViT layers arranged in series, similar to the view-level ViT. Likewise, the processing steps for the group-level features F_g using VIT_G follow a similar pattern to those for the view-level features F_v using the view-level ViT. First, the group-level position information POS_G is embedded into the group-level features F_g: P_g=mlp(POS_G)={P_1, P_2, ..., P_M}∈ℝ^M×D, P_G=[P_cls, P_1, P_2, ..., P_M]∈ℝ^(M+1)×D, F_G=[F_cls, F_1, F_2, …, F_M]∈ℝ^(M+1)×D, F_G^∗=P_G+F_G, where F_G^∗ denotes the input feature of the group-level ViT, and the class token F_cls along with its corresponding position embedding P_cls are learnable. F_VIT_G={F_VIT_cls, F_VIT_1, F_VIT_2, …, F_VIT_M}∈ℝ^(M+1)×D is obtained by applying the group-level ViT to the input feature F_G^∗∈ℝ^(M+1)×D: F_VIT_G=VIT_g_K(...(VIT_g_1(F_G^∗))) Subsequently, we concatenate the max-pooled group features F_VIT_group={F_VIT_1, F_VIT_2, …, F_VIT_M}∈ℝ^M×D with the class token F_VIT_cls∈ℝ^D, and input this concatenated representation into the MLP head to generate the final 3D shape descriptor F_D∈ℝ^D: F_D=mlp(F_VIT_cls, max(F_VIT_group))

§.§.§ Feature Classification

Once the shape descriptor F_D is obtained, it is utilized for downstream tasks. To obtain the prediction result F_pred of the model, we introduce multiple MLPs to reduce the dimensionality of the feature F_D. Additionally, between each pair of MLPs, we include BatchNorm1d and ReLU activation functions to expedite the convergence of the model: F_D^1=ReLU(Norm(mlp(F_D))), F_D^2=ReLU(Norm(mlp(F_D^1))), F_pred=mlp(F_D^2), where Norm denotes BatchNorm1d. The entire network is optimized by minimizing the cross-entropy loss between the prediction result F_pred and the ground truth.
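Putting the pieces above together, the overall forward pass can be sketched as follows. This is a simplified PyTorch illustration under assumed interfaces: the CNN backbone, the two ViT encoders, and the grouping module (assumed here to return group features together with group-centroid positions) are treated as black boxes, the descriptor MLP and classifier are collapsed into a single head, and one position-embedding MLP is shared between the two levels for brevity.

[language=Python, caption=Sketch of the end-to-end GMViT forward pass]
import torch
import torch.nn as nn

class GMViTSketch(nn.Module):
    """Simplified GMViT: CNN features plus camera-coordinate position
    embeddings -> view-level ViT -> dynamic grouping -> group-level ViT
    -> max pooling and MLP head."""

    def __init__(self, cnn, view_vit, grouping, group_vit, dim, num_classes):
        super().__init__()
        self.cnn, self.view_vit = cnn, view_vit  # e.g. ResNet18 trunk, L ViT layers
        self.grouping, self.group_vit = grouping, group_vit
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.view_cls = nn.Parameter(torch.zeros(1, dim))   # f_cls (p_cls folded in)
        self.group_cls = nn.Parameter(torch.zeros(1, dim))  # F_cls (P_cls folded in)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, num_classes))

    def forward(self, views, pos):  # views: (N, 3, 224, 224), pos: (N, 3)
        f = self.cnn(views)                          # per-view features (N, D)
        x = torch.cat([self.view_cls, f + self.pos_mlp(pos)], dim=0)
        x = self.view_vit(x)                         # (N+1, D)
        g_feats, g_pos = self.grouping(x[1:], pos)   # (M, D) and (M, 3) centroids
        y = torch.cat([self.group_cls, g_feats + self.pos_mlp(g_pos)], dim=0)
        y = self.group_vit(y)                        # (M+1, D)
        descriptor = torch.cat([y[0], y[1:].max(dim=0).values])  # F_D analogue
        return self.head(descriptor)                 # prediction logits F_pred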
§.§ GMViT-simple and GMViT-mini

In this section, we introduce two lightweight variants of GMViT, namely GMViT-simple and GMViT-mini. Recent 3D shape recognition methods typically leverage pre-trained CNN models like GoogLeNet <cit.> and ResNet <cit.>, fine-tuning them on 3D datasets for individual view feature extraction. However, these CNN models have a large number of parameters, with even the lighter ResNet18 having 11.7 million (M) parameters. Therefore, we compress the CNN structure of GMViT, specifically ResNet18. As illustrated in Table <ref>, we directly connect multiple 2D convolutional modules and pooling functions without incorporating any residual structures. Following each of these convolutional structures, BatchNorm2d and ReLU activation functions are applied. Moreover, both the view-level ViT and the group-level ViT in GMViT consist of six ViT layers, so we compress the two ViTs as well. GMViT-simple reduces the number of ViT layers to 1 and sets the hidden layer's expansion ratio to 1, whereas GMViT-mini replaces these two modules with two minimalist MLPs directly. Through this compression, GMViT-simple and GMViT-mini reduce the size of the original large model from 44.1 M to 5.5 M and 2.5 M parameters, respectively.

§.§ Knowledge distillation

In this section, we employ the knowledge distillation method to enhance the training effectiveness of the small models. This method involves using the output knowledge of the pre-trained large model as the learning target for the small model. During the distillation process, the more powerful GMViT model serves as the teacher model, while the GMViT-simple and GMViT-mini models, which are weaker but compact, act as the student models. As illustrated in Fig. <ref>, we employ a comprehensive distillation approach throughout the large model to preserve its performance to the fullest extent. The distillation process includes CNN feature distillation, view-level ViT feature distillation, group token distillation, group-level ViT feature distillation, global feature distillation, and prediction-logit distillation.

§.§.§ CNN feature distillation

Recent studies have demonstrated that distilling the output of the network's middle layers enhances training effectiveness, validating that feature-based distillation is reasonable. The CNN module utilized in GMViT primarily consists of ResNet18, which has a well-designed structure, enabling it to effectively learn view features. We employ the mean square error (MSE) between the output F_t^CNN of the teacher CNN and the output F_s^CNN of the student CNN as the distillation target: ℒ_CNN=(1/N)∑_n=1^N MSE(F_t_n^CNN, F_s_n^CNN)

§.§.§ View-level ViT feature distillation

The view-level ViT leverages deep ViTs to make the view features fully interactive and to strengthen global relationships. The Encoder1 in the student model corresponds to a simple MLP or a single-layer lightweight ViT, which has limited capability in capturing view relations. Hence, we distill the superior view-level features F_t^view learned by the teacher model into the Encoder1 of the student model. The distillation target is defined as follows: ℒ_view=(1/N)∑_n=1^N MSE(F_t_n^view, F_s_n^view) where F_s^view represents the output feature of the Encoder1.

§.§.§ Group token distillation

The grouping module of the teacher network has undergone thorough training and demonstrates effective grouping of the upper-level features F_t^view. As the previous distillations have significantly aligned F_t^view and F_s^view, the group token Token_t from the teacher network can also be transferred to the student network. Therefore, the distillation target is defined as follows: ℒ_token=MSE(Token_t, Token_s) where Token_s represents the group token of the student model.

§.§.§ Group-level ViT feature distillation

Similarly, we take the MSE between the group-level ViT output features F_t^group and the Encoder2 output features F_s^group as the optimization target: ℒ_group=(1/M)∑_m=1^M MSE(F_t_m^group, F_s_m^group)

§.§.§ Global feature distillation

We use the shape descriptor F_D in Equation <ref> as the global feature. As the global features are utilized directly in downstream tasks, distilling the global features becomes essential: ℒ_global=MSE(F_t^global, F_s^global) where F_t^global and F_s^global represent the shape descriptors of the teacher model and the student model, respectively.
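Collecting the feature-level terms above, the distillation objective so far can be sketched in a few lines (our illustration; it assumes the teacher and student expose intermediates of matching shapes — in practice a linear projection would be inserted wherever the student's feature width differs — and the logit term of the next subsection is added on top):

[language=Python, caption=Sketch of the feature-level distillation losses]
import torch.nn.functional as F

def feature_distillation_loss(teacher_out, student_out):
    """Sum of the MSE distillation terms defined above. Both arguments are
    dicts of intermediate outputs: 'cnn' (N, D), 'view' (N, D), 'token' (N, 1),
    'group' (M, D) and 'global' (D,). The mean reduction of mse_loss plays
    the role of the 1/N and 1/M averaging factors."""
    loss = 0.0
    for key in ("cnn", "view", "token", "group", "global"):
        # The teacher outputs are detached so gradients only update the student.
        loss = loss + F.mse_loss(student_out[key], teacher_out[key].detach())
    return loss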
§.§.§ Prediction-logit distillation

Hinton et al. <cit.> employed the soft label pred_t, derived from the output of the teacher model, as the distillation target for optimizing the student model. They demonstrated that this approach is more effective at enhancing model performance than traditional training methods. Consequently, we incorporate the soft label loss ℒ_soft into the prediction loss. Furthermore, we introduce the hard label loss ℒ_hard, which represents the cross-entropy loss between the prediction pred_s of the student model and the true labels. Since even the powerful teacher model cannot guarantee the correctness of all of its predictions, the true labels play a role in correcting errors when needed: ℒ_soft=KL(softmax(pred_t/T), softmax(pred_s/T)), ℒ_hard=CE(label, softmax(pred_s)), ℒ_logit=(1-λ)ℒ_soft+λℒ_hard, where KL denotes the Kullback-Leibler divergence, T denotes the distillation temperature, CE denotes the cross-entropy loss, and λ balances the two terms.

To sum up, the final distillation target is: ℒ=ℒ_CNN+ℒ_view+ℒ_token+ℒ_group+ℒ_global+ℒ_logit

§ EXPERIMENT

§.§ Datasets

ModelNet: ModelNet <cit.> contains 127,000+ 3D CAD models from 662 categories. ModelNet40 includes 12,311 objects from 40 categories (9,843/2,468 for training/testing). ModelNet10 has 4,899 objects from 10 classes (3,991/908 for training/testing). We use the Circle-12 and Dodecahedron-20 camera settings <cit.> for evaluation.

ShapeNetCore55: ShapeNetCore55 <cit.> is a subset of ShapeNet, containing 51,300 3D objects from 55 categories and 203 subcategories. It is split into a 7:1:2 ratio for training, validation, and testing. We evaluate on the NORMAL version, where the 3D objects are aligned.

MCB: MCB <cit.> is a 3D machine part dataset with two versions. MCB-A has 58,696 objects from 68 categories, while MCB-B has 18,038 objects from 25 categories of MCB-A. Objects are sourced from TraceParts, 3D Warehouse, and GrabCAD, without alignment.

§.§ Implementation details

Each 3D object is rendered into 224 × 224 2D images. GMViT's CNN backbone is based on ResNet18 <cit.>, excluding the last fully connected layer. Both the view-level ViT and group-level ViT have 6 layers and 8 attention heads each. The grouping module is set with 8 and 12 groups for the Circle-12 and Dodecahedron-20 settings, respectively. GMViT-simple and GMViT-mini use the same grouping module settings as the large model. A Dropout layer with a 0.5 dropout rate is added to address overfitting. The model is trained using the SGD optimizer with 1e-4 momentum and weight decay for 100 epochs. The learning rate starts at 0.1 and decreases to 0.01 over 50 epochs with cosine annealing. Different strategies are used for training the large and small models: the large model's CNN is pre-trained on ImageNet before fine-tuning on the 3D shape dataset, while the small models' CNNs are trained directly. The distillation temperature is set to 5.

§.§ Experiments on ModelNet

In this section, we present the classification and retrieval performance of the proposed model on the ModelNet dataset. To validate the effectiveness of our proposed method, we compare it with a wide range of methods, including voxel-based (3D ShapeNets <cit.> and VRN-Ensemble <cit.>), point-based (PointNet <cit.>, PointNet++ <cit.>, GeomGCNN <cit.>, point2vec <cit.>, and PointMLP <cit.>), multimodal-based (PVNet <cit.> and PVRNet <cit.>), and view-based (MVCNN <cit.>, GVCNN <cit.>, GIFT <cit.>, MHBN <cit.>, 3D2SeqViews <cit.>, Ma et al. <cit.>, DAN <cit.>, RelationNet <cit.>, CAR-Net <cit.>, RotationNet <cit.>, and View-GCN <cit.>) approaches. The primary evaluation metrics for classification are overall accuracy (OA) and mean class accuracy (mA).
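For reference, these two metrics can be computed as follows (a standard sketch, not tied to the authors' evaluation code):

[language=Python, caption=Sketch of the OA and mA metrics]
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """OA: fraction of correctly classified objects over the whole test set.
    mA: per-class accuracies averaged with equal weight for every class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    oa = float((y_true == y_pred).mean())
    per_class = [float((y_pred[y_true == c] == c).mean())
                 for c in range(num_classes) if (y_true == c).any()]
    return oa, float(np.mean(per_class))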
For the retrieval task, the shape descriptors are obtained by directly utilizing the 256-dimensional features from the classifier's penultimate fully connected layer. Each object in the testing set is treated as a query, and a KD-Tree is employed to rank the similarity of its feature to the remaining object features. The mean average precision (mAP) is subsequently calculated based on this ranking.

§.§.§ Classification results

Table <ref> shows the models' classification performance. Generally, view-based methods outperform point-based methods significantly. Our GMViT achieves optimal performance across all indicators on both datasets under the Dodecahedron-20 setting. It demonstrates an approximately 2% improvement in OA on both datasets over the best voxel-based model, VRN-Ensemble <cit.>, and achieves classification performance comparable to the current leading view-based method, CAR-Net <cit.>; both methods consider the spatial relationship of views, albeit from different perspectives. Our method also outperforms the other methods under the Circle-12 setting. DAN <cit.> replaces parallel multi-head self-attention with deep self-attention to enhance the fusion of significant 3D features between views. In contrast, our approach incorporates the spatial information of views in the position embedding part of GMViT and extends the consideration beyond the relationship between views to encompass the relationship between groups. 3D2SeqViews <cit.> treats the views as a sequence and captures its dependencies through hierarchical attention aggregation. However, this approach largely overlooks the positional relationships of views in 3D space. Unlike 3D2SeqViews, we utilize the rendering coordinates of the views as position embeddings, enabling us to convert the 1D sequence relations into their corresponding 3D spatial relations. GVCNN <cit.> incorporates group pooling before global pooling, resulting in a reduced pooling scale and effectively mitigating feature loss. In comparison to GVCNN, we introduce two types of ViT to establish the relationships between view-level and between group-level features before group pooling and global pooling, respectively. This approach further minimizes the feature loss attributed to pooling.

In addition, we evaluate the performance of the small models GMViT-simple and GMViT-mini. Directly training the small models with hard labels leads to unsatisfactory classification results due to the performance degradation caused by the simplified networks. However, the small models trained using our proposed knowledge distillation method are optimized much more effectively. Significantly, the student models achieve classification performance on the ModelNet10 dataset that is comparable to the current state-of-the-art method, CAR-Net.

§.§.§ Retrieval results

Regarding shape retrieval, our GMViT also demonstrates outstanding performance. While GMViT and CAR-Net achieve comparable classification results under the Dodecahedron-20 setting, GMViT outperforms CAR-Net in retrieval with improvements of 2.53% and 1.51% on the respective datasets. Conversely, CAR-Net's <cit.> retrieval performance is not superior to that of all other methods under the Circle-12 setting. Remarkably, our GMViT outperforms all other methods on ModelNet10, including those evaluated under the Dodecahedron-20 setting, while maintaining its superiority under the same setting.
This provides evidence of the superior ability of our proposed GMViT to learn effective 3D shape descriptors. Similarly, GMViT-simple and GMViT-mini demonstrate substantial and comprehensive improvements in retrieval performance following the distillation process. Particularly noteworthy, GMViT-simple and GMViT-mini outperform all other large models on the ModelNet10 dataset when evaluated under the Dodecahedron-20 setting. This remarkable achievement can primarily be attributed to the exceptional retrieval performance that the student models inherit from the teacher model through the distillation process. To better show the similarity of the 3D shape descriptors learned by each model, we plot them in Fig. <ref>.

§.§ Experiments on ShapeNetCore55

To comprehensively evaluate the shape retrieval performance of GMViT, we conduct experiments on the ShapeNetCore55 dataset. Consistent with the experimental setup described in <cit.>, we limit the retrieval to a maximum of 1,000 shapes per query. In the retrieval process, we utilize multiple views as model input under the Dodecahedron-20 setting and employ a KD-Tree to generate the retrieval score ranking for each shape. We report indicators under both the “microALL" and “macroALL" settings: “microALL" represents a weighted average based on the category size of the samples, while “macroALL" does not consider such weighting. The retrieval results, sourced from <cit.>, are presented in Table <ref>. Our method's performance is only slightly lower than that of the competition-winning method, RotationNet, in terms of the NDCG indicator under the “microALL" setting. In the retrieval task, the P@N and R@N indicators exhibit a trade-off relationship, and our method achieves a better balance between them than the runner-up method, DLAN. Compared with the recently published method MVCNN(VAM+IAM) <cit.>, our GMViT demonstrates greater advantages in general.

§.§ Experiments on MCB

In this section, we conduct additional experiments on the MCB-A dataset to further validate the effectiveness of the proposed method. The experiments encompass both 3D shape classification and retrieval tasks. The primary comparison methods include PointCNN <cit.>, PointNet <cit.>, SpiderCNN <cit.>, MVCNN <cit.>, RotationNet <cit.>, DLAN <cit.>, and VRN <cit.>. The experimental results for all the aforementioned methods are obtained from <cit.>.

§.§.§ Classification results

The classification results of the models on MCB-A are presented in Table <ref>. Among the models, RotationNet <cit.>, an advanced multi-view approach, achieves the highest classification results, with our GMViT ranking second. Despite being a view-based method, MVCNN <cit.> exhibits the lowest performance. In contrast, our smaller models, GMViT-simple and GMViT-mini, outperform MVCNN significantly and demonstrate further improvement through distillation. This finding validates that, in view-based methods, the quality of the multi-view feature fusion module holds greater significance than that of the single-view feature extraction module.

§.§.§ Retrieval results

The retrieval results of the models are presented in Table <ref>. Consistent with <cit.>, we evaluate model performance using F1@N, MAP, and NDCG as evaluation indicators. Additionally, we introduce the “microALL+macroALL" metric, which represents the average of the microALL and macroALL evaluations, providing a comprehensive assessment. It can be seen that our GMViT achieves the best results in general.
Despite its slight superiority in the classification task, RotationNet <cit.> does not exhibit superior performance in retrieval. This sensitivity is attributed to the presence of numerous unaligned shapes in MCB-A, which significantly affects RotationNet's performance. PointCNN <cit.> and PointNet++ <cit.>, being inherently invariant to point cloud permutation, attain optimal results in <cit.>. Furthermore, our small models, GMViT-simple and GMViT-mini, exhibit impressive performance even without knowledge distillation, which is further enhanced through the distillation process. Remarkably, knowledge distillation results in GMViT-simple surpassing GMViT in MAP under the macroALL evaluation. To better observe the similarity of the 3D shape descriptors, we plot them in Fig. <ref>.

§.§ Analysis of GMViT

In this section, we analyze the various parameters and components of GMViT. All experiments are carried out under the Dodecahedron-20 setting.

§.§.§ Position embedding

We compare the proposed position embedding method with other approaches, and the results are presented in Table <ref>. Applying the traditional position embedding (PE) improves the OA and mA of the model to some extent compared to the model without PE. Furthermore, utilizing the camera position as the PE leads to the highest classification performance, improving it by at least 1% over the traditional PE. These findings highlight the significant loss of valuable information when the positional relationships among views are disregarded, with the spatial relationships between views containing more crucial information than the sequence relationships.

§.§.§ Number of ViT layers

We test the classification performance of GMViT while varying the number of stacked ViT layers; the view-level ViT and group-level ViT always consist of an equal number of layers. The classification results on ModelNet are presented in Fig. <ref> (a) and (b). The results demonstrate a consistent increase in accuracy as the number of layers increases from 1 to 6, suggesting that a greater number of layers promotes enhanced interaction between view-level and group-level features. Nevertheless, the accuracy declines with further increases in the number of layers, as the model performance peaks at 6 layers.

§.§.§ Number of attention heads in ViT

We analyze the number of self-attention heads in GMViT. The experimental results shown in Fig. <ref> (a) demonstrate that the OA of the model falls below 97.5% on ModelNet40 when using 1 or 2 attention heads, surpasses 97.5% with 4 attention heads, and achieves its peak performance with 8 attention heads. This indicates that distinct self-attention heads effectively capture diverse semantic information, and the aggregation of multiple heads enriches the final 3D representation. Nonetheless, setting the number of heads to 16 leads to a decrease in model accuracy, possibly due to information redundancy arising from an excessive number of attention points. Fig. <ref> (b) demonstrates the same accuracy trend for the model on ModelNet10.

§.§.§ Number of groups in the grouping module

We examine how GMViT's performance on ModelNet changes with the number of groups in the grouping module. Fig. <ref> (a) and (b) illustrate consistent increases in overall accuracy (OA) as the number of groups increases from 2 to 12, demonstrating that finer groupings enhance the model's performance.
While it is not guaranteed that every group will be assigned features when the divisions are numerous, a larger number of groups refines the boundaries of each group. Across models employing different grouping modules, objects with the same view group tokens may yield different groupings due to variations in the granularity of the group boundaries.

§.§.§ Components of GMViT

Finally, we conduct an ablation analysis on the view-level ViT, the grouping module, and the group-level ViT of GMViT. The classification results of various GMViT versions on ModelNet40 are presented in Table <ref>. In the absence of the grouping module, group-level features are nonexistent, making the group-level ViT equivalent to the view-level ViT; consequently, models with fewer ViT layers outperform those with more layers. The absence of the view-level ViT has the most detrimental impact on the model's classification performance. This can be attributed to the lack of information interaction between the view features generated by the CNN, which are directly grouped and pooled within the grouping module, leading to significant information loss. This confirms the indispensable role of all three components in GMViT.

§.§ Analysis of knowledge distillation

§.§.§ Compression effect

We analyze the impact of model compression. To ensure a fair comparison, all models are tested on a single NVIDIA RTX 3090 GPU. The batch size is set to 8 to account for system memory variations, and the experiments are conducted on the ModelNet40 testing set. Inference speed is measured in objects per second, calculated from the time taken for a model to classify the objects within a single epoch. The results are shown in Table <ref>. Despite building on the larger VGG-M backbone and having the highest parameter count, MVCNN achieves the lowest classification and retrieval results. GMViT, although not having the largest parameter size, outperforms most methods in both inference speed and performance. Notably, GMViT-simple and GMViT-mini, the compressed versions of GMViT, reduce the parameter size by 8 and 17.6 times, respectively, while maintaining at least 96% and 90% of the classification and retrieval performance through knowledge distillation. Our small models exhibit approximately 1.5 times faster inference than the large model, a modest improvement considering the significant reduction in parameter size. The time distribution analysis in Fig. <ref> reveals that the majority of the processing time is spent in the "Group" and "Else" components, likely due to the dominance of looping statements in these sections. The limited increase in inference speed can thus be attributed to the shared use of the same grouping modules in both the large and small models.

§.§.§ Distillation targets

The inclusion or exclusion of a distillation loss directly signifies the presence or absence of the corresponding distillation target. As shown in Table <ref>, the model's performance keeps improving as we gradually add distillation targets. These incremental improvements validate the rationale behind each target: CNN features provide basic view representations that transfer lower-level knowledge; view-level ViT outputs contain complex relational information that is difficult to learn alone, providing sophisticated feature distillation; group-level ViT outputs further enrich the relational information transfer; and group tokens transfer crucial grouping knowledge, demonstrating the value of distilling information-rich intermediate outputs.
Global features provide holistic supervision, in line with distilling the most influential outputs, and logit distillation gives end-to-end guidance. Additionally, the targets cover both low-level view features and high-level shape representations, enabling multi-scale knowledge transfer. The substantial improvements from the view-level and group-level ViT targets align with the strategy of distilling complex, information-rich module outputs, and the improvements from the global features validate distilling influential intermediate results. In conclusion, the multifaceted targets effectively transfer knowledge at different levels of abstraction and complexity, leading to optimized student learning, and the analysis demonstrates principled distillation target selection.

§.§.§ Distillation temperature

The impact of various temperatures on the distillation effect is presented in Table <ref>. At a temperature value of 1, the student model achieves an OA of only 88.53%, which is inferior to the performance of the model trained without distillation. This suggests that at this temperature the soft labels entirely preserve the teacher model's output distribution, making it challenging for the student model to learn its complex details. In contrast, the classification performance of the student model peaks when the temperature is raised to 5, implying that higher temperatures facilitate the student model's learning from the teacher model. Nevertheless, as the temperature increases further, the performance of the student model deteriorates, possibly owing to the over-smoothing of the soft labels caused by an excessively high temperature.

§.§.§ Coefficients of soft and hard labels

To enhance the training of the student model, we perform experiments to determine the optimal label coefficients. As shown in Table <ref>, we vary the coefficients of the soft label and the hard label from 0 to 1 while ensuring their sum is 1. The model attains optimal classification results with coefficients of 0.7 for the soft label and 0.3 for the hard label. Setting the hard label coefficient to 0 degrades the model's classification performance, suggesting that the teacher model's conclusions are not always reliable during the student model's learning process and that the hard label is necessary to rectify errors when required. Similarly, when the soft label coefficient is set to 0, the model's performance is diminished, indicating that the soft label encompasses more meaningful information than the hard label.

§ CONCLUSION

In this paper, we propose a method called Group Multi-view Vision Transformer (GMViT) for 3D shape recognition. To strengthen view relationships, we utilize the view-level ViT to foster interaction among view-level features. To capture information at varying scales, we employ the grouping module to aggregate low-level view-level features into high-level group-level features. Additionally, we employ the group-level ViT to fuse the group-level features and obtain the final 3D shape descriptor. Notably, the introduced spatial encoding of camera coordinates as position embeddings equips the model with valuable view spatial information. GMViT exhibits outstanding performance on multiple 3D shape recognition datasets. Furthermore, we pioneer the application of knowledge distillation to multi-view 3D shape recognition, enabling model compression while preserving performance. The distillation incorporates complementary outputs to transfer multi-scale knowledge.
This systematic approach effectively transfers knowledge across different levels of abstraction, as demonstrated by substantial improvements. While promising, some limitations exist in distillation speed-up. Future work can address this and extend the method to other 3D tasks.

maturana2015voxnet Daniel Maturana and Sebastian Scherer, “Voxnet: A 3d convolutional neural network for real-time object recognition,” in 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, 2015, pp. 922–928.wu20153d Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao, “3d shapenets: A deep representation for volumetric shapes,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1912–1920.brock2016generative Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston, “Generative and discriminative voxel modeling with convolutional neural networks,” arXiv preprint arXiv:1608.04236, 2016.wang2019dynamic Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon, “Dynamic graph cnn for learning on point clouds,” Acm Transactions On Graphics (tog), vol. 38, no. 5, pp. 1–12, 2019.zhou2021adaptive Haoran Zhou, Yidan Feng, Mingsheng Fang, Mingqiang Wei, Jing Qin, and Tong Lu, “Adaptive graph convolution for point cloud analysis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 4965–4974.xu2018spidercnn Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao, “Spidercnn: Deep learning on point sets with parameterized convolutional filters,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 87–102.li2018pointcnn Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen, “Pointcnn: Convolution on x-transformed points,” Advances in neural information processing systems, vol. 31, 2018.qi2017pointnet Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 652–660.qi2017pointnet++ Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” Advances in neural information processing systems, vol. 30, 2017.guo2021pct Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R Martin, and Shi-Min Hu, “Pct: Point cloud transformer,” Computational Visual Media, vol. 7, no. 2, pp. 187–199, 2021.ma2022rethinking Xu Ma, Can Qin, Haoxuan You, Haoxi Ran, and Yun Fu, “Rethinking network design and local geometry in point cloud: A simple residual mlp framework,” arXiv preprint arXiv:2202.07123, 2022.zeid2023point2vec Karim Abou Zeid, Jonas Schult, Alexander Hermans, and Bastian Leibe, “Point2vec for self-supervised representation learning on point clouds,” arXiv preprint arXiv:2303.16570, 2023.srivastava2021exploiting Siddharth Srivastava and Gaurav Sharma, “Exploiting local geometry for feature and graph construction for better 3d point cloud processing with graph neural networks,” in 2021 IEEE International conference on robotics and automation (ICRA). IEEE, 2021, pp.
12903–12909.han20193d2seqviews Zhizhong Han, Honglei Lu, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, Junwei Han, and CL Philip Chen, “3d2seqviews: Aggregating sequential views for 3d global feature learning by cnn with hierarchical attention aggregation,” IEEE Transactions on Image Processing, vol. 28, no. 8, pp. 3986–3999, 2019.ma2018learning Chao Ma, Yulan Guo, Jungang Yang, and Wei An, “Learning multi-view representation with lstm for 3-d shape recognition and retrieval,” IEEE Transactions on Multimedia, vol. 21, no. 5, pp. 1169–1182, 2018.feng2018gvcnn Yifan Feng, Zizhao Zhang, Xibin Zhao, Rongrong Ji, and Yue Gao, “Gvcnn: Group-view convolutional neural networks for 3d shape recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 264–272.han2018seqviews2seqlabels Zhizhong Han, Mingyang Shang, Zhenbao Liu, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, Junwei Han, and CL Philip Chen, “Seqviews2seqlabels: Learning 3d global features via aggregating sequential views by rnn with attention,” IEEE Transactions on Image Processing, vol. 28, no. 2, pp. 658–672, 2018.kanezaki2018rotationnet Asako Kanezaki, Yasuyuki Matsushita, and Yoshifumi Nishida, “Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5010–5019.wei2020view Xin Wei, Ruixuan Yu, and Jian Sun, “View-gcn: View-based graph convolutional network for 3d shape analysis,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1850–1859.xu2021multi Yong Xu, Chaoda Zheng, Ruotao Xu, Yuhui Quan, and Haibin Ling, “Multi-view 3d shape recognition via correspondence-aware deep learning,” IEEE Transactions on Image Processing, vol. 30, pp. 5299–5312, 2021.yu2018multi Tan Yu, Jingjing Meng, and Junsong Yuan, “Multi-view harmonized bilinear network for 3d object recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 186–194.yang2019learning Ze Yang and Liwei Wang, “Learning relationships for multi-view 3d object recognition,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7505–7514.lin2022multi Dongyun Lin, Yiqun Li, Yi Cheng, Shitala Prasad, Tin Lay Nwe, Sheng Dong, and Aiyuan Guo, “Multi-view 3d object retrieval leveraging the aggregation of view and instance attentive features,” Knowledge-Based Systems, vol. 247, pp. 108754, 2022.su2015multi Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller, “Multi-view convolutional neural networks for 3d shape recognition,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 945–953.nie2021dan Weizhi Nie, Yue Zhao, Dan Song, and Yue Gao, “Dan: Deep-attention network for 3d shape recognition,” IEEE Transactions on Image Processing, vol. 30, pp. 4371–4383, 2021.graves2012long Alex Graves, “Long short-term memory,” Supervised sequence labelling with recurrent neural networks, pp. 37–45, 2012.hamdi2021mvtn Abdullah Hamdi, Silvio Giancola, and Bernard Ghanem, “Mvtn: Multi-view transformation network for 3d shape recognition,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 
1–11.dosovitskiy2020image Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.xu2021multiple Lixiang Xu, Lu Bai, Jin Xiao, Qi Liu, Enhong Chen, Xiaofeng Wang, and Yuanyan Tang, “Multiple graph kernel learning based on gmdh-type neural network,” Information Fusion, vol. 66, pp. 100–110, 2021.xu2021deep Lixiang Xu, Lu Bai, Xiaoyi Jiang, Ming Tan, Daoqiang Zhang, and Bin Luo, “Deep rényi entropy graph kernel,” Pattern Recognition, vol. 111, pp. 107668, 2021.xu2019deeply Yong Xu, Chaoda Zheng, Ruotao Xu, and Yuhui Quan, “Deeply exploiting long-term view dependency for 3d shape recognition,” IEEE Access, vol. 7, pp. 111678–111691, 2019.jin2021prema Jiongchao Jin, Huanqiang Xu, Zehao Tang, Pengliang Ji, and Zhang Xiong, “Prema: Part-based recurrent multi-view aggregation network for 3d shape retrieval,” in 2021 2nd International Conference on Computer Science and Management Technology (ICCSMT). IEEE, 2021, pp. 311–318.chen2021mvt Shuo Chen, Tan Yu, and Ping Li, “Mvt: Multi-view vision transformer for 3d object recognition,” arXiv preprint arXiv:2110.13083, 2021.lin2023multi Dongyun Lin, Yiqun Li, Yi Cheng, Shitala Prasad, Aiyuan Guo, and Yanpeng Cao, “Multi-range view aggregation network with vision transformer feature fusion for 3d object retrieval,” IEEE Transactions on Multimedia, 2023.hinton2015distilling Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.yang2022vitkd Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, and Yu Li, “Vitkd: Practical guidelines for vit feature knowledge distillation,” arXiv preprint arXiv:2209.02432, 2022.touvron2021training Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou, “Training data-efficient image transformers & distillation through attention,” in International conference on machine learning. PMLR, 2021, pp. 10347–10357.zhang2022minivit Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan, “Minivit: Compressing vision transformers with weight multiplexing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12145–12154.wang2017cnn Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, and Xin Tong, “O-cnn: Octree-based convolutional neural networks for 3d shape analysis,” ACM Transactions On Graphics (TOG), vol. 36, no. 4, pp. 1–11, 2017.lin2020convolution Zhi-Hao Lin, Sheng-Yu Huang, and Yu-Chiang Frank Wang, “Convolution in the cloud: Learning deformable kernels in 3d graph convolution networks for point cloud analysis,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 1800–1809.zhao2021point Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun, “Point transformer,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 16259–16268.huang2015bidirectional Zhiheng Huang, Wei Xu, and Kai Yu, “Bidirectional lstm-crf models for sequence tagging,” arXiv preprint arXiv:1508.01991, 2015.wang2022multi Wenju Wang, Yu Cai, and Tao Wang, “Multi-view dual attention network for 3d object recognition,” Neural Computing and Applications, vol. 34, no. 4, pp. 
3201–3212, 2022.szegedy2015going Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.he2016deep Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.chang2015shapenet Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al., “Shapenet: An information-rich 3d model repository,” arXiv preprint arXiv:1512.03012, 2015.kim2020large Sangpil Kim, Hyung-gun Chi, Xiao Hu, Qixing Huang, and Karthik Ramani, “A large-scale annotated mechanical components benchmark for classification and retrieval tasks with deep neural networks,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16. Springer, 2020, pp. 175–191.you2018pvnet Haoxuan You, Yifan Feng, Rongrong Ji, and Yue Gao, “Pvnet: A joint convolutional network of point cloud and multi-view for 3d shape recognition,” in Proceedings of the 26th ACM international conference on Multimedia, 2018, pp. 1310–1318.you2019pvrnet Haoxuan You, Yifan Feng, Xibin Zhao, Changqing Zou, Rongrong Ji, and Yue Gao, “Pvrnet: Point-view relation neural network for 3d shape recognition,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 9119–9126.bai2016gift Song Bai, Xiang Bai, Zhichao Zhou, Zhaoxiang Zhang, and Longin Jan Latecki, “Gift: A real-time and scalable 3d shape search engine,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 5023–5032.savva2017large Manolis Savva, Fisher Yu, Hao Su, Asako Kanezaki, Takahiko Furuya, Ryutarou Ohbuchi, Zhichao Zhou, Rui Yu, Song Bai, Xiang Bai, et al., “Large-scale 3d shape retrieval from shapenet core55: Shrec'17 track,” in Proceedings of the workshop on 3D object retrieval, 2017, pp. 39–50.furuya2016deep Takahiko Furuya and Ryutarou Ohbuchi, “Deep aggregation of local 3d geometric features for 3d model retrieval.,” in BMVC, 2016, vol. 7, p. 8.he2020improved Xinwei He, Song Bai, Jiajia Chu, and Xiang Bai, “An improved multi-view convolutional neural network for 3d object retrieval,” IEEE Transactions on Image Processing, vol. 29, pp. 7917–7930, 2020.
http://arxiv.org/abs/2312.16477v2
{ "authors": [ "Lixiang Xu", "Qingzhe Cui", "Richang Hong", "Wei Xu", "Enhong Chen", "Xin Yuan", "Chenglong Li", "Yuanyan Tang" ], "categories": [ "cs.CV", "cs.AI", "68", "I.2.10" ], "primary_category": "cs.CV", "published": "20231227085241", "title": "Group Multi-View Transformer for 3D Shape Analysis with Spatial Encoding" }
Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection *A Challenge and Opportunity for Source Code Research: Through the Lens of Code Clone Detection using Python and Java. Source code can be found here <https://github.com/Ataago-AI/clone-detection>

Mohammed Ataaur Rahaman, School of Electronic Engineering and Computer Science, Queen Mary University, London, UK, [email protected]

Julia Ive, School of Electronic Engineering and Computer Science, Queen Mary University, London, UK, [email protected]

Source code clone detection is the task of finding code fragments that have the same or similar functionality, but may differ in syntax or structure. This task is important for software maintenance, reuse, and quality assurance <cit.>. However, code clone detection is challenging, as source code can be written in different languages, domains, and styles. In this paper, we argue that source code is inherently a graph, not a sequence, and that graph-based methods are more suitable for code clone detection than sequence-based methods. We compare the performance of two state-of-the-art models, a sequence-based model <cit.> and a graph-based model <cit.>, on two benchmark data-sets: BCB <cit.> and POOLC <cit.>. We show that the graph-based model outperforms the sequence-based model on both data-sets, especially on cross-lingual code clones. To the best of our knowledge, this is the first work to demonstrate the superiority of graph-based methods over sequence-based methods on cross-lingual code clone detection.

graph-based modeling, sequence modeling, AST, DFG, CPG, source code clone detection, graph transformers, attention, GNN

§ INTRODUCTION

Source code is a formal language that describes the logic and behavior of a software system <cit.>. Source code can be written in different programming languages, domains, and styles, but it always has a common characteristic, i.e., it is structured as a graph, not a sequence. A graph is a mathematical object that consists of nodes and edges, where nodes represent entities and edges represent relations <cit.>. A graph can capture both the syntactic and semantic information of the source code, such as the tokens, statements, control flow, data flow, and program dependence, to name a few. To illustrate that source code is a graph, not a sequence, we take the simple task of identifying whether a given pair of source code fragments is a clone or not. Code clone detection is the task of finding code fragments that are identical or similar in functionality, but may differ in syntax or structure. Code clone detection is important for software engineering, as it can help to improve code quality, reduce maintenance costs, and prevent bugs and vulnerabilities <cit.>. However, code clone detection is also challenging, as it requires a deep understanding of the semantics and logic of the code, which may vary across different programming languages, domains, and styles.

Existing methods for code clone detection can be broadly classified into two categories: sequence-based and graph-based. Sequence-based methods rely on textual similarity of the code, such as token sequences.
Graph-based methods rely on structural similarity of the code, such as Abstract Syntax Trees (ASTs), control flow graphs (CFGs) or Code Property Graphs (CPGs). Sequence-based methods are fast and scalable, but they may fail to detect clones that have different syntax or structure. Graph-based methods are more accurate and robust, but they may be slow and complex, especially for large-scale or cross-language code clone detection. A Python source code clone pair is presented in Listings [<ref>, <ref>]. The two code snippets have the same semantic behavior: they print 'A' or 'a' depending on the case of the input. However, they differ in their syntactic forms. Further such examples can be viewed in Appendix <ref>.
[language=Python, caption=Python code 1, linewidth=.99, label=motivational_expample_1]
S = input()
if S.isupper():
    print("A")
else:
    print("a")
[language=Python, caption=Python code 2, linewidth=.99, label=motivational_expample_2]
alp = input()
if alp == alp.upper():
    print("A")
elif alp == alp.lower():
    print("a")
In this paper, we argue that source code is naturally a graph, not a sequence, and that graph-based methods are more suitable for code clone detection than sequence-based methods. We compare the performance of sequence-based and graph-based methods for code clone detection on two benchmark data-sets: BCB <cit.> and PoolC <cit.>. BCB is a data-set of Java code snippets, whereas PoolC is a data-set of Python code snippets. We use CodeBERT <cit.> as a representative sequence-based modelling approach, and CodeGraph <cit.> as a representative graph-based modelling approach. CodeBERT is a bimodal pre-trained model for programming language (PL) and natural language (NL) that learns general-purpose representations that support downstream NL-PL applications. CodeGraph is a graph-based model for semantic code clone detection based on a Siamese graph-matching network that uses attention mechanisms to learn code semantics from DFGs and CPGs. We conduct various experiments to evaluate the accuracy, recall, precision, and F1-score of CodeBERT and CodeGraph in three experimental setups: (i) in-domain static source code analysis, (ii) cross-lingual generalization and semantic extraction, and (iii) zero-shot source code clone classification. We show that CodeGraph outperforms CodeBERT on all experimental setups in terms of all metrics.
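As a toy illustration of why a purely lexical view struggles with pairs like Listings <ref> and <ref>, the following Python sketch computes a simple token-level Jaccard overlap between the two snippets. This is a minimal sketch for intuition only: the regular-expression lexer and the overlap measure are our own simplifications and are not the tokenizer or the models evaluated in our experiments.
[language=Python]
import re

def lex_tokens(code: str) -> set:
    # Crude lexer: identifiers/keywords, integer literals, and punctuation.
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

code1 = 'S = input()\nif S.isupper():\n    print("A")\nelse:\n    print("a")\n'
code2 = ('alp = input()\nif alp == alp.upper():\n    print("A")\n'
         'elif alp == alp.lower():\n    print("a")\n')

a, b = lex_tokens(code1), lex_tokens(code2)
# The overlap stays well below 1 even though the two snippets are
# semantically identical; a structural (graph) view of the two parse
# trees exposes the shared control-flow shape much more directly.
print(f"Jaccard token overlap: {len(a & b) / len(a | b):.2f}")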
The main contributions of this paper are as follows:
* To the best of our knowledge, we are the first to demonstrate the superiority of graph-based methods over sequence-based methods for multilingual static source code analysis tasks, such as clone detection, by exploiting the natural graph structure of source code across programming languages.
* We provide novel insights on the generalization and cross-domain understanding of graph-based models, compared to sequence-based models, for source code analysis, as they leverage both the syntactic and semantic features of source code in various cross-domain settings.
* We show how mixing cross-lingual data-sets can improve the overall performance of the graph-based model by 4.5%, as it can learn from the commonalities and differences of different programming languages.
* We focus on the under-explored Python clone detection data-set PoolC, along with the benchmark Java data-set BCB, and draw parallel comparisons on both of the data-sets.
* We provide highly efficient and scalable code for standard CPG representation generation using Tree-sitter <cit.> and Microsoft's DFG <cit.>, along with the re-implemented code for the sequence-based and graph-based models.
The rest of this paper is organized as follows: Section <ref> reviews the related work on source code clone detection. Section <ref> introduces the various code representations, such as the abstract syntax tree and the data flow graph, and provides background knowledge on graph neural networks and natural language processing. Section <ref> describes how the experiments are carried out and lays down the research questions. Section <ref> presents the results and their discussion. Section <ref> discusses the limitations and future research directions. Section <ref> concludes the paper. § RELATED WORK§.§ What is static source code analysis?Static code analysis is a valuable technique for improving software quality and security without actually compiling or running the code. It can find errors that are hard to detect at run time, improve the quality and maintainability of the code, and reduce the cost and time of testing and debugging. It is essentially a form of white-box testing. According to <cit.>, the cost of fixing a defect increases exponentially as it moves from the coding phase to the testing phase to the maintenance phase. Therefore, static tools that analyse source code and help fix it as early as possible save a lot of resources and effort.§.§ What are the applications of static code analysis?There are various applications of static code analysis. Security vulnerability detection is one of the major ones: it can help developers identify and fix security vulnerabilities before they are exploited by attackers, as extensively discussed by <cit.>. Another important area of static code analysis is helping developers find inefficient algorithms and improve the resource utilization, response time, and overall throughput of the software. Clone detection is another application: it can help identify functionally similar code fragments, which may indicate code duplication, plagiarism or reuse. This can improve the quality, maintainability and security of the code by eliminating redundant, inconsistent or outdated code <cit.>.
§.§ What is source code clone detection?Source code clone detection is the process of finding code fragments that have similar functionality or structure, which could indicate that the code is duplicated, plagiarised or reused, whether purposefully, negligently or accidentally by a developer, as noted by <cit.>. Code clones can be very harmful to the quality of the entire code base <cit.>. A broad categorization of the various types of code clones is given by <cit.>.
* Textual Similarity * Type I: Changes in white-spaces, comments, layouts.* Type II: Renaming of variable names, or changes in types and identifiers. * Type III: Addition or removal of statements. * Functional Similarity * Type IV: Complete change in syntax, but functionally the same behavior.
§.§ Types of clone detection approachesAccording to <cit.>, there are multiple techniques to detect source code clones: text-based, token-based, tree-based, program-dependency-graph-based, metrics-based, or hybrid approaches. Here we compare a token-based with a tree-based clone detection approach. To detect semantically identical code clones, a model should not just rely on differences of syntax, but also understand the semantics of the structure. This becomes hard if we pass the source code to the model as a sequence rather than as a tree, such as an Abstract Syntax Tree, which naturally holds the syntactic information of the source code. §.§ Sequence based modelingThere have been various sequence-based modelling approaches used for source code clone detection, such as CodeBERT <cit.>, UNIXCODER <cit.>, and ContraBERT <cit.>. In sequence modeling, the source code is tokenized into pieces of (sub)words. These tokenized pieces of words are processed in sequence by the model to understand a fragment of code; the model learns the semantics by consuming the code in a sequential manner. We use CodeBERT <cit.>, a bimodal pre-trained model for programming language (PL) and natural language (NL) that learns general-purpose representations supporting downstream NL-PL applications such as natural language code search and code documentation generation. CodeBERT is developed with a Transformer-based neural architecture, and is trained with a hybrid objective function that incorporates the pre-training task of replaced token detection (detecting plausible alternatives sampled from generators), alongside masked language modelling. In this study, we use CodeBERT as the pre-trained backbone of our sequence model for source code clone detection. §.§ Graph based modelingOn the other side, for clone detection as a graph modelling approach, we have models like TBCCD <cit.>, FA-AST <cit.>, HOLMES <cit.>, DG-IVHFS <cit.>, and CodeGraph4CCDetector <cit.>. These graph models first construct a tree or a graph, such as an abstract syntax tree or a control flow graph, from the source code. This helps to retain the structural information of the code, regardless of code being moved from its location or variables being renamed. This should ideally help the model concentrate more on the semantics rather than on structural learning, as the structure is already baked into the representation. We use CodeGraph4CCDetector <cit.> as our graph-based model, from here on referred to as CodeGraph. This model is reported to have state-of-the-art results on the BCB <cit.> data-set. It is a Siamese graph-matching network which takes in two source code snippets and outputs a similarity score between them.
The input to CodeGraph is the Code Property Graph, which is essentially a graph consisting of various nodes and edges. This helps the network capture the source code's syntactic and semantic information. CodeGraph uses an attention mechanism at the node level to extract a node representation, before combining it into a graph-level representation. The major advantage of the graph level over the sequence level is that it can handle code snippets of different lengths and structures, as long as the hardware memory can load them. § METHODS This section describes the methods and models that we employ for our experiments on the sequence and graph representations of source code. We organize this section into four parts. First, we present how we use byte pair tokenizers to represent source code as a sequence of tokens. Second, we illustrate how we use code property graphs (CPGs) to represent source code as a graph. Third, we introduce CodeBERT <cit.>, the sequence-based model that we employ for code clone detection. Fourth, we present CodeGraph <cit.>, the graph-based model that we employ for code clone detection. §.§ Code TokenizationWe apply the default Byte Pair Encoding (BPE) tokenizer of CodeBERT <cit.> to represent the source code as a sequence of tokens. Figure <ref>b shows an example of how the BPE tokenizer splits the source code in Figure <ref>a into word and subword tokens.§.§ Code RepresentationsWe use a graph representation of source code that consists of nodes and edges connecting source code tokens. To generate this representation, we apply the following steps: First, we use a lexical parser to produce the abstract syntax tree (AST) of the source code. Second, we extract the data flow information from the AST. Third, we merge the AST and DFG to form one graph, which we call the Code Property Graph. This is further pruned and standardized across source code languages to produce the standard CPG. §.§.§ Abstract Syntax Tree (AST) We use Tree-sitter <cit.>, a lexical parser, to generate the abstract syntax tree (AST) of the source code for any language. We currently use it for Java and Python, but it supports 141 different languages. Figure <ref>c shows the AST generated from the sample source code in Figure <ref>a.§.§.§ Data Flow Graph (DFG) We use Microsoft's Data Flow Generator (DFG) <cit.> to generate a data flow graph (DFG). This DFG generator takes the AST from the previous step and adds the data flow edges to it to form the DFG. Figure <ref>d shows the DFG for the same example source code in Figure <ref>a. We can see that there is a data flow edge between the integer literal '5' and the variable 'num', as '5' is assigned to 'num'. There is also a data flow edge between the second occurrence of 'num' and the print statement, as 'num' is used as an argument. §.§.§ Standard Code Property Graph (CPG)We standardize the DFG to a code property graph (CPG), which is our final graph representation of source code. We perform two main operations to standardize the DFG across languages. First, we prune the graph of nodes that do not add value to the model's understanding, such as opening and closing brackets that are implicitly understood when there is a method call. Second, we standardize the node type labels across languages so that the model can recognize them consistently across languages. A minimal sketch of these two operations is given below.
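The following Python sketch illustrates the parse, prune and standardize steps on a toy scale. It assumes the py-tree-sitter (>= 0.22) API together with the tree-sitter-python grammar package; NODE_TYPE_MAP and PRUNE are illustrative stand-ins for our full standardization tables rather than the exact contents of our released code.
[language=Python]
# pip install tree-sitter tree-sitter-python  (py-tree-sitter >= 0.22 API assumed)
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))

# Illustrative mappings only: language-specific node types are renamed to
# language-agnostic standard labels, and punctuation nodes are pruned.
NODE_TYPE_MAP = {"module": "_program", "integer": "_integer",
                 "identifier": "_identifier", "call": "_call"}
PRUNE = {"(", ")", "[", "]", "{", "}", ":", ","}

def standard_node_types(code: str) -> list:
    tree = parser.parse(code.encode("utf8"))
    out, stack = [], [tree.root_node]
    while stack:
        node = stack.pop()
        if node.type in PRUNE:
            continue  # drop nodes that add no semantic value
        out.append(NODE_TYPE_MAP.get(node.type, node.type))
        stack.extend(node.children)
    return out

print(standard_node_types("num = 5\nprint(num)\n"))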
For example, in Figure <ref>e we see that the root node 'module' and the 'integer' node are standardized and replaced with '_program' and '_integer' as standard node types.The major impact of a standard CPG can be seen in Figure <ref>, where two programs that print "hello, name" are written in Java and Python. The programs look different as raw code, but they have the same functionality and semantics. The standard CPGs look very similar in both cases, as shown in Figures <ref>c and <ref>d. We provide more examples of standard CPGs in Appendix <ref>. This supports our claim that this type of code representation is more suitable than a sequence of code for identifying code clones.§.§ Sequence-based Model: CodeBERTTo build a sequence model for source code clone detection, we use CodeBERT <cit.> as a pre-trained model. We fine-tune CodeBERT on the labelled source code clone detection data-set. The fine-tuning task is a binary classification task: the source code pair is passed sequentially through CodeBERT, which acts as an encoder, and the two representation vectors coming out of this encoder are concatenated and passed to a shallow 2-layer MLP classifier that gives the final output, i.e. whether the pair is a clone or not. The major advantage of using this state-of-the-art encoder is that it can help capture both the syntactic and semantic information of the PL code, by leveraging large-scale pre-training data from multiple languages.§.§ Graph-based Model: CodeGraphWe use CodeGraph4CCDetector <cit.> as the graph-based model for our source code clone detection; it is from here on referred to as the CodeGraph model. This model was originally used by its authors on the BCB data-set for its classification, and hence we keep the pipeline as it is. However, we trained our own word2vec model <cit.>, using gensim <cit.>, to keep it consistent with respect to the sequence model. This word2vec model generates the source code token embeddings for our source code. We train it on the source tokens produced by CodeBERT's <cit.> tokenizer, which is a Byte Pair Encoding tokenizer. This helps in two ways: firstly, it keeps the comparison with the sequence model consistent, and secondly, it helps to retain the word meanings of human-written source code variable names and the like. Once we obtain the vector representation of each graph node by applying the word2vec model to every node of the CPG, we use the CodeGraph architecture as it is. Similar to the sequence model, we pass the code pair sequentially to the CodeGraph model, which generates the graph-level representations. These representations are then passed to a shallow LSTM layer which performs binary classification on the graph-level representation.§ EXPERIMENTAL DESIGN Based on our proposed methodology, we conduct experimentation on the following research questions (RQs):* RQ1: Will a graph-based model that leverages both structural and semantic information surpass a sequence-based model in an in-domain static source code analysis?* RQ2: Will a graph-based model trained on multiple source code languages outperform a sequence-based model in cross-lingual generalization and semantic extraction?* RQ3: Will a graph-based model excel over a sequence-based model in the domain adaptation of zero-shot source code clone classification? §.§ Experiment Data For our experimental setups, we perform clone detection on two publicly available data-sets. The first one is Big Clone Bench (BCB), a Java language data-set that was originally introduced by <cit.>.
We used the version of BCB that was filtered according to FA-AST <cit.>. BCB contains 9,134 Java methods, which generate over 2M combinations of clone and non-clone code pairs. The second one is PoolC, which consists of over 6M Python code snippets that were extracted from Hugging Face <cit.>.We applied various parameters to limit the data-set for the experimentation phase. We restricted the number of lines to be between 5 and 100, the maximum number of characters to be 2000, and the maximum number of nodes in the graph to be 100. The details of how the distribution changed before and after applying the thresholds are given in Appendix <ref>. Table <ref> shows the summary of the average counts for the filtered files according to each data-set (BCB, PoolC, and their combination, Mix1).We randomly sampled pairs of clones and non-clones from the filtered file set to form the data-set. Table <ref> summarizes the data-set pairs according to each data-set.§.§ Experimental Setup We chose the state-of-the-art sequence model and graph model, namely CodeBERT <cit.> and CodeGraph <cit.>, respectively, to conduct various experiments. To answer the research questions, we designed the experiments around them as follows. * Experiment 1: We train and evaluate the sequence and graph models independently on each of the data-sets, namely BCB and PoolC, to compare their performance within the same domain as baselines.* Experiment 2: We train and evaluate the sequence and graph models on the Mix1 data-set, which is a mixture of data from both domains, to examine their cross-domain learning and generalization capabilities. * Experiment 3: We train the sequence and graph models on the BCB data-set and test them on the PoolC data-set, and vice versa, to assess their cross-domain zero-shot performance. §.§ Model Hyper-parameters We use the same machines with an Intel® Xeon® Gold 5222 and one Quadro RTX 6000 to train both the sequence and graph models (CodeBERT and CodeGraph, respectively) in order to maintain a consistent experimentation environment. The maximum batch size that CodeBERT can run on a single RTX 6000 is 16 code pairs, or 32 code snippets per batch. We also set the batch size of CodeGraph to the same value. The other hyper-parameters used for training these models are given in Appendix <ref>. § RESULTS§.§ Experiment 1We train the sequence and graph models (CodeBERT and CodeGraph, respectively) on two datasets: BCB and PoolC. This leads to four model trainings and evaluations, as shown in Table <ref>. We select the best-performing epochs for each model (the 3rd epoch for one model and the 2nd epoch for the other). We find that CodeGraph consistently outperforms CodeBERT on both datasets, demonstrating that CodeGraph has a better learning capability on source code than CodeBERT under limited data and constrained environment conditions. We highlight statistically significant experimental results in the tables based on bootstrap testing <cit.> with a p-value below 0.05 for statistical significance, comparing CodeBERT and CodeGraph.Answer to RQ1: The baselines on the BCB and PoolC data-sets suggest that the graph-based model outperforms the sequence-based model. This suggests that the graph model can better capture the structural and semantic information of the source code than the sequence model.§.§ Experiment 2 We use the same model architectures from Experiment 1, but we train them on a cross-lingual data-set (Mix1) that combines both the BCB and PoolC data sets. We then evaluate these models on the Mix1 data-set as well as the individual BCB and PoolC data-sets. The results are shown in Table <ref>.The evaluation results on the Mix1 dataset for both CodeBERT and CodeGraph are intermediate between those of the single-language models trained in Experiment 1.
This is further confirmed by the evaluation results on the individual BCB and PoolC data-sets, where we observe that cross-lingual training improves the performance of CodeGraph on both data-sets, from 83.90 to 87.64 on the PoolC data-set and from 98.87 to 99.42 on the BCB data-set, indicating that CodeGraph generalizes better on source code with cross-lingual training. On the other hand, we observe that cross-lingual training does not improve the performance of CodeBERT as much as that of CodeGraph, decreasing it by 0.55 on the PoolC data-set and increasing it by only 0.19 on the BCB data-set. Answer to RQ2: The results in the cross-lingual setting of the CodeBERT and CodeGraph models, i.e. trained on the Mix1 data-set, demonstrate that CodeGraph is a more generalized model than CodeBERT, as evidenced by the improvement in the performance of CodeGraph, especially on the PoolC data-set, whereas we observe a decline in the performance of CodeBERT on the PoolC data-set and only a marginal improvement on the BCB dataset. This implies that graph models are more adaptable for cross-lingual source code analysis. §.§ Experiment 3 We test the domain adaptation of the pre-trained models from Experiment 1, i.e., CodeBERT and CodeGraph, on a different source code language than the one they were trained on. For instance, we evaluate CodeBERT trained on BCB on PoolC, and vice versa. We repeat the same procedure with CodeGraph without changing the environment. The results of this experiment are shown in Table <ref>.This experiment simulates domain adaptation from Python source code to Java source code and vice versa. The results show that CodeBERT performs very poorly on a different domain, with F1 scores of 33.71 and 36.56 on the two cross-domain evaluations, respectively. This indicates that the model has over-fitted on its training domain and cannot generalize well to a new domain. We observe the same trend with other epochs. On the other hand, CodeGraph performs much better than CodeBERT on a different domain, with F1 scores of 53.67 and 46.44 on the two cross-domain evaluations, respectively. This demonstrates that CodeGraph has a better domain adaptation capability than CodeBERT in a zero-shot learning setting, although it does not achieve state-of-the-art performance. This suggests that representing source code as a graph rather than a sequence is a promising direction for future research.Answer to RQ3: The results on the domain adaptation task show that CodeGraph outperforms CodeBERT in adapting to a new source code language domain without any labeled data for that domain during training. This indicates that the graph-based model has an advantage over the sequence-based model in the domain adaptation of zero-shot source code clone classification. §.§ DiscussionWe analyze the false predictions made by both models, CodeBERT and CodeGraph, and find that most of them are false positives, especially from CodeBERT. When we examine these examples from CodeBERT, we notice that the model predicts them as false positives with high confidence, whereas the CodeGraph model either predicts them as true negatives or as false positives with low confidence. This indicates that adjusting the classification threshold for CodeGraph could improve its overall performance. However, for CodeBERT, we observe that the model is confused by very similar keywords in the code pair. We provide a detailed analysis of this in Appendix <ref>.We also analyze the false negatives of CodeBERT on the PoolC data-set, which are the most frequent among all the models and data-sets. We find that these false negatives are mainly due to the large size differences between the code pairs in the PoolC data-set. The examples we inspect are clones of Type IV, but they have one code snippet much longer than the other. This makes it difficult for CodeBERT to recognize them as clones, and it predicts them as non-clones instead.
We provide some detailed explanation and examples of these false negatives in Appendix <ref>. § LIMITATIONS AND FUTURE RESEARCH§.§ Limitations We note the following limitations and concerns in the study: * The experimental data-set size is drastically reduced in order to time-bound the experiments for this research project. There are two main cuts in the data-set size: firstly, source code files are filtered to only those which have at most 100 nodes; secondly, the sample size of clone and non-clone pairs is restricted to 50K data-points only. Refer to Appendix <ref> for the filtering criteria and Section <ref> for the data-point split samples. * The BCB data-set was significantly reduced after applying the thresholding criteria, resulting in only 2K files out of the original 9K. This led to an over-sampled distribution of the training pairs, which consisted of 50K data-points. As shown in the Results Section <ref>, this caused the BCB data-set model to over-fit the data, unlike the PoolC data-set model. * A potential limitation of this study is the discrepancy in the number of trainable model parameters between the sequence model (CodeBERT) and the graph model (CodeGraph). The sequence model has 125M parameters, more than 100 times the graph model's 1.1M parameters. This could raise the question of whether the graph model's superior performance is due to its inherent advantages or to its lower complexity. However, this also suggests that there is room for further improvement of the graph model by increasing its number of parameters. §.§ Future Research Some possible directions for future research based on the limitations are as follows:* To evaluate the impact of data-set size on the performance of the models, future research could use the complete and more diverse data-sets, including source code files with more than 100 nodes and sample sizes going upwards of a million. This would help to test the generalizability and robustness of the models across different domains and languages at a larger scale. * To help reduce the over-fitting problem on the Java data-set (i.e. the BCB data-set), future research could use samples from other data-sets, like CodeForces <cit.> and Google Code Jam <cit.>, which can yield a more diversified data-set for the Java language. * To explore the potential of the graph model (CodeGraph), future research could increase its number of trainable parameters and compare its performance with the sequence model (CodeBERT) at the same complexity level. This would help to determine whether the bigger graph model still has inherent advantages over the sequence model. Conversely, one could reduce the parameters of the sequence model and check the impact.* The PoolC false negatives are mainly due to differences in code length. This could be addressed by training with longer snippets of code.* Train a mixture model on various source code languages, not limited to two, such as JavaScript, SQL, HTML, etc., and evaluate its generalization ability on different domains together. Moreover, cross-domain example pairs could be generated from CodeForces <cit.>, an online platform for competitive programming that supports multiple languages. § CONCLUSION In this paper, we have shown that graph-based methods are superior to sequence-based methods for source code clone detection. We have used the state-of-the-art models CodeBERT <cit.> and CodeGraph4CCDetector <cit.> to conduct various experiments on two benchmark data-sets: BCB <cit.> and PoolC <cit.>.
We have demonstrated that graph models can better capture the structural and semantic information of the source code than sequence models in a series of three experimental setups, and that they can generalize better across different source code languages and domains. We have also provided efficient and scalable code for generating standard CPG representations of source code, along with the re-implemented code for the sequence-based and graph-based models. Our work has important implications for future research on source code analysis, as it suggests that representing source code as a graph rather than a sequence is a promising direction for enhancing the performance and generalization of static source code analysis models. § ACKNOWLEDGEMENTSWe are deeply grateful to our supervisor, https://scholar.google.com/citations?hl=en user=WMYcG5EAAAAJ Dr. Julia Ive, for her constant guidance, support, and feedback throughout this research project. She has inspired and motivated us with her expertise and experience, and we have learned a lot from her. We also thank Vishal Yadav, Mashhood Alam, and other peers as well as the reviewers for their valuable comments and suggestions that enhanced the quality of our paper. Moreover, we acknowledge Queen Mary University and the organizations that supported this work, as well as the authors of the data-sets and models that we used in our experiments. Lastly, we thank our families for their love and encouragement, and the almighty for his blessings and grace.§ DATA AVAILABILITY We have made the data and source code that we used in this paper publicly accessible. Source code is available for replication at: <https://github.com/Ataago-AI/clone-detection>, and the filtered data-sets can be downloaded from: <https://drive.google.com/drive/folders/1phx8k_JB8HC_HW3nhZLec9BKjRxNCN2b?usp=drive_link>§ CODE REPRESENTATIONS§.§ Python Standard Code Property Graphs Example pairs §.§ Java Standard Code Property Graphs Example pairs § DATA-SET The data-set is filtered based on various parameters, such as the number of lines, the number of characters, and the number of nodes. Given below are charts showing how the data looks before and after filtering. §.§ Java Data : BCBGiven in Figure <ref> are the original and filtered distributions for the Java data from the BigCloneBench (BCB) dataset <cit.>. §.§ Python Data : PoolCGiven in Figure <ref> are the original <ref> and filtered <ref> distributions for the Python data from the PoolC dataset <cit.>.§.§ Java and Python Data : mix 1Given in Figure <ref> is the distribution for the mixture of the BCB and PoolC datasets. This dataset is made by randomly sampling the filtered BCB and PoolC examples from each of the train, valid, and test splits. This results in a total of 19K source code files from both Java and Python together, which yields a total of 25K Java and 25K Python labelled pairs.§ TRAINING§.§ Model hyper-parameters §.§ CodeBERT: Sequence Model §.§ CodeGraph: Graph Model § RESULT ANALYSIS§.§ Confusion Matrix §.§.§ BCB Dataset Given in Figure <ref> are the confusion matrices for the CodeBERT and CodeGraph models. §.§.§ PoolC DatasetGiven in Figure <ref> are the confusion matrices for the CodeBERT and CodeGraph models.§.§ False Positive Analysis §.§.§ BCB Dataset * There are 18 false positives predicted by both models combined. Here we observe that the prediction confidence from CodeBERT is very high (above 0.9), whereas CodeGraph's prediction confidence is on the lower side, at 0.5 to 0.6, as observed in Code Pair Listings [<ref> & <ref>] and [<ref> & <ref>].
This suggests that adjusting the classification threshold for CodeGraph can help reduce the false positives which are common to both models. * The false positives from CodeGraph that are true negatives for CodeBERT consistently have low confidence, below 0.7. These false positives are only predicted by CodeGraph, and CodeBERT very strongly predicts them as true negatives. The Code Pair Listing [<ref> & <ref>] follows this trend. * The false positives from CodeBERT that are true negatives for CodeGraph consistently have prediction confidence just below 0.9. This can be misleading, as this is a rather high confidence from CodeBERT. On the other side, CodeGraph does not have very strong predictions either, but it at least consistently predicts them as true negatives. An example is the Code Pair Listing [<ref> & <ref>].
[Code Pair Example | true_label : NoClone | pred_CodeBERT : Clone(0.91) | pred_CodeGraph : Clone(0.62)]
[language=Java, caption=code 1, linewidth=.99, label=code:1.1]
public PhoneDurationsImpl(URL url) throws IOException {
    BufferedReader reader;
    String line;
    phoneDurations = new HashMap();
    reader = new BufferedReader(new InputStreamReader(url.openStream()));
    line = reader.readLine();
    while (line != null) {
        if (!line.startsWith("***")) {
            parseAndAdd(line);
        }
        line = reader.readLine();
    }
    reader.close();
}
[language=Java, caption=code 2, linewidth=.99, label=code:1.2]
public static String getMyGlobalIP() {
    try {
        URL url = new URL(IPSERVER);
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String ip = in.readLine();
        in.close();
        con.disconnect();
        return ip;
    } catch (Exception e) {
        return null;
    }
}
[Code Pair Example | true_label : NoClone | pred_CodeBERT : Clone(0.91) | pred_CodeGraph : Clone(0.59)]
[language=Java, caption=code 1, linewidth=.99, label=code:2.1]
public static LinkedList<String> read(URL url) throws IOException {
    LinkedList<String> data = new LinkedList<String>();
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    BufferedReader br = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String input = "";
    while (true) {
        input = br.readLine();
        if (input == null) break;
        data.add(input);
    }
    br.close();
    return data;
}
[language=Java, caption=code 2, linewidth=.99, label=code:2.2]
protected Reader getText() throws IOException {
    BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream()));
    String readLine;
    do {
        readLine = br.readLine();
    } while (readLine != null && readLine.indexOf("</table><br clear=all>") < 0);
    return br;
}
[Code Pair Example | true_label : NoClone | pred_CodeBERT : NoClone(0.99) | pred_CodeGraph : Clone(0.65)]
[language=Java, caption=code 1, linewidth=.99, label=code:bcb:1300030]
public PhoneDurationsImpl(URL url) throws IOException {
    BufferedReader reader;
    String line;
    phoneDurations = new HashMap();
    reader = new BufferedReader(new InputStreamReader(url.openStream()));
    line = reader.readLine();
    while (line != null) {
        if (!line.startsWith("***")) {
            parseAndAdd(line);
        }
        line = reader.readLine();
    }
    reader.close();
}
[language=Java, caption=code 2, linewidth=.99, label=code:bcb:20955454]
public void alterarQuestaoMultiplaEscolha(QuestaoMultiplaEscolha q) throws SQLException {
    PreparedStatement stmt = null;
    String sql = "UPDATE multipla_escolha SET texto=?, gabarito=? WHERE id_questao=?";
    try {
        for (Alternativa alternativa : q.getAlternativa()) {
            stmt = conexao.prepareStatement(sql);
            stmt.setString(1, alternativa.getTexto());
            stmt.setBoolean(2, alternativa.getGabarito());
            stmt.setInt(3, q.getIdQuestao());
            stmt.executeUpdate();
        }
        conexao.commit();
    } catch (SQLException e) {
        conexao.rollback();
        throw e;
    }
}
[Code Pair Example | true_label : NoClone | pred_CodeBERT : Clone(0.90) | pred_CodeGraph : NoClone(0.67)]
[language=Java, caption=code 1, linewidth=.99, label=code:bcb:5510183]
public static String getMyGlobalIP() {
    try {
        URL url = new URL(IPSERVER);
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String ip = in.readLine();
        in.close();
        con.disconnect();
        return ip;
    } catch (Exception e) {
        return null;
    }
}
[language=Java, caption=code 2, linewidth=.99, label=code:bcb:9356670]
private FTPClient loginToSharedWorkspace() throws SocketException, IOException {
    FTPClient ftp = new FTPClient();
    ftp.connect(mSwarm.getHost(), mSharedWorkspacePort);
    if (!ftp.login(SHARED_WORKSPACE_LOGIN_NAME, mWorkspacePassword)) {
        throw new IOException("Unable to login to shared workspace.");
    }
    ftp.setFileType(FTPClient.BINARY_FILE_TYPE);
    return ftp;
}
§.§.§ PoolC Dataset
* There are 344 total false positives predicted by both models combined. Here we observe that the confidence follows a similar trend to the BCB dataset: CodeBERT has a higher false-positive prediction confidence, whereas CodeGraph has a lower prediction confidence, with a maximum of 0.71. We see that the positive predictions mainly come from confusion over syntactically identical keywords present in both snippets, for example extensive usage of if statements and for loops. This can be seen in the example Code Pair Listing [<ref> & <ref>].
* The false positives from CodeGraph that are true negatives for CodeBERT are seen to have a consistently lower prediction score from CodeGraph, with a maximum of 0.78 and an average of 0.58. This again suggests that CodeGraph understands the semantics and, with a higher classification threshold, should improve significantly. The rationale behind why the graph model gets these wrong seems to be confusion over the syntactic structure of the code: the pairs might not share the same keywords, but the syntactic structure is dominating. A code pair following this trend is Listing [<ref> & <ref>].
* The false positives from CodeBERT that are true negatives for CodeGraph show stronger true-negative predictions from the graph model, with an average score of 0.81, again showing that graph models are superior at learning structural and semantic information. What might be going wrong with the sequence model is mostly keywords having similar names in sequence while not being syntactically similar, as observed in code pair Listing [<ref> & <ref>]. A small sketch of the threshold adjustment discussed above follows.
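To make the recurring threshold observation concrete, the sketch below sweeps a decision threshold over clone-confidence scores and reports the best F1. The scores and labels here are randomly generated placeholders, not our experimental outputs; with the real CodeGraph confidences one would load them from the evaluation run instead.
[language=Python]
import numpy as np

def f1_at(scores, labels, t):
    preds = (scores >= t).astype(int)
    tp = int(((preds == 1) & (labels == 1)).sum())
    fp = int(((preds == 1) & (labels == 0)).sum())
    fn = int(((preds == 0) & (labels == 1)).sum())
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)  # placeholder clone labels
scores = np.clip(0.35 * labels + 0.65 * rng.random(1000), 0.0, 1.0)  # placeholder confidences

best_f1, best_t = max((f1_at(scores, labels, t), t) for t in np.linspace(0.3, 0.9, 61))
print(f"best F1 {best_f1:.3f} at threshold {best_t:.2f}")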
[Code Pair Example | true_label : NoClone | pred_CodeBERT : Clone(0.64) | pred_CodeGraph : Clone(0.63)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:5931]
n = int(input())
x = list(map(int, input().split()))
m = 10**15
for i in range(101):
    t = x[:]
    s = sum(list(map(lambda x: (x - i)**2, t)))
    m = min(m, s)
print(m)
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:6679]
a, b, c, d = map(int, input().split())
ans = -10**18 + 1
for i in [a, b]:
    for j in [c, d]:
        if ans < i * j:
            ans = i * j
print(ans)
[Code Pair Example | true_label : NoClone | pred_CodeBERT : NoClone(0.96) | pred_CodeGraph : Clone(0.60)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:4851]
import sys
x = int(input())
n = 1
while 100 * n <= x:
    if x <= 105 * n:
        print(1)
        sys.exit()
    n += 1
print(0)
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:7175]
s = str(input())
t = str(input())
revise = 0
len_str = len(s)
for i in range(len_str):
    if s[i] != t[i]:
        revise += 1
print(int(revise))
[Code Pair Example | true_label : NoClone | pred_CodeBERT : Clone(0.83) | pred_CodeGraph : NoClone(0.93)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:2388]
K, N = map(int, input().split())
A = list(map(int, input().split()))
max = K - (A[N-1] - A[0])
for i in range(N-1):
    a = A[i+1] - A[i]
    if max < a:
        max = a
print(K - max)
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:1287]
x = float(input())
if 1 >= x >= 0:
    if x == 1:
        print(0)
    elif x == 0:
        print(1)
§.§ False Negative Analysis
§.§.§ BCB Dataset
* There is zero overlap of false negatives between the two models. This is partly because there are very few false negatives overall, due to the models tending to overfit on this dataset.
* The false negatives from CodeGraph that are true positives for CodeBERT show consistently lower confidence, at an average of 0.7. This shows that we could tweak the classification threshold to handle these lower confidence scores. Further inspection of these cases, of which there were just 14, shows that these false-negative predictions are mostly caused by variation in parameters, which seems to mislead the model into not detecting them as clones. Moreover, we can argue these are Type IV clones, which would be better identified given more context. An example can be seen in the Code Pair Listing [<ref> & <ref>].
* The false negatives from CodeBERT that are true positives for CodeGraph have very high confidence. This is not helpful, as the sequence model is clearly predicting them wrongly as negatives with an average confidence of 0.99. This is a very interesting case, as all 52 of these cases have code that is very different syntactically but semantically the same. This shows how the graph model has an edge over the sequence model.
This can be seen in the Code Pair Listing [<ref> & <ref>].
[Code Pair Example | true_label : Clone | pred_CodeBERT : Clone(0.91) | pred_CodeGraph : NoClone(0.58)]
[language=Java, caption=code 1, linewidth=.99, label=code:bcb:23677124]
public FTPClient sample1c(String server, int port, String username, String password) throws SocketException, IOException {
    FTPClient ftpClient = new FTPClient();
    ftpClient.setDefaultPort(port);
    ftpClient.connect(server);
    ftpClient.login(username, password);
    return ftpClient;
}
[language=Java, caption=code 2, linewidth=.99, label=code:bcb:23677129]
public FTPClient sample3b(String ftpserver, String proxyserver, int proxyport, String username, String password) throws SocketException, IOException {
    FTPHTTPClient ftpClient = new FTPHTTPClient(proxyserver, proxyport);
    ftpClient.connect(ftpserver);
    ftpClient.login(username, password);
    return ftpClient;
}
[Code Pair Example | true_label : Clone | pred_CodeBERT : NoClone(0.99) | pred_CodeGraph : Clone(0.97)]
[language=Java, caption=code 1, linewidth=.99, label=code:bcb:3257108]
public static String getMD5(String s) {
    try {
        MessageDigest m = MessageDigest.getInstance("MD5");
        m.update(s.getBytes(), 0, s.length());
        s = new BigInteger(1, m.digest()).toString(16);
    } catch (NoSuchAlgorithmException ex) {
        ex.printStackTrace();
    }
    return s;
}
[language=Java, caption=code 2, linewidth=.99, label=code:bcb:21044331]
private static byte[] getKey(String password) throws UnsupportedEncodingException, NoSuchAlgorithmException {
    MessageDigest messageDigest = MessageDigest.getInstance(Constants.HASH_FUNCTION);
    messageDigest.update(password.getBytes(Constants.ENCODING));
    byte[] hashValue = messageDigest.digest();
    int keyLengthInbytes = Constants.ENCRYPTION_KEY_LENGTH / 8;
    byte[] result = new byte[keyLengthInbytes];
    System.arraycopy(hashValue, 0, result, 0, keyLengthInbytes);
    return result;
}
§.§.§ PoolC Dataset
* There are 34 false negatives predicted by both models combined. Here we see that the average score from the graph model is 0.64, whereas the average score from the sequence model is 0.87. This shows the sequence model being very confidently wrong, which is not a good sign. When the examples for this case are examined, we find that the clone pairs distinctly differ in code length, which seems to be the reason for these wrong predictions.
* The false negatives from CodeGraph that are true positives for CodeBERT show consistently low confidence, with an average of 0.63, whereas the true positives from CodeBERT also have a rather low average confidence of 0.78. Here CodeGraph is only marginally wrong, and this again seems to be due to code length: the difference in the sizes of the code snippets is large.
* The false negatives from CodeBERT that are true positives for CodeGraph number around 83. These cases are strongly misclassified, at an average score of 0.81 for CodeBERT, which is not a good sign, as the model predicts them wrongly with confidence; again, the reason appears to be the very different snippet sizes. Similarly, the average confidence of the true positives from CodeGraph is 0.61, which is again not that confident.
This can be seen with the Code Pair Listing [<ref> & <ref>]. A small sketch quantifying this length-gap effect follows the listings below.
[Code Pair Example | true_label : Clone | pred_CodeBERT : NoClone(0.88) | pred_CodeGraph : NoClone(0.96)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:7785]
while True:
    a = input()
    if a == '0':
        break
    print(sum(map(int, *a.split())))
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:4401]
n = int(input())
res = 0
while n != 0:
    res = n
    dropped = n
    while dropped // 10 != 0:
        dropped = dropped // 10
        res += dropped
    print(res)
    res = 0
    n = int(input())
[Code Pair Example | true_label : Clone | pred_CodeBERT : Clone(0.52) | pred_CodeGraph : NoClone(0.59)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:5696]
while True:
    n = input()
    if n == "0":
        break
    print(sum([int(i) for i in n]))
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:3165]
while True:
    num = input()
    if int(num) == 0:
        break
    sum = 0
    for i in num:
        a = int(i)
        sum += a
    print(sum)
[Code Pair Example | true_label : Clone | pred_CodeBERT : NoClone(0.62) | pred_CodeGraph : Clone(0.66)]
[language=Python, caption=code 1, linewidth=.99, label=code:poolc:6612]
H, A = map(int, input().split())
cnt = 0
while True:
    if H <= 0:
        print(cnt)
        break
    else:
        H -= A
        cnt += 1
[language=Python, caption=code 2, linewidth=.99, label=code:poolc:8682]
h, a = map(int, input().split())
an, bn = divmod(h, a)
if bn == 0:
    print(an)
else:
    print(an + 1)
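The length-gap explanation for the false negatives above can be probed with a small sketch like the following; the helper and the comparison it suggests are our own illustration, not part of the released pipeline.
[language=Python]
def length_gap(code_a: str, code_b: str) -> float:
    # Relative difference in line counts between the two snippets of a pair.
    la, lb = len(code_a.splitlines()), len(code_b.splitlines())
    return abs(la - lb) / max(la, lb, 1)

# Comparing the mean gap over false-negative pairs against the mean gap over
# true-positive pairs would support (or refute) the size-difference hypothesis.
short_snippet = "while True:\n    a = input()\n    if a == '0':\n        break"
long_snippet = "n = int(input())\nres = 0\n" * 5  # stand-in for a much longer clone
print(f"length gap of the example pair: {length_gap(short_snippet, long_snippet):.2f}")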
http://arxiv.org/abs/2312.16488v1
{ "authors": [ "Mohammed Ataaur Rahaman", "Julia Ive" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231227093031", "title": "Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection" }
Make BERT-based Chinese Spelling Check Model Enhanced by Layerwise Attention and Gaussian Mixture Model1st Yongchang Cao, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, [email protected] 2nd Liang He, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, [email protected] 3rd Zhen Wu, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, [email protected] 4th Xinyu Dai*, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, [email protected] January 14, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= A dg manifold of amplitude +1 in the C^∞-context can be thought of as a derived intersection of a section s and the zero section of a vector bundle E. In this paper, we compute the Atiyah class of a dg manifold of amplitude +1. In particular, we prove that the Atiyah class of a dg manifold of amplitude +1 vanishes if and only if the intersection of s and the zero section is a clean intersection. § INTRODUCTION The present paper is devoted to studying the Atiyah class of dg manifolds of amplitude +1. The notion of dg manifolds (a.k.a. Q-manifolds) is a generalisation of the notion of smooth manifolds, in which the algebra of smooth functions is enriched to include graded commutative elements. Dg manifolds have appeared in the mathematical physics literature, in relation to BRST quantisation and the AKSZ formalism <cit.>. They naturally arise in various fields of mathematics, such as Lie theory, differential geometry and homotopy theory. Moreover, they are closely related to the emerging field of derived differential geometry <cit.>. Recall that a dg manifold is a ℤ-graded manifold 𝓜, equipped with a homological vector field, i.e. a degree +1 derivation Q of C^∞(𝓜) satisfying [Q,Q]=2Q^2=0.An interesting example is a dg manifold of amplitude +1, which is essentially a vector bundle E equipped with a section s∈Γ(E): the graded algebra of functions is Γ(Λ^-∙E^∨) and the homological vector field is the interior product ι_s. A dg manifold of amplitude +1 can be thought of as a model for a derived intersection of the section s and the zero section of E. In studying dg manifolds, the Atiyah class plays an important role.Originally, Atiyah <cit.> introduced the Atiyah class of holomorphic vector bundles as an obstruction to the existence of holomorphic connections. Later, the notion of the Atiyah class was generalised to dg manifolds. The Atiyah class of dg manifolds was first introduced by Shoikhet <cit.> in terms of Lie algebra cohomology and 1-jets of tangent bundles. It also appeared in the work of Lyakhovich–Mosman–Sharapov <cit.> (in their notation, it is B_1) and was studied systematically by Mehta–Stiénon–Xu <cit.>. It is a key ingredient in the formality theorem and the Duflo–Kontsevich type theorem for dg manifolds <cit.>, an extension of the famous work of Kontsevich <cit.>. It also appears in the construction of Kapranov's L_∞ algebra for dg manifolds <cit.>, which generalises the celebrated work of Kapranov <cit.>.
See also <cit.>. Following <cit.>, we recall the definition of the Atiyah class of a dg manifold below.Let (𝓜,Q) be a dg manifold. The space of (1,2)-tensors on 𝓜, equipped with the Lie derivative L_Q along the homological vector field Q, forms a cochain complex. Given an affine connection ∇ on 𝓜, the (1,2)-tensor At^∇ of degree +1 defined byAt^∇(X,Y)=[Q, ∇_XY]-∇_[Q,X]Y -(-1)^|X|∇_X[Q,Y]for X,Y∈𝔛(𝓜), is a cocycle in this cochain complex. The element At^∇ is called the Atiyah cocycle associated with ∇ and its cohomology class At(𝓜,Q):=[At^∇], taken in the first cohomology of this cochain complex, is independent of the choice of connection ∇. The cohomology class At(𝓜,Q) is called the Atiyah class of the dg manifold (𝓜,Q). The Atiyah class of a dg manifold (𝓜,Q) is an obstruction to the existence of an affine connection compatible with the homological vector field Q.In particular, it was shown by Chen–Xiang–Xu <cit.> (see also <cit.>) that the Atiyah class of the dg manifold arising from a complex manifold is the Atiyah class of the complex manifold, introduced by Atiyah <cit.>. Naturally, one can ask the following question.What is the Atiyah class of dg manifolds of amplitude +1? This paper attempts to answer this question. Before we present our main theorem, we need the notion of clean intersection <cit.>.Given two embedded submanifolds X and Y in W, we say X intersects Y cleanly if the intersection Z:=X∩ Y is a smooth manifold and T_pX∩ T_pY=T_pZ for each p∈ Z.It can be checked (see for instance <cit.>) that the embedded submanifolds X and Y intersect cleanly in W if and only if, for each intersection point p∈ X∩ Y, there exist an open neighbourhood U of p and a chart ϕ:U→ℝ^dim W such that ϕ(X∩ U) and ϕ(Y∩ U) are vector subspaces of ℝ^dim W.Thus, around each intersection point p∈ X∩ Y, there is a submanifold W'⊂ W near p such that X and Y intersect transversally in W'. Naively speaking, the space obtained by a clean intersection is as good as the space obtained by a transversal intersection. An answer to Question <ref> is our main theorem. [Theorem <ref>] Let E be a vector bundle and let s be a section of E. The Atiyah class of the dg manifold (E[-1],ι_s) vanishes if and only if the intersection of s and the zero section 0_E is clean. The proof of Theorem <ref> is based on the following observation: the Atiyah class of a dg manifold of positive amplitude is a local invariant (Proposition <ref>).More precisely, let (𝓜,Q) be a dg manifold of positive amplitude whose base manifold is M.For each open subset U⊂ M, let (𝓜_U,Q_U) denote the dg submanifold of 𝓜 obtained by restricting the base manifold M to the open subset U. With this notation, we prove that the Atiyah class of (𝓜,Q) vanishes if and only if there exists an open cover {U_i}_i∈ I of M such that the Atiyah class of (𝓜_U_i,Q_U_i) vanishes for all i∈ I. In general, such locality of the Atiyah class is not expected. For instance, the Atiyah class of a dg manifold arising from a complex manifold is an obstruction to the existence of a holomorphic connection, but holomorphic connections always exist locally. In other words, if (𝓜,Q) is the dg manifold arising from a complex manifold X, then for each p∈ X, there exists a small neighbourhood U⊂ X of p such that the Atiyah class of (𝓜_U,Q_U) vanishes, whilst, for many complex manifolds X, the Atiyah class of (𝓜,Q) does not vanish. In fact, the property which enables such locality follows from the linearity of the homological vector field over smooth functions on the base manifold.For an arbitrary dg manifold (𝓜,Q) with base manifold M, the homological vector field Q is not necessarily C^∞(M)-linear.
However, if 𝓜 is of positive amplitude, then for degree reasons, the homological vector field Q is always C^∞(M)-linear. Going back to the dg manifold (E[-1],ι_s) of amplitude +1, it suffices to consider the Atiyah class of (E|_U[-1],ι_s|_U) for a small neighbourhood U of each point p∈ M. In this setting, we explicitly compute an Atiyah cocycle and the coboundary operator in local coordinates. Finally, we prove that the Atiyah class of (E|_U[-1],ι_s|_U) vanishes for some open neighbourhood U of p if and only if the section s intersects the zero section 0_E cleanly in an open neighbourhood of p.The work of Behrend–Liao–Xu <cit.> addresses the category of dg manifolds of positive amplitude, rather than dg manifolds of amplitude +1. Thus, it would be interesting to extend our work to the Atiyah class of dg manifolds of positive amplitude. Moreover, they introduced a notion of weak equivalence between dg manifolds of amplitude +1 (in fact, weak equivalence in the category of positive amplitude) so that a quasi-smooth derived manifold can be thought of as a dg manifold of amplitude +1 up to weak equivalence. Another interesting question is whether the Atiyah class of dg manifolds of amplitude +1 (and, more generally, the Atiyah class of dg manifolds of positive amplitude) is invariant under weak equivalence.§.§ Notations and conventionsThroughout this paper, the base field is the field of real numbers ℝ: vector spaces, manifolds, vector bundles and functions in this paper are over ℝ.For any smooth function f:ℝ^n→ℝ, we write f=f(x_1,⋯,x_n). The notation x^i denotes the coordinate function x^i:ℝ^n→ℝ defined by x^i(x_1,⋯,x_n)=x_i.We reserve the symbol M for a manifold exclusively. By a manifold, we mean a smooth manifold without boundary with the Hausdorff and second countability properties. The sheaf of smooth functions on M is denoted by C^∞_M. The algebra of globally defined smooth functions on M is C^∞(M)=C^∞_M(M).All gradings in this paper are ℤ-gradings and the symbol 𝓜 always denotes a finite-dimensional graded manifold. The abbreviation `dg' stands for `differential graded'. Let R be a graded ring. Given any element v in a graded R-module V=⊕_k∈ℤ V^k, the notation |v|=d means that v is a homogeneous element of degree d, or equivalently v∈ V^d.Whenever the symbol |v| appears, we assume that v is a homogeneous element. The suspension of the graded R-module V is V[1], whose homogeneous component of degree k is (V[1])^k=V^k+1.Given any graded vector bundle E, we use the symbol S(E) to denote the bundle of graded symmetric tensors of E.Let E→ M be a vector bundle. By abuse of notation, a section s∈Γ(E) is often identified with s∈Γ(E[-1]). Moreover, an M-connection ∇:𝔛(M)×Γ(E)→Γ(E) on E is often identified with an M-connection ∇:𝔛(M)×Γ(E[-1])→Γ(E[-1]) on E[-1]. § BACKGROUNDS ON DG MANIFOLDSIn this section, we present some background material on the Atiyah class of a dg manifold. Readers are invited to consult <cit.> for more detail.Let M be a smooth manifold and let C^∞_M be its sheaf of rings of smooth functions.A graded manifold 𝓜 with base M is a sheaf 𝒜 of graded commutative C^∞_M-algebras on M such that there exists a ℤ-graded vector space V and, for each p∈ M, an open neighbourhood U⊂ M of p satisfying𝒜(U) ≅ C^∞_M(U) ⊗ S(V^∨)where S(V^∨) is the graded symmetric tensor algebra of V^∨.The space of global sections of the sheaf 𝒜 will often be denoted by C^∞(𝓜).We say a graded manifold 𝓜 is of amplitude [n,m], for n≤ m, if the graded vector space V is of the formV = ⊕_i=n^m V_iwhere the graded vector space V_i consists of vectors of degree i. If 0<n≤ m, then we say that 𝓜 is of positive amplitude.
The amplitude [n,n] will simply be denoted by amplitude n.A graded manifold 𝓜 is called finite dimensional if both dim M<∞ and dim V < ∞. All graded manifolds in this paper will be finite dimensional.In the literature, such as <cit.>, the sheaf of C^∞_M-algebras is defined using formal power series on V rather than polynomial functions. However, all results in this paper are valid for both versions. Let M be a finite-dimensional smooth manifold and, for each i∈ℤ, let E_i→ M be a vector bundle of finite rank over M. By E_i[-i]→ M, we mean a graded vector bundle such that the space of sections Γ(E_i[-i]) is a graded C^∞(M)-module concentrated in degree i. The graded vector bundle E=⊕_iE_i[-i] forms a graded manifold 𝓜: the algebra of functions is 𝒜(U)=Γ(U; S(E^∨)), where S(E^∨) is the graded symmetric tensor algebra of E^∨. If rank(E_i)=0 for all but n≤ i≤ m, then the amplitude of the graded manifold 𝓜 is [n,m]. If such n,m∈ℤ exist, then 𝓜 is finite dimensional. A graded vector bundle π:ℰ→𝓜 is a vector bundle object in the category of graded manifolds. In terms of sheaves, a graded vector bundle is a sheaf of locally free graded C^∞(𝓜)-modules on M. A section s:𝓜→ℰ is a morphism of graded manifolds such that π∘ s = 𝕀_𝓜. The C^∞(𝓜)-module of all sections of ℰ over 𝓜 is denoted by Γ(𝓜;ℰ)=Γ(ℰ). An important example of a graded vector bundle over 𝓜 is the tangent bundle T𝓜. A section of T𝓜 is called a vector field, and the space of vector fields Γ(T𝓜), often denoted 𝔛(𝓜), is identified with the space of graded derivations Der(C^∞(𝓜)). The graded derivations Der(C^∞(𝓜)) carry a graded Lie algebra structure, defined by the graded commutator, hence so does 𝔛(𝓜). Given a graded manifold 𝓜, an 𝓜-connection on a graded vector bundle ℰ is an ℝ-bilinear map ∇:𝔛(𝓜)×Γ(ℰ)→Γ(ℰ) of degree 0 satisfying * ∇_fX s=f ·∇_X s,* ∇_X (f · s)= X(f)· s+(-1)^|f|·|X| f·∇_X s,for f∈ C^∞(𝓜), X∈𝔛(𝓜) and s∈Γ(ℰ).When ℰ=T𝓜, the 𝓜-connection ∇ on T𝓜 is called an affine connection. We say an affine connection ∇ is torsion-free if ∇_XY-(-1)^|X|·|Y|∇_YX = [X,Y] for X,Y∈𝔛(𝓜).It is well known that affine torsion-free connections always exist <cit.>.A dg manifold is a graded manifold 𝓜 together with a homological vector field, i.e. a vector field Q∈𝔛(𝓜) of degree +1 satisfying [Q,Q]= Q∘ Q-(-1)^1·1 Q∘ Q=2Q∘ Q=0.A dg vector bundle π:(ℰ,Q_ℰ)→ (𝓜,Q) is a vector bundle object in the category of dg manifolds. See <cit.> for the precise definition. In fact, given a dg manifold (𝓜,Q), a graded vector bundle ℰ→𝓜 carries a dg vector bundle structure if and only if there exists an operator D:Γ(ℰ)→Γ(ℰ) of degree +1 such that D^2=0 and D(f· e)=Q(f)· e +(-1)^|f| f· D(e) for f∈ C^∞(𝓜) and e∈Γ(ℰ). In particular, the tangent bundle T𝓜 naturally forms a dg vector bundle over (𝓜,Q) by setting D = L_Q= [Q, ·] to be the Lie derivative along the homological vector field Q. The corresponding homological vector field Q_T𝓜 on T𝓜 is the tangent lift of the homological vector field Q on 𝓜.Consider the graded vector bundle of (1,2)-tensors over 𝓜. We define an operator L_Q of degree +1 on the graded C^∞(𝓜)-module of (1,2)-tensors by the Lie derivative along the homological vector field Q:(L_Q F)(X,Y)=[Q,F(X,Y)]-(-1)^k F([Q,X],Y)-(-1)^k+|X| F(X,[Q,Y])for any (1,2)-tensor F of degree k and vector fields X,Y∈𝔛(𝓜). One can easily check that L_Q induces a dg vector bundle structure on the bundle of (1,2)-tensors over (𝓜,Q).Now, given an affine connection ∇ on 𝓜, consider the (1,2)-tensor At^∇ of degree +1, defined byAt^∇(X,Y) = [Q, ∇_XY] - ∇_[Q,X]Y -(-1)^|X|∇_X[Q,Y]for X,Y∈𝔛(𝓜). In the above setting, the following statements hold. * If the affine connection ∇ on 𝓜 is torsion-free, then At^∇∈Γ(S^2(T^∨𝓜)⊗ T𝓜). In other words,At^∇(X,Y)=(-1)^|X|·|Y| At^∇(Y,X) .
* The degree 1 element ∈⊗()^1 is a 1-cocycle.* The cohomology class [] does not depend on the choice of connection.The elementis called the Atiyah cocycle associated with the affine connection ∇. The cohomology class :=[]∈1(⊗()^∙,) is called the Atiyah class of the dg manifold (,Q) <cit.>. See also <cit.> and <cit.>.The Atiyah class of a dg manifold (,Q) is an obstruction to the existence of an affine connection ∇ oncompatible with the homological vector field Q. § THE ATIYAH CLASS AND CLEAN INTERSECTION As shown in <cit.>, the category of dg manifolds of positive amplitude is equivalent to the category of L_∞-bundles. In particular, any dg manifold of amplitude +1 is of the form (E[-1],ι_s) where E→ M is a smooth vector bundle and s∈E is a section of E. Explicitly, the graded algebra of smooth functions E[-1] on E[-1], equipped with a homological vector field ι_s, is formulated as a cochain complex of M-modules⋯→Λ^2 E^∨ι_sΛ^1E^∨ι_sM→ 0where ι_s is the interior product with s. The way we count the degree of E[-1]≅Λ^-∙E^∨ here is indicated by the integer ∙. This is a commonly used notation, especially in cochain complexes: de Rham complex, Chevalley–Eilenberg cochain complex, etc.However, in some cases, such as the equation in Lemma <ref> or Eq. (<ref>) below, the degree coming from ∙ and the degree coming from the graded space are mixed. Thus, one needs to be careful with counting the degree.To avoid confusion, one should view the graded space Λ^-∙E^∨ as a graded spaceS(E[-1])^∨^∙ and the section s should be viewed as s∈E[-1].Given a dg manifold (E[-1],ι_s), there are two natural embeddings: one is the section s:M→ E and the other is the zero section 0_E: M→ E. Then, one can consider their intersection Z:= ⊷ (s) ∩⊷ (0_E). It turns out that the Atiyah class of the dg manifold (E[-1],ι_s) measures how good the intersection Z of s and 0_E is. The following is our main theorem. Let M be a smooth manifold and let E be a vector bundle over M. Given a section s∈E, the Atiyah class of the dg manifold (E[-1],ι_s) vanishes if and only if the intersection of s and the zero section 0_E is clean. §.§ Clean intersectionRecall that given two embedded submanifolds X and Y in W, we say X and Y intersect cleanly if their intersection Z:=X∩ Y is a manifold and pZ=pX∩pY for all p∈ Z. Moreover, we say the embeddings f:X→ W and g:Y→ W intersect cleanly if their images intersect cleanly. In particular, let E be a vector bundle over M and let s:M→ E be a section. Consider the intersection of s and the zero section 0_E, or equivalently, the intersection of X:=⊷(s) and Y:=⊷(0_E) in W:=E.In this case, the intersection Z:=X∩ Y is naturally identified with the zero locus s^-1(0)⊂ M. Moreover, for each p∈ s^-1(0), the intersection s(p)X∩s(p)Y⊂s(p)E of the tangent spaces can be identified with a subspace of pM. For this, we need a map Ds_p:pM→ E_p for each p∈ s^-1(0).Note that if p∈ M satisfies s(p)=0∈ E_p, then the fibre s(p)E of the bundle E at s(p) ∈ E has a canonical splitting s(p)E≅pM⊕ E_p.For each p∈ s^-1(0), the map Ds_p:pM→ E_p is defined by Ds_p=_E_p∘s_p where s_p is the tangent map of s at p and _E_p:s(p)E→ E_p is the projection map:pMrs_p[bend left]rrDs_p s(p)E ≅pM⊕ E_pr_E_pE_p . For each p∈ s^-1(0), the vector space s(p)X∩s(p)Y in s(p)E is isomorphic to Ds_p. Moreover, if s^-1(0) is a manifold, then it satisfies p(s^-1(0))⊂ Ds_p.For each p∈ M, it is clear by definition that s(p)X= ⊷s_p and 0_E(p)Y = ⊷ (0_E)_p where s_p:pM→s(p)E and (0_E)_p:pM→0_E(p)E are the tangent maps of s and 0_E at p, respectively. 
Suppose that p∈ s^-1(0). Under the splitting s(p)E≅pM⊕ E_p, the image of each tangent maps at v∈pM iss_p(v)=(v,Ds_p(v)), (0_E)_p(v)=(v,0).Thus,s(p)X∩s(p)Y≅{(v,0)∈pM⊕ E_p≅s(p)E: Ds_p(v)=0}≅ Ds_p.Since (X∩ Y)⊂X∩Y, the last statement follows immediately.If the intersection of s and 0_E is clean, then p(s^-1(0)) +Ds_p =M. Consider a vector bundle E=^2×^3→^2=M with a section s:^2→^2×^3 defined by s(x,y)=(x,y; x,xy,x^2). Then s^-1(0)={(0,y)∈^2}≅ is a manifold. Moreover, simple computation shows thatDs_(0,y)= [ 1 0; y 0; 0 0 ]and thus, Ds_(0,y)=(0,y)(s^-1(0)). Therefore, the triple (E,M,s) gives a clean intersection.Even in the case when the connected submanifolds X and Y intersect cleanly, their intersection does not have to be connected, nor constant dimensional. Consider a vector bundle E=^2×^3 over M=^2. Let U_1={(x,y): x<6} and U_2={(x,y): x>4 } be open sets in M and let {ρ_1,ρ_2} be a partition of unity subordinate to {U_1, U_2}. For sections s_1(x,y)=(x,y; x,xy,x^2) and s_2(x,y)=(x,y;10-x,y,0) of E, we define s=ρ_1· s_1+ρ_2· s_2.Then one can check that the triple (E,M,s) gives a clean intersection, but s^-1(0)={(x,y)|x=0}∪{(10,0)} is a union of two connected manifolds in different dimensions.The following lemma will be useful.Let E be a vector bundle of rank m over M=^n. Suppose that a section s∈E satisfies * s(0)=0,* (Ds_0 )=r, * s(x)=0 for all x∈{0}^r×^n-r,then there exist a local chart ϕ:U→^n around 0∈ M and a local frame {e_1,⋯,e_m} of E around 0∈ M such that s∈E has a local expressions∘ϕ^-1|_ϕ(U)=x^1· e_1+⋯ +x^r· e_rwhere x^1,⋯, x^n are coordinate functions on ^n.Given any frame {ẽ_1, ⋯ẽ_m} of E over M, the section s∈E can be written as s=∑_i=1^m s^i·ẽ_i for some functions s^1,⋯, s^m∈M. Thus, we may identify the section s∈E with a map s=(s^1,⋯, s^m):^n→^m.The condition (1) and (2) implies that, by the inverse function theorem (or, by  <cit.>), there exists a chart ϕ:U→^n around 0∈ M such that s∘ϕ^-1|_ϕ(U)=(x^1,⋯,x^r,s̃^r+1,⋯, s̃^m) for some functions s̃^r+1,⋯, s̃^m, by rearranging the order of the frame {ẽ_1,⋯, ẽ_m}.Moreover, by the condition (3), the functions s̃^r+1,⋯, s̃^m are in the ideal generated by the first r coordinate functions x^1,⋯, x^r, (c.f. <cit.> or <cit.>). Indeed, we may assume U is convex, and if we denote 𝐱=(x_1,⋯,x_r) and 𝐲=(x_r+1,⋯, x_n), then by the fundamental theorem of calculus, we haves̃^i(𝐱,𝐲) =∫_0^1d/dts̃^i(t·𝐱,𝐲) dt =∑_j=1^r x_j·( ∫_0^1∂s̃^i/∂ x^j(t·𝐱,𝐲) dt) = (∑_j=1^r x^j· g_j^i)(𝐱,𝐲)for all i=r+1,⋯, m, by setting g_j^i(𝐱,𝐲)= ∫_0^1∂s̃^i/∂ x^j(t·𝐱,𝐲) dt.Finally, we obtain our desired expression of s∘ϕ^-1|_ϕ(U) by considering a new local frame {e_1,⋯,e_m} of E around 0∈ M, defined by e_k={[ ẽ_k + ∑_j=r+1^m g_k^j·ẽ_j, ifk≤ r; ẽ_k,ifk>r . ].This completes the proof. We now characterise the clean intersection of a section s:M→ E along the zero section 0_E:M→ E. Let E be a rank m vector bundle over an n-dimensional manifold M. A section s∈E intersects along the zero section 0_E cleanly if and only if for each p∈ s^-1(0), there existsa local chart ϕ:U→^n around p and a frame {e_1,⋯,e_m} of E|_U such thats∘ϕ^-1|_ϕ(U)=x^1· e_1+⋯+x^r· e_r where r is the rank of Ds_p:pM→ E_p. Suppose that the section s is characterised by the formula (<ref>) in a neighbourhood U=U_p of each p∈ s^-1(0). Then clearly, s and 0_E intersect cleanly.Conversely, suppose s and 0_E intersect cleanly. Then s^-1(0)⊂ M is a smooth manifold. By the tubular neighbourhood theorem, there exists an open neighbourhood V⊂ M of s^-1(0) such that V is diffeomorphic to the normal bundle of s^-1(0). 
In particular, for each p∈ s^-1(0), there exists an open neighbourhood U⊂ M of p and a diffeomorphism α:U→^d×^n-d such that α(p)=0 and α(s^-1(0)∩ U)=^d×{0}^n-d. Also, by Corollary <ref>, the rank of Ds_p=n-d. Observe that we have shown that the composition s∘α^-1:^n→ E|_U≅^n×^m satisfies all the conditions in Lemma <ref>. Thus, by Lemma <ref>, the proof is complete.§.§ Locality of the Atiyah class in positive amplitudeConsider a dg manifold (over ) arising from arising from complex manifold X whose dg algebra of functions is the Dolbeault complex of X.It is shown <cit.> that the Atiyah class of this dg manifold is identical to the Atiyah class of the complex manifold X — it is an obstruction to the existence of holomorphic connections <cit.>. Observe that if the base manifold X is ^n, then there always is a holomorphic connection. Thus, the Atiyah class of a dg manifold, in general, is not expected to be characterised by the local data.However, suppose we are given a dg manifold of positive amplitude. Then its algebra of functions is concentrated in non-positive degrees and moreover, its degree 0 component consists of smooth functions on the base manifold. Since the homological vector field increase the degree by 1, it acts on the smooth functions on the base manifold trivially. In this case, the Atiyah class is completely determined by local data. In particular, the Atiyah class of a dg manifold of positive amplitude vanishes if and only if it vanishes locally. To be more precise, consider a dg manifold (, Q) of positive amplitude whose base manifold is M. By <cit.>, the graded manifoldcorresponds to a negatively graded vector bundle E→ M. Given an open subset U⊂ M, denote the graded manifold with base U which corresponds to the negatively graded vector bundle E|_U→ U by _U. Naturally, _U is a graded submanifold of . Also, denote the restriction of the homological vector field Q to _U by Q_U. Thus, each open subset U⊂ M gives rise to a dg manifold (_U,Q_U) of positive amplitude.Let (,Q) be a dg manifold of positive amplitude, and let {U_i}_i∈ I be an open cover of the base manifold M.The Atiyah class α of (,Q) is completely determined by {α_i}_i∈ I where α_i is the Atiyah class of (_U_i,Q_U_i). In particular, α=0 if and only if α_i=0 for all i∈ I. The positive amplitude condition on (,Q) which distinguishes from arbitrary dg manifold is that, by degree reason, the homological vector field Q acts on the space of smooth functions on M trivially, i.e. Q(f)=0 for ∀ f∈_M.To prove Proposition <ref>, we use the notion of sheaves. The following lemma might be known but could not find a reference.Let _M be the sheaf of algebra of smooth functions on M and let (,d)=(^∙,d) be a sheaf of dg module over _M. Then the cohomology H^∙(, d) of the cochain complex (^∙, d) is naturally a sheaf of graded _M-modules. By the assumption, the differential d:^∙→^∙+1 is a morphism of sheaves of _M-modules. In particular, the differential d is _M-linear and is compatible with the restriction map. Thus, the cohomology group H^∙(,d) is naturally a presheaf of graded _M-module. 
For simplicity of notation, we use the symbol ℋ=ℋ^∙ to denote the presheaf of graded _M-module H^∙(,d).To show that ℋ is a sheaf, we need to show the locality axiom and gluing axiom.Fix an open subset U⊂ M and an open cover {U_i}_i∈ I of U.For each n∈, we have a map F^n: ^n(U)→∏_i∈ I^n(U_i) induced by the restriction map ^n(U)→^n(U_i) for each i∈ I.Moreover, if we denote the induced differential on ∏_i∈ I^n by ď=∏_i∈ Id|_U_i, it satisfies ď∘ F^n=F^n+1∘ d.Given a partition of unity {ρ_i}_i∈ I subordinate to {U_i}_i∈ I,consider a map G^n: ∏_i∈ I^n(U_i) →^n(U) defined byG^n: {s_i}_i∈ I↦∑_i∈ Iρ_i· s_iwhere s_i∈^n(U_i). Note that for each fixed i∈ I, we have ρ_i|_U_j≡ 0 for all but finitely many j∈ I. Thus, for each p∈ M, there exists an open neighbourhood U such thatd(s)|_U=d|_U(s|_U) =d|_U(∑_i∈ I(ρ_i|_U· s_i|_U_i∩ U))=∑_i∈ I d|_U(ρ_i|_U· s_i|_U_i∩ U)=∑_i∈ Iρ_i|_U· d|_U_i∩ U(s_i|_U_i∩ U)= (∑_i∈ Iρ_i· d|_U_i(s_i)) |_Uby writings=∑_i∈ Iρ_i· s_i.By the locality axiom, it satisfiesd(∑_i∈ Iρ_i· s_i)=∑_i∈ I d(ρ_i· s_i) =∑_i∈ Iρ_i· d|_U_i(s_i)and therefore we have d∘ G^n=G^n+1∘ď.Observe that, by construction, both F^∙ and G^∙ are chain maps and they satisfy G^n∘ F^n=𝕀_^n(U). It implies that on the level of cohomology, they induce mapsF_∗:H^n(,d)(U) ⇄∏_i∈ I H^n((U_i), d|_U_i) ≅ H^n(∏_i∈ I(U_i), ď) : G_∗for each n∈ such that G_∗∘ F_∗=𝕀_H^∙(,d)(U).In particular, if α∈ℋ(U) satisfies F_∗(α)=0 (i.e. 0=α|_U_i∈ℋ(U_i) for all i∈ I), then α=G_∗∘ F_∗(α)=0. This proves the locality axiom.Consider an element {α_i}_i∈ I∈∏_i∈ Iℋ^n such thatα_i|_U_i∩ U_j= α_j|_U_i∩ U_jfor all i,j∈ I. If we pick a representative s_i∈ d ∩^n(U_i) of α_i for each i, then Eq. (<ref>) is equivalent to s_i|_U_i∩ U_j= s_j|_U_i∩ U_j+ dw_ij for some w_ij∈^n-1(U_i∩ U_j). Now, consider s=∑_i∈ Iρ_i· s_i∈^n. By Eq. (<ref>), the element s is in d. Moreover, it satisfies s|_U_j=(∑_i∈ Iρ_i· s_i)|_U_j=∑_i∈ Iρ_i|_U_j· s_i|_U_i∩ U_j=∑_i∈ Iρ_i|_U_j· (s_j|_U_i∩ U_j+ dw_ij) =∑_i∈ Iρ_i|_U_j· s_j|_U_i∩ U_j + ∑_i∈ Iρ_i|_U_j· dw_ij=s_j+d(∑_i∈ Iρ_i|_U_j· w_ij) since ρ_i|_U_j≡ 0 for all but finitely many j∈ I. Therefore, the cohomology class α represented by s satisfies F_∗(α)={α_i}_i∈ I. This proves the gluing axiom, whence proves that ℋ=H^∙(,d) is a sheaf of graded _M-module. Let (,Q) be a dg manifold of positive amplitude with base manifold M.If (,Q_) is a dg vector bundle over (,Q), then its space of sections ;, equipped with the differential d:;→; induced by Q_, becomes a sheaf of dg _M-moduleU↦ (_U; _U, d_U)where _U is the restriction of the bundleto the graded submanifold _U⊂, and the operator d_U:_U;_U→_U;_U is the differential such that the diagram;rdd ;d _U;_Urd_U _U;_Uis commutative. Here, vertical maps send a section s:→ to a section s|__U:_U→_U. To be more precise with the notations above, the graded algebra _U of smooth functions on _U has a natural identification_U≅U_Mand the inclusion _U corresponds to the map →_U defined by f↦ 1 f. Moreover, we have a natural identification_U; _U≅_U_;and under this identification, we have d_U=Q_U 1 + 1 d.We have proved the following. Suppose that the dg manifold (,Q) with base M is of positive amplitude. If (,Q_) is a dg vector bundle over (,Q), then the assignmentU↦ H^∙(_U; _U, d_U)is a sheaf of _M-module where H^∙(,d) is the cohomology of the dg space of sections (;, d) of (, Q_). Now we are ready to prove Proposition <ref>. Consider the graded vector bundle =, equipped with homological vector field Q_ induced by the complete lift of Q. 
By Corollary <ref>, it induces a sheaf ℋ of graded _M-module, defined byU ↦ℋ(U)=H^∙(_U;_U_U_U, _U)whereis defined by (<ref>). Note that the Atiyah class α_(,Q) of (,Q) belongs to its global section ℋ(M).Given open sets V⊂ U ⊂ M, let ρ_U,V:ℋ(U)→ℋ(V) be the restriction map of the sheaf ℋ. Now, to prove Proposition <ref>, it suffices to proveρ_M,U(α_(,Q))=α_(_U,Q_U)where α_(,Q) and α_(_U,Q_U) are the Atiyah class of (,Q) and (_U,Q_U), respectively. Indeed, this means that for any two open subsets U,V ⊂ M, it satisfies ρ_U, U∩ V(α_(_U,Q_U))=ρ_V,U∩ V(α_(_V,Q_V)), and thus using the gluing axiom, the Atiyah class computed locally completely determines the Atiyah class of (,Q).Given an open subset U of M, denote the inclusion of graded manifold by i_U:_U. Observe that for any graded vector bundleover , there is a canonical identification i_U^∗≅_U where i_U^∗ is the pullback bundle ofover i_U. In particular, when =(), we have_U=(_U)≅_U_() and one can check that the restriction map ()→(_U) defined byX↦ X|_U=1 X∈_U_()is a morphism of Lie algebra.Moreover, if ∇:()×()→() is an affine connection on , then the pullback connection ∇^i_U:(_U)×(_U)→(_U), characterised by∇^i_U_X|_UY|_U = (∇_XY)|_Ufor X,Y∈(), is again an affine connection on _U. For the Atiyah cocycle ^∇ corresponds to an affine connection ∇ on , we have^∇(X,Y)|_U = [Q,∇_XY]|_U - (∇_[Q,X]Y)|_U -(-1)^X (∇_X[Q,Y])|_U=[Q_U,(∇_XY)|_U]-∇^i_U_[Q,X]|_UY|_U-(-1)^X|_U∇^i_U_X|_U[Q,Y]|_U=[Q_U, ∇^i_U_X|_U Y|_U] - ∇^i_U_[Q_U,X|_U]Y|_U -(-1)^X|_U∇^i_U_X|_U[Q_U,Y|_U]=^∇^i_U.Therefore, the induced restriction map ρ_M,U satisfies ρ_M,U(α_(,Q))=α_(_U,Q_U). This completes the proof of Proposition <ref>.§.§ The Atiyah class of dg manifolds of amplitude +1 We now focus on dg manifolds of amplitude +1. In this section, we will present the explicit description of the space of vector fields of dg manifolds of amplitude +1 and its Lie algebra structure. Also, we will characterise torsion-free affine connections on dg manifolds of amplitude +1.Then, we will finally describe the Atiyah class of dg manifolds of amplitude +1 in both global and local picture.For the remainder of this section, unless otherwise stated, π:E→ M is a (usual) vector bundle over M and E[-1] is always considered as a graded manifold of amplitude +1, rather than a graded vector bundle.§.§.§ Vector fields and affine connections Recall that given an ordinary smooth vector bundle π:E→ M, there exists a short exact sequence0→E; π^∗E→(E)π_∗E; π^∗M→ 0of E-modules.Analogously, for the graded manifold =E[-1] with base M, the algebra of smooth functions is ≅Λ^-∙E^∨ and one obtains a short exact sequence0→Λ^-∙E^∨_ME[-1]ι(E[-1])π_∗Λ^-∙E^∨_M(M)→ 0of graded Λ^-∙E^∨-modules. Here, the map ι is the Λ^-∙E^∨-linear extension of the interior product and the map π_∗ is induced by the morphism of dg manifolds E[-1]→ E π M.Note that a Λ^-∙E^∨-linear map τ: Λ^-∙E^∨_M(M) →(E[-1])satisfying π_∗∘τ = 𝕀 is called a horizontal lift.Let ∇^E be a M-connection on E. Extending its dual connection by derivation law, one obtains a M-connection on Λ^-∙E^∨, again denoted by ∇^E, by abuse of notation. The horizontal lift associated with ∇^E is the Λ^-∙E^∨-linear mapτ=τ^∇^E:Λ^-∙E^∨_M(M) →(E[-1]) ≅(Λ^-∙E^∨)characterised by τ(X): ξ↦∇^E_Xξfor X∈(M) and ξ∈Λ^-∙ E^∨. In particular, we have ∇^E_Xf=X(f) for X∈(M) and f∈M, thus satisfies π_∗∘τ(X)=X for each X∈(M). 
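As a quick sanity check of the derivation-law extension of the dual connection used above, one can verify symbolically that the pairing between E^∨ and E satisfies X⟨ξ,s⟩ = ⟨∇^E_Xξ,s⟩ + ⟨ξ,∇^E_Xs⟩. The following is a minimal sketch of such a check (our own illustration, for a trivial line bundle over ℝ with an ad hoc connection coefficient Γ), using the sympy library:

import sympy as sp

x = sp.symbols('x')
Gamma = sp.Function('Gamma')(x)      # ad hoc connection coefficient: nabla_{d/dx} e = Gamma*e
f = sp.Function('f')(x)              # component of a section  s  = f*e   of E
g = sp.Function('g')(x)              # component of a section xi = g*e^v  of E^v

pairing  = g * f                     # <xi, s>
nabla_s  = sp.diff(f, x) + Gamma*f   # component of nabla_{d/dx} s
nabla_xi = sp.diff(g, x) - Gamma*g   # component of the dual connection applied to xi

lhs = sp.diff(pairing, x)            # X <xi, s>
rhs = nabla_xi*f + g*nabla_s         # <nabla_X xi, s> + <xi, nabla_X s>
print(sp.simplify(lhs - rhs))        # 0: the derivation law holds

The same computation, with Γ replaced by a matrix of functions, verifies the law for bundles of higher rank.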
Upon a choice of a M-connection ∇^E on E, there exists a Λ^-∙E^∨-module isomorphismΛ^-∙E^∨_M(E[-1]⊕M) ≅(E[-1])characterised bya↦ι_a, and X↦∇^E_Xfor X∈(M) and a∈E[-1].We note that the vector field ∇^E_X∈(E[-1]) on the graded manifold E[-1] is a horizontal lift of the vector field X∈(M) on M. Lemma <ref> indicates that any homogeneous vector fields on E[-1] has degree at most +1. Moreover, by restricting to homogeneous component of degree +1 and 0, we have(E[-1])^+1≅E[-1], (E[-1])^0≅Λ^1E^∨ E[-1]⊕Mwhere the symbol (E[-1])^n denotes homogeneous component of degree n in (E[-1]).Now, the Lie bracket [,] on (E[-1]) can be computed easily. Fix a M-connection ∇^E on E and denote the horizontal lift of a vector field X by X̂:=∇^E_X. Then, for a,b∈E[-1] and X,Y∈(M), we have[ι_a,ι_b]=0,[X̂, ι_a]=ι_∇^E_Xa,[X̂,Ŷ]=[X,Y]_M-ι_R^∇^E(X,Y)where [,]_M denotes the Lie bracket on M and R^∇^E is the curvature of the connection ∇^E. Here the symbol ι_R^∇^E(X,Y) denotes the image of the element R^∇^E(X,Y)∈E^∨_ME[-1] under the map ι in (<ref>). Finally, we investigate torsion-free affine connections on the graded manifold E[-1]. A torsion-free affine connection ∇:(E[-1])×(E[-1]) →(E[-1]) is equivalent to a triple (∇^E, ∇^M, β) where* ∇^E:M×E→E is a M-connection on E;* ∇^M:M×M→M is a torsion-free affine connection on M;* an element β∈MM(E) satisfyingβ(X,Y)-β(Y,X)=- R^∇^E(X,Y). In particular, denoting X̂=∇^E_X∈(E[-1]) for each X∈(M), we have an explicit formula:∇_ι_aι_b=0,∇_ι_aX̂=0,∇_X̂ι_a=ι_∇^E_Xa,∇_X̂Ŷ=∇^M_XY+ι_β(X,Y),where ι_β(X,Y) is the image of β(X,Y)∈E≅E^∨_ME[-1] under the map ι in (<ref>). Given a triple (∇^E,∇^M,β), it is easy to show that the relation (<ref>) completely determines a torsion-free affine connection on E[-1].Conversely, suppose that a torsion-free affine connection ∇:(E[-1])×(E[-1])→(E[-1]) is given. If we denote an arbitrary horizontal lift of X∈(M) by X̂, by Lemma <ref> and by the torsion-freeness, the connection ∇ is completely determined by values of the form ∇_ι_aι_b, ∇_X̂ι_a and ∇_X̂Ŷ for a,b∈E[-1] and X,Y∈(M).First, by degree reason, for all a,b∈E[-1], we have ∇_ι_aι_b=0.Second, observe that the assignment (X,a)↦∇_X̂ι_a∈(E[-1])^+1≅E[-1] defines a M-connection ∇^E:M×E[-1]→E[-1] on E[-1]. Indeed, since X̂(f)=X(f) for f∈M, it satisfies ∇_X̂ι_f· a=X(f)·ι_a+f·∇_X̂ι_a. Thus, we have ∇_X̂ι_a=ι_∇^E_Xa for some ∇^E. Moreover, the connection ∇^E is chosen independently of the choice of horizontal lifts.Denote a different horizontal lift of X∈(M) by X̅. By the short exact sequence (<ref>), X̂-X̅∈⊷ι. Thus, because ∇_ι_bι_a=0 for all a,b∈E[-1], we have ∇_X̂-X̅ι_a=0 for all a∈E[-1]. This implies that ∇ induces ∇^E independently of the choice of horizontal lifts. Similarly, the assignment (X,Y)↦π_∗(∇_X̂Ŷ)∈Mdefines an affine connection ∇^M:M×M→M on M. Here π_∗:(E[-1])^0→M is induced by the map in the short exact sequence (<ref>). Similar argument shows that ∇^M is chosen independently of the choice of horizontal lifts. Now, given a torsion-free affine connection ∇ on E[-1], we have ∇^E and ∇^M, which does not depend on the choice of horizontal lift. Now, we choose the horizontal lift X̂ of X∈(M) to be the horizontal lift X̂=∇^E_X associated with ∇^E. Define ι_β(X,Y):=∇_X̂Ŷ-∇^M_XY.Finally, it is easy to show that the torsion-free condition implies that the connection ∇^M is torsion-free and that the relation β(X,Y)-β(Y,X)=-R^∇^E(X,Y) holds.This completes the proof.§.§.§ Atiyah class: global description Given a dg manifold (,Q), the Atiyah class is the 1st cohomology class of the cochain complex(;()^∙, ). 
In fact, by Proposition <ref>, it suffices to consider the subcomplex (; (S^2(), )^∙, ). When (,Q)=(E[-1],ι_s), by Lemma <ref>, a M-connection ∇^E on E induces a graded E[-1]-module isomorphism; (S^2(), )≅M;Λ^-∙E^∨ (S^2(E[-1]⊕M),E[-1]⊕M).It can be easily checked that * ; (S^2(), ) is concentrated in degree ≤ 1.* degree +1 component of ; (S^2(), ) is ; (S^2(), )^1≅M; (S^2(M), E[-1]). * degree 0 component of ; (S^2(), ) is; (S^2(), )^0≅ M; Λ^1 E^∨(S^2(M), E[-1])⊕M; (E[-1]M, E[-1])⊕M; (S^2(M), M).The differential:; (S^2(), )^0→; (S^2(), )^1decomposes into =d_1+d_2+d_3 where * The first map d_1: M; E^∨(S^2(M), E[-1])→M; (S^2(M), E[-1]) is obtained by applying ι_s:E^∨→M on the first component.* The second map d_2: M; (E[-1]M, E[-1])→M; (S^2(M), E[-1])is obtained by the following commutative diagramM; (E[-1]M, E[-1])rd_2rd[swap](∇^Es)^∗ M; (S^2(M), E[-1])M; (MM, E[-1])u[swap]^∗where (∇^Es)^∗ is the pre-composition of ∇^Es:M→ E[-1] on the first component of the domain and ^∗ is the pre-composition of : S^2(M)→MM defined by (X⊙ Y)=X Y + Y X.* The last map d_3:M; (S^2(M), M)→M; (S^2(M), E[-1]) is obtained by post-composition of ∇^Es:M→ E[-1].Here, we view ∇^Es:M→ E[-1] as a morphism of vector bundles defined by X↦∇^E_Xs. This describes the first cohomology group of (; (S^2(), ), ). In particular, if s is a nowhere vanishing section, then its first cohomology group vanishes.Let (,Q)=(E[-1],ι_s) be the dg manifold associated with a vector bundle E and its section s∈E. If s is a nowhere vanishing section, then the first cohomology group H^1(; (S^2(), ), ) vanishes. If s is nowhere vanishing, then there exists a dual section ξ∈E^∨ such that ι_s(ξ) = 1 ∈M. Then the aforementioned map d_1 is surjective and thus the mapin (<ref>) is surjective. In other words, all 1-cocycles are in the coboundary, and this proves the lemma.If s∈E is nowhere vanishing, then the Atiyah class of (E[-1],ι_s) vanishes.Next, we compute an Atiyah cocycle of the dg manifold (E[-1],ι_s).Consider a dg manifold (E[-1],ι_s). Given a torsion-free affine connection ∇ on the graded manifold E[-1], the associated Atiyah cocycle ^∇∈; S^2() is completely determined by^∇(X̂,Ŷ)=ι_∇^E_X∇^E_Ys-∇^E_∇^M_XYs + β(X,Y)sfor X,Y∈(M) where ∇ = ( ∇^E,∇^M, β) as in Lemma <ref>, and X̂ = ∇^E_X∈(E[-1]) denotes the horizontal lift of X∈(M).Since ^∇ is -linear in both argument, it suffices to compute ^∇(ι_a,ι_b), ^∇(ι_a,X̂), ^∇(X̂, Ŷ)for a,b∈E[-1] and X,Y∈(M). By degree reason, ^∇(ι_a,ι_b)=0 and^∇(ι_a,X̂)=0.By Lemma <ref>, we have^∇(X̂,Ŷ)=[Q,∇_X̂Ŷ]-∇_[Q,X̂]Ŷ - ∇_X̂[Q,Ŷ]=[ι_s, ∇^M_XY+ ι_β(X,Y)] + ∇_ι_∇^E_XsŶ + ∇_X̂(ι_∇^E_Ys)=ι_β(X,Y)s- ∇^E_∇^M_XYs +0 + ι_∇^E_X∇^E_Ys.This completes the proof. Under the identification (<ref>), the Atiyah cocycle ^∇ corresponds to a bundle map α:S^2(M) → E[-1] of degree +1 defined byα(X,Y)=∇^E_X∇^E_Ys - ∇^E_∇^M_XYs + β(X,Y)s. §.§.§ Atiyah class: local descriptionBy Proposition <ref>, in order to investigate the Atiyah class of a dg manifold of amplitude +1, it suffices to consider the local model. Thus, we will compute an Atiyah cocycle and the cohomology operator explicitly in a local coordinate.For the local computation, we assume that M=^n and E=^n×^m is a rank m-bundle over M. We denote the coordinate functions of M by {x^1,⋯, x^n}, the corresponding basis of vector fields by {∂/∂ x^1,⋯,∂/∂ x^n} and the corresponding basis of 1-forms by {dx^1,⋯,dx^n}.Also, a frame of E[-1] is denoted by {e_1, ⋯, e_m} and its dual frame on (E[-1])^∨ is denoted by {ξ^1,⋯, ξ^m}. 
Observe that the graded algebra of smooth functions E[-1]=Λ^-∙E^∨ on the graded manifold =E[-1] is generated by {x^1,⋯, x^n, ξ^1,⋯, ξ^m}. Then we may identify ∂/∂ x^i and e_j as an element of (E[-1]) satisfying∂/∂ x^i(x^k)=δ_i^k, ∂/∂ x^i(ξ^k)=0,e_j(x^k)=0,e_j(ξ^k)=δ_j^kwhere δ_i^j is the Kronecker delta. Thus the set {e_1,⋯,e_m, ∂/∂ x^1,⋯, ∂/∂ x^n} can be identified with a frame of the tangent bundle E[-1] over E[-1].More precisely, under the isomorphism in Lemma <ref>, the element ∂/∂ x^i∈(E[-1]) is, in fact, a horizontal lift of ∂/∂ x^i∈(M) with respect to the connection ∇^E:M×E→E satisfying ∇^E_∂/∂ x^ie_j=0 for all i,j. One can naturally construct an affine connection ∇^0 on E[-1] which is trivial with respect to this frame. That is, ∇^0:(E[-1])×(E[-1])→(E[-1]) satisfies∇^0_e_ie_j=0, ∇^0_∂/∂ x^ie_j=0, ∇^0_e_i∂/∂ x^j=0, ∇^0_∂/∂ x^i∂/∂ x^j=0for all possible i,j. In fact, by Lemma <ref> and Remark <ref>, one can show that ∇^0 corresponds to (∇^E, ∇^M, β) where ∇^E and ∇^M are trivial connections and β=0. Thus, ∇^0 is a torsion-free affine connection on E[-1].Suppose that a section s=s^1· e_1+⋯+ s^m· e_m∈E[-1] is given. By Proposition <ref>, the Atiyah cocycle ^∇^0 is completely determined by^∇^0(∂/∂ x^i,∂/∂ x^j)=∑_k∂^2s^k/∂ x^i∂ x^j· e_k. Under the identification ; (S^2(),)≅; S^2(), we may write ^∇^0=∑_i≤ j∑_k∂^2s^k/∂ x^i∂ x^j· dx^i⊙ dx^j e_k∈M; S^2(M) (E[-1])^∨. Now, the differentialrestricted to degree 0 has a decomposition=d_1+d_2+d_3:; S^2()^0→; S^2()^1as in Section <ref>, and they are explicitly written asd_1(ξ^l dx^i⊙ dx^j e_k)=s^l· dx^i⊙ dx^j e_kd_2(ξ^l dx^j e_k)=∑_r=1^n∂_rs^l· dx^r⊙ dx^j e_kd_3(dx^i⊙ dx^j∂_l)=∑_r=1^m∂_ls^r· dx^i⊙ dx^j e_rfor each l,i,j,k. § PROOF OF THEOREM <REF>The goal of this section is to prove Theorem <ref>, which states that the Atiyah class of a dg manifold (E[-1],ι_s) vanishes if and only if the section s:M→ E intersects with the zero section cleanly.By Proposition <ref>, it suffices to consider the Atiyah class on a neighbourhood of each p∈ M. Moreover, by Corollary <ref>, it suffices to consider the neighbourhood of each p∈ s^-1(0). Finally, by Lemma <ref>, the clean intersection of s along the zero section is characterised by Eq. (<ref>). Thus, Theorem <ref> is an immediate consequence of the following lemma. Let M be a smooth manifold of dimension n and let E be a rank m vector bundle over M. Let s∈E[-1] be a section. For the dg manifold (E[-1],ι_s), the following are equivalent. (L1) For each p∈ M with s(p)=0, there exists a neighbourhood U⊂ M of p such that the Atiyah class of (E|_U[-1],ι_s|_U) vanishes.(L2) For each p∈ M with s(p)=0, there exists a neighbourhood U ⊂ M of p and a local chart ϕ:U→^n such that s∘ϕ^-1|_ϕ(U) =x^1· e_1+⋯+x^r· e_rfor some frame {e_1,⋯,e_m} of E|_U where r is the rank of Ds_p as in (<ref>) and x^1,⋯,x^n are the coordinate functions on ^n.The rest of this paper is devoted to proving Lemma <ref>. Observe that the proof of (L2)⇒ (L1) is easy to prove. Indeed, first observe that the diffeomorphism ψ:=ϕ^-1|_ϕ(U) induces an isomorphism of dg manifolds (E|_U[-1],ι_s)≅ (ψ^∗E|_ϕ(U), ι_s∘ψ). Thus, it suffices to prove that the Atiyah class of (ψ^∗E|_ϕ(U), ι_s∘ψ) vanishes. By the formula (<ref>), the Atiyah cocycle associated with the trivial connection with respect to the local frame vanishes, and hence the Atiyah class vanishes.Before we begin the proof of (L1)⇒ (L2), we fix some conventions and notations. 
For the proof, without loss of generality, we may assume M=^n and E=^n×^m and the point p∈ M will always be identified with the origin 0∈^n. Thus, s(0)=0.In the proof, we will mostly use the notations from Section <ref>.The standard coordinate functions on M will be denoted by {x^1,⋯,x^n},the corresponding basis of vector fields will be denoted by {∂/∂ x^1, ⋯, ∂/∂ x^n} and that of differential forms will be denoted by {dx^1,⋯,dx^n}.For simplicity, we sometimes write ∂_i instead of ∂/∂ x^i. The standard frame of E[-1] will be denoted by {e_1,⋯,e_m} and its dual frame on (E[-1])^∨ will be denoted by {ξ^1,⋯,ξ^m}.For a section s∈E[-1], we write s=s^1· e_1+⋯ + s^m· e_m where s^i∈M for each i.Finally, the symbols d_1, d_2 and d_3 are used to mean the maps defined in (<ref>) - (<ref>).The proof of (L1)⇒ (L2) for arbitrary n= M and m= E is full of complicated and technical computations. To show the idea of the proof, we begin with two sample cases. §.§ Proof of (L1)⇒ (L2): sample cases§.§.§ Case 1M=1 and E=1.For simplicity, we write x=x^1, e=e_1 and ξ=ξ^1. Also, we write s=s(x)· e. In this case, observe that * the Atiyah cocycle with respect to the trivial connection is s”(x)· dx⊙ dx e,* the image of d_1 is -generated by s(x)· dx⊙ dx e,* and the image of d_2 and d_3 are both generated by s'(x)· dx⊙ dx e. Thus, if there exists a neighbourhood U of 0∈ M such that the Atiyah class of (E|_U[-1],ι_s|_U) vanishes then s”(x) is generated by s(x) and s'(x) in U. In other words, the condition (L1) implies that there exists an open neighbourhood U of 0∈ M such that for some f,g∈U, we have an equations”(x) = f(x)· s(x)+ g(x)· s'(x)for all x∈ U. Suppose s'(0)≠ 0. Then by Lemma <ref>, the function s:→ in a neighbourhood of 0 can be written as s(x)=x, up to a local diffeomorphism. Thus, we may assume that s(x)=x. Since s”(x)=0, by setting f(x)=g(x)=0, the Atiyah class vanishes.Suppose s'(0)=0. We claim that the Eq. (<ref>) holds for some f,g in a neighbourhood U of 0 only when s(x)≡ 0. In fact, by setting y(t)=(s(t),s'(t)) and F(t,a,b)=(b,f(t)a+g(t)b),the uniqueness of ODE (a.k.a. Cauchy–Lipschitz, Picard–Lindelöf) states that the initial value problem(s'(t),s”(t))=y'(t)=F(t,y(t))=(s'(t), f(t)s(t)+g(t)s'(t)),y(0)=(s(0),s'(0))=(0,0)has a unique solution near 0. Since s(x)≡ 0 is one of the solution, by the uniqueness, this is the only possibility. This completes the proof for Case 1.§.§.§ Case 2M=^2, E=^2×^2 and Ds_0=1.Denote the section s∈E[-1] with s(0)=0 by s=s^1· e_1+ s^2· e_2 = (s^1, s^2). The condition Ds_0=1 means that the 2× 2 matrix[ ∂ s^i/∂ x^j(0) ]_ij =[ ∂_1 s^1(0) ∂_2 s^1(0); ∂_1 s^2(0) ∂_2 s^2(0) ]has rank 1. By the inverse function theorem (or, by <cit.>), we may assume that s^1=x^1 and ∂_2s^2(0)=0.For simplicity, we use the notation ∂^2s^k/∂ x^i∂ x^j=∂_ijs^k.By (<ref>), the Atiyah cocycle with respect to the trivial connection is=∑_i≤ j∑_k∂_ijs^k· dx^i⊙ dx^j e_kand it can be written as a vector= [ ∂_11s^1; ∂_12s^1; ∂_22s^1; ∂_11s^2; ∂_12s^2; ∂_22s^2 ]with respect to the ordered basis (dx^1⊙ dx^1 e_1, dx^1⊙ dx^2 e_1, dx^2⊙ dx^2 e_1, dx^1⊙ dx^1 e_2, dx^1⊙ dx^2 e_2, dx^2⊙ dx^2 e_2).With the same ordered basis, the Eqs. 
(<ref>)-(<ref>) shows that the coboundary elements are in the image of the linear transformations induced by the left multiplication of the following 3 matrices d_1= [ s^1 0 0 0 0 0 s^2 0 0 0 0 0; 0 s^1 0 0 0 0 0 s^2 0 0 0 0; 0 0 s^1 0 0 0 0 0 s^2 0 0 0; 0 0 0 s^1 0 0 0 0 0 s^2 0 0; 0 0 0 0 s^1 0 0 0 0 0 s^2 0; 0 0 0 0 0 s^1 0 0 0 0 0 s^2 ] d_2= [ ∂_1s^1000 ∂_1s^2000; ∂_2s^1 ∂_1s^100 ∂_2s^2 ∂_1s^200;0 ∂_2s^1000 ∂_2s^200;00 ∂_1s^1000 ∂_1s^20;00 ∂_2s^1 ∂_1s^100 ∂_2s^2 ∂_1s^2;000 ∂_2s^1000 ∂_2s^2 ]andd_3= [ ∂_1s^1 ∂_2s^10000;00 ∂_1s^1 ∂_2s^100;0000 ∂_1s^1 ∂_2s^1; ∂_1s^2 ∂_2s^20000;00 ∂_1s^2 ∂_2s^200;0000 ∂_1s^2 ∂_2s^2 ] Since s^1=x^1, the Atiyah cocycle is= [ 0; 0; 0; ∂_11s^2; ∂_12s^2; ∂_22s^2 ]and the matrix d_2 isd_2= [1000 ∂_1s^2000;0100 ∂_2s^2 ∂_1s^200;00000 ∂_2s^200;001000 ∂_1s^20;000100 ∂_2s^2 ∂_1s^2;0000000 ∂_2s^2 ]by applying s^1=x^1. Because of the first 4 columns of d_2, the 1st, 2nd, 4th and 5th rows ofis always in the image of = d_1+d_2+d_3. Thus, to check whether the Atiyah cocycleis in the coboundary, it suffices to consider 3rd and 6th rows. In other words, it now simplifies to the question of whether[ 0; ∂_22s^2 ] is in the image of the linear transformation induced by the left multiplication of the matrix[x^10s^20 ∂_2s^2010;0x^10s^20 ∂_2s^2 ∂_1s^2 ∂_2s^2 ]where first 4 columns are from d_1, column 5 and 6 are from d_2 and the rest 2 columns are from d_3.Suppose that the Atiyah class vanishes. Then there exists a set of functions {f_1,⋯,f_8} such that[ 0; ∂_22s^2 ] =f_1[ x; 0 ] +f_2[ 0; x ] +f_3[ s^2; 0 ] +f_4[ 0; s^2 ] +f_5[ ∂_2s^2;0 ] +f_6[0; ∂_2s^2 ] +f_7[1; ∂_1s^2 ] +f_8[0; ∂_2s^2 ]holds. The 1st row of the Eq. (<ref>) is equivalent tof_7=-x^1· f_1-s^2· f_3-f_5·∂_2s^2.This means that f_7 is a M-linear combination of x^1, s^2 and ∂_2s^2. Together with the 2nd row of (<ref>), it implies that ∂_22s^2 is also a M-linear combination of x^1, s^2 and ∂_2s^2. Recall that we started with the conditions s(0)=0 and ∂_2s^2(0)=0. If the Atiyah class vanishes, then we have∂_22s^2(x_1,x_2) ∈⟨ x^1, s^2(x_1,x_2), ∂_2s^2(x_1,x_2) ⟩and in particular, when x_1=0, we have∂_22s^2(0,x_2) ∈⟨ s^2(0,x_2), ∂_2s^2(0,x_2) ⟩where the symbol ⟨ a,b ⟩ denotes the ideal generated by functions a,b ∈M. That is, there exist functions f,g∈ such that∂_22s^2(0,x_2)=f(x_2)· s^2(0,x_2) + g(x_2)·∂_2s^2(0,x_2)with the initial condition ∂_2s^2(0,0)=0 and s^2(0,0)=0. The uniqueness theorem of ODE implies that s^2(0,x_2)≡ 0 is the only solution.Then, by Lemma <ref>, the proof for Case 2 is completed.§.§ Proof of (L1)⇒ (L2): general caseM=^n, E=^n×^m and Ds_0=r.Finally we move on to the general case. Observe that the proof of Case 1 and Case 2 consists of 2 steps. Step 1. We pick up the essential information from d_1, d_2 and d_3, and reduce to the problem of ODE.Step 2. Use the uniqueness of ODE to complete the proof. Assume that M=^n, E=^n×^m and Ds_0=r. Again, by the inverse function theorem (or, by <cit.>), we may assume that the given section s=s^1· e_1+⋯+s^m· e_m satisfies s^1=x^1,⋯, s^r=x^r. By the condition on rank, we have ∂_is^j(0)=0 for all i,j>r. By (<ref>), the Atiyah cocycle associated with the trivial connection is= ∑_i ≤ j∑_k∂^2s^k/∂ x^i∂ x^j· dx^i⊙ dx^j e_k = ∑_i ≤ j∑_k∂_ijs^k· dx^i⊙ dx^j e_kand the formulas of =d_1+d_2+d_3 are described in Eqs. (<ref>)-(<ref>). 
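Before stating the key claim, we record a short symbolic sanity check of two ingredients used above: the rank and kernel computation of Ds in the clean-intersection Example with s(x,y)=(x,xy,x^2), and the local cocycle formula, whose components with respect to the trivial connection are the Hessians ∂_ijs^k. The following is a minimal sketch (our own illustration) using the sympy library:

import sympy as sp

x, y = sp.symbols('x y', real=True)
coords = [x, y]
s = [x, x*y, x**2]          # the section of E = R^2 x R^3 from the Example above

# (i) clean-intersection criterion:  ker Ds_p = T_p s^{-1}(0)  on the zero locus {x = 0}
J = sp.Matrix(s).jacobian(sp.Matrix(coords)).subs(x, 0)
print(J, J.rank())          # Matrix([[1, 0], [y, 0], [0, 0]]), rank 1
print(J.nullspace())        # spanned by (0, 1), i.e. T_(0,y) s^{-1}(0)

# (ii) Atiyah cocycle of the trivial connection: the Hessian components  d_i d_j s^k
print([sp.hessian(sk, coords) for sk in s])        # nonzero cocycle representative

# for a normal form  s = x^1 e_1  the cocycle itself vanishes identically,
# making the implication (L2) => (L1) manifest:
print([sp.hessian(sk, coords) for sk in [x, sp.S(0), sp.S(0)]])

Note that the nonvanishing cocycle representative in item (ii) is compatible with Theorem <ref>: the intersection in this Example is clean, so the Atiyah class vanishes and this particular representative is necessarily a coboundary.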
If the Atiyah class of the dg manifold (E[-1],ι_s) vanishes, then for each triple (i,j,k) with i,j,k >r, we have ∂_ijs^k∈⟨ s^1,⋯, s^m, ∂_is^r+1,⋯, ∂_is^m, ∂_js^r+1,⋯, ∂_js^m, ∂_r+1s^k,⋯, ∂_ns^k⟩where ⟨ a,b ⟩ denotes the ideal generated by functions a,b ∈M. Similarly to Case 2, we first investigate the image of d_2.Observe that if i≤ r, then we haved_2(ξ^i dx^j e_k)=dx^i⊙ dx^j e_k and similarly, if j≤ r, then we haved_2(ξ^j dx^i e_k)=dx^i⊙ dx^j e_k .This implies that the component ∂_ij s^k· dx^i⊙ dx^j e_k is always in the coboundary if either i≤ r or j≤ r. Thus, it suffices to consider when both i,j > r. Consider an element b=∑_α≤β∑_γc_αβ^γdx^α⊙ dx^β e_γ. Then, for a fixed triple (i,j,k) with i,j>r, the following must satisfy. * By (<ref>), if b∈⊷ d_1, then the function c_ij^k is generated by {s^1,⋯,s^m}.* By (<ref>), if b∈⊷ d_2, then the function c_ij^k is generated by {∂_is^r+1,⋯, ∂_is^m, ∂_js^r+1,⋯, ∂_js^m}.* By (<ref>), if b∈⊷ d_3, then the function c_ij^k is generated by {∂_1s^k,⋯, ∂_ns^k}.Thus, if the Atiyah class vanishes, then for each i,j,k with i,j>r,∂_ijs^k∈⟨ s^1,⋯, s^m, ∂_is^r+1,⋯, ∂_is^m, ∂_js^r+1,⋯, ∂_js^m, ∂_1s^k,⋯,∂_ns^k⟩ . Fix a triple (i,j,k') with i,j>r and k'≤ r. Since s^k'=x^k', we have ∂_ls^k'=0 if l≠ k'. The only coboundary of d_3 which has non-zero coefficient of dx^i⊙ dx^j e_k' comes fromd_3(dx^i⊙ dx^j∂_k')= dx^i⊙ dx^j e_k' + ∑_p>r∂_k's^p· dx^i⊙ dx^j e_p.Moreover, note that d_3 is the only map that changes the index k of e_k. Summing up, our situation can be pictured as a matrix=[ ⋮; ∂_ijs^a; ⋮; ∂_ijs^b; ⋮ ]↔[ ⋮⋮⋮⋮⋮⋮;⋯s^l0⋯ ∂_is^l0⋯ ∂_as^a(=1)0⋯; ⋮⋮⋮⋮⋮⋮;⋯0s^l⋯0 ∂_is^l⋯ ∂_as^b ∂_a+1s^b⋯; ⋮⋮⋮⋮⋮⋮]where a ≤ r < b, i,j>r and 1≤ l ≤ m. In particular, if the Atiyah class vanishes, then LHS is the image of the linear transformation induced by the left multiplication of the matrix on the RHS — compare with (<ref>).By the hypothesis, the Atiyah cocyclemust be in the image of =d_1+d_2+d_3. In particular, for each fixed (i,j,k') with i,j>r and k'≤ r,0=∂_ijs^k'=∑_p=1^mf^k'_ij,p· s^p+ ∑_q=r+1^m(g^k'_ij,q·∂_is^q+g̃^k'_ij,q·∂_js^q) + h^k',k'_ij·∂_k's^k'for some f^k'_ij,p,g^k'_ij,q,g̃^k'_ij,q,h^k',k'_ij∈M.Since ∂_k's^k'=1 for k'≤ r, we may conclude thath^k',k'_ij∈⟨ s^1,⋯, s^m, ∂_is^r+1,⋯, ∂_is^m, ∂_js^r+1,⋯, ∂_js^m⟩if k'≤ r. Similarly, for each fixed (i,j,k) with i,j>r and k>r, ∂_ijs^k=∑_p=1^mf_ij,p^k· s^p+ ∑_q=r+1^m(g_ij,q^k·∂_is^q+g̃_ij,q^k·∂_js^q) + ∑_t=1^nh_ij^k,t·∂_ts^kfor some f_ij,p^k,g_ij,q^k,g̃_ij,q^k,h_ij^k,t∈M. Moreover, by the condition (<ref>), it must satisfy h_ij^k,t=h_ij^t,t for all t≤ r. Because of (<ref>), we conclude that∂_ijs^k∈⟨ s^1,⋯, s^m, ∂_is^r+1,⋯, ∂_is^m, ∂_js^r+1,⋯, ∂_js^m, ∂_r+1s^k,⋯, ∂_ns^k⟩ .This proves the claim.The above claim is equivalent to the following: if the Atiyah class of (E[-1],ι_s) vanishes, then for each (i,j,k) with i,j,k>r, it satisfies∂_ijs^k=∑_p=1^mF_ij,p^k· s^p+ ∑_q=r+1^m(G_ij,q^k·∂_is^q+G̃_ij,q^k·∂_js^q) + ∑_t=r+1^nH_ij^k,t·∂_ts^kfor some F_ij,p^k,G_ij,q^k,G̃_ij,q^k,H_ij^k,t∈M. By assumption, we have s^1=x^1,⋯,s^r=x^r. Let X∈{0}^r×^n-r⊂^n. Then the condition (<ref>) induces that for each (i,j,k) with i,j,k>r,∂_ijs^k(X)=∑_p=r+1^mF_ij,p^k(X)· s^p(X)+ ∑_q=r+1^m(G_ij,q^k(X)·∂_is^q(X)+G̃_ij,q^k(X)·∂_js^q(X)) + ∑_t=r+1^nH_ij^k,t(X) ·∂_ts^k(X)for some F_ij,p^k,G_ij,q^k,G̃_ij,q^k,H_ijk^k,t∈M. Moreover, by assumption, we haves^α(0)=0, ∂_βs^α(0)=0for all α, β >r.Suppose there exist functions F_ij,p^k,G_ij,q^k,G̃_ij,q^k,H_ij^k,t∈M with i,j,k,p,q,t>r. If Eq. (<ref>) and Eq. (<ref>) holds for all i,j,k,α,β>r, then s^k|_{0}^r×^n-r≡ 0 for all k>r. 
Note first that all indices are larger than r. Thus, it suffices to consider when r=0. The claim follows from the uniqueness theorem of ODE.Assume that there exist ν=(v^1,⋯, v^n)∈^n such that s^k(ν)≠ 0 for some k. Then, consider y_ν:→^m+nm defined byy_ν(t)=(s^1(t·ν),⋯, s^m(t·ν), ∂_1s^1(t·ν),⋯, ∂_1s^m(t·ν), ∂_2s^1(t·ν),⋯, ∂_ns^m(t·ν))and a smooth function F_ν:^1+m+nm→^m+nm defined byℱ_ν(t,a_1,⋯, a_m, b_11,⋯,b_1m, b_21,⋯, b_nm)=(c_1,⋯,c_m,d_11,⋯, d_1m,d_21,⋯,d_nm)where c_α=∑_iv^i· b_iαand d_αβ=∑_iv_i·( ∑_p(F_iα,p^β· a_p) + ∑_q(G_iα,q^βb_iq+G̃_iα,q^βb_α q)+∑_t(H_iα^β, t· b_tβ)).Then y_ν'(t)=ℱ_ν(t,y_ν(t)) is exactly Eq. (<ref>) when X=t·ν∈^n, and Eq. (<ref>) is the initial condition y_ν(0)=0. Thus, by the uniqueness of ODE, we have a unique solution y_ν(t)≡ 0. This contradicts to the assumption that s^k(ν)≠ 0. This completes the proof. Finally, by Lemma <ref>, the proof of the general case of (L1)⇒ (L2) is completed.§ ACKNOWLEDGEMENTThe author would like to thankZhuo Chen, Dongwook Choa, Dongnam Ko, Camille Laurant-Gengoux, Hsuan-Yi Liao, Mathieu Stiénon, Maosong Xiang and Ping Xu for their interest in this work and their helpful comments. Also, the author is grateful to National Center for Theoretical Sciences, Institut Henri Poincaré and Pennsylvania State University for their generous support and hospitality.
http://arxiv.org/abs/2312.16622v1
{ "authors": [ "Seokbong Seol" ], "categories": [ "math.DG", "math-ph", "math.MP", "math.QA" ], "primary_category": "math.DG", "published": "20231227160811", "title": "The Atiyah class of dg manifolds of amplitude $+1$" }
^1Department of Physics, The Pennsylvania State University, University Park PA 16802, USA ^2School of Physics, Dalian University of Technology, Dalian 116024, China ^3Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Callinstraße 38, 30167 Hannover, Germany ^4School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China ^5Leibniz Universität Hannover, 30167 Hannover, Germany ^6Nikhef, Science Park 105, 1098 XG Amsterdam, The Netherlands ^7Institute for Gravitational and Subatomic Physics (GRASP), Utrecht University, Princetonplein 1, 3584 CC Utrecht, The Netherlands ^8Departament de Física, Universitat de les Illes Balears, IAC3-IEEC, E-07122 Palma, Spain

The ringdown (RD) phase of gravitational waves is of prime interest for testing general relativity (GR). The modelling of the linear quasi-normal modes (QNMs) within the Kerr spectrum, or with agnostic parameterized deviations from that GR spectrum, has become standard; however, specific attention has recently turned to calibrating the effects of nonlinear perturbations on the predominant quadrupolar l=2, m=2 mode. In this paper, we test the performance of a few nonlinear toy models and of the nonlinear inspiral-merger-ringdown (IMR) model IMRPhenomD at faithfully representing the RD regime, and we compare them with the results obtained using linear solutions given by sums of QNM tones. Using several quasi-circular, non-precessing numerical waveforms, we fit the dominant l=2, m=2 mode of the strain, and we assess the results in terms of both the Bayes factor and the inferred posterior distributions of the mass and spin of the final black hole (BH). We find that the nonlinear models can be comparable to or preferred over the linear QNM-only solutions when the analysis is performed from the peak of the strain, especially at the high signal-to-noise ratios expected for third-generation observatories. Since the calibration of the tones' relative amplitudes and phases in high-overtone models to the progenitor parameters is still missing, or may not even be achievable, we consider the use of nonlinear models more pertinent for performing confident tests of general relativity based on the RD regime starting from early times.

Linear vs. nonlinear modelling of black hole ringdowns Yi Qiu^1,2,3,4, Xisco Jiménez Forteza^3,5,6,7, Pierre Mourier^8,3,5 January 14, 2024 =======================================================================

§ INTRODUCTION Gravitational waves (GWs) provide an excellent arena to study gravity in its strongest regime. Since the breakthrough observation of the first event, GW150914 <cit.>, the GW field has experienced unprecedented growth, as a result of the initially unexpected but by now confirmed high number of GW events observed. The number of confirmed GW events from compact binary mergers has risen significantly, summing up to about 90 by the end of the last completed observing run (O3) <cit.> of the LIGO-Virgo-KAGRA (LVK) Collaboration <cit.>. While these observations have already allowed us to set extraordinary constraints on the general theory of gravity (see, e.g., <cit.>), the near-future prospects are even more promising, with about 200 new events anticipated by the end of the current LVK O4 run <cit.>. Observed binary black hole (BBH) mergers dominate the LVK event catalogue. A typical BBH GW event is described by three different regimes: the inspiral, the merger and the RD.
For the optimized search and characterization of the signals, the IMR gravitational waveform templates are used. IMR models are calibrated to numerical relativity (NR) solutions and provide us with the most accurate representation of the full waveform. For BBH mergers, current IMR waveform approximants are normally split into three different families: the IMRPhenom <cit.>, SEOBNR <cit.>, TEOB <cit.> and NRSurrogates (for a detailed description of the models see <cit.> and the references therein).The study of RD regime has drawn some attention in the last recent years. The RD describes the post-merger phase, in which the final, perturbed BH relaxes rapidly towards its stationary Kerr configuration, a phase which is associated with a characteristic late train of radiation <cit.>. Linear perturbation theory provides a simple description of this late radiation regime in the form of a countably infinite sum of damped sinusoids. Each damped sinusoid — or mode (lmn) — is at most described by four parameters, namely its frequency, damping time, amplitude and phase. The family of frequencies and damping times is known as the quasi-normal-mode (QNM) spectrum <cit.> and, by virtue of the BH no-hair theorem, is uniquely determined by the final BH mass and spin <cit.>. The set of amplitudes and phases is determined by the progenitor parameters and the initial orbital conditions <cit.>. The BH no-hair theorem is tested through two common scenarios: namely, by performing an IMR consistency test <cit.> and through BH spectroscopy <cit.>. BH spectroscopy typically targets the independent estimation of the spectrum of at least two separate RD modes — although GR deviations can be also measured with a single mode if one considers information from the progenitor BHs <cit.>. So far, several works from different groups dispute the successful/unsuccessful multiple-mode testing of the theorem with the loud-RD events GW150914 <cit.> and GW190521 <cit.>. In coming years, robust testing of the no-hair theorem <cit.> through BH spectroscopy may be achieved with loud events at LVK design sensitivity (see <cit.>) and become more precise at the LIGO A# sensitivity <cit.>, reaching even the percent accuracy with third-generation gravitational wave interferometers such as the Einstein Telescope (ET) <cit.>, Cosmic Explorer (CE) <cit.> and LISA <cit.>, thanks to the expected and promising gain in signal-to-noise ratios (SNRs).One of the major debates among the BH ringdown community pertains to the suitability of linear perturbation theory for describing the whole RD regime <cit.>. On the one hand, some previous works advocate that the results of linear perturbation theory can be applied from the peak of the gravitational strain onwards, which implies that the non-linear effects observed at the merger regime become quickly irrelevant <cit.>. This is assessed by calibrating to NR data the RD models with a large number of QNM tones — typically N=7 overtones in addition to the fundamental mode — to obtain an accurate recovery of the BH parameters, while fixing its QNM spectrum to GR. On the other hand, such claims have been disputed by other works by observing that a high instability of high-overtone models can arise due to i) a likely overfitting of the data and ii) neglecting the yet unmodelled non-linear contributions on the dominant lm = 22mode <cit.>. In particular, even the linear-order contributions arising from the branch cut[The QNMs of Kerr don't form a complete basis even at linear order. 
The time-domain Green's function might be split into three different terms, namely, the quasinormal mode solution, the branch cut and a high frequency response. In particular, the prompt emission is originated from the branch cut solution and it is expected to be important at times around the peak of the emission (see <cit.> for a detailed review).], such as the prompt response or the late tail effects <cit.>, are neglected by ringdown models solely based on QNMs. Moreover, <cit.> have found clear evidences of quadratic contributions in higher harmonics of the RD (specifically, quadratic contributions to the lm = 44mode sourced by the first-order 22-mode perturbations), which provide more accurate and more stable models than the linear models for these modes. Separately, similar conclusions are obtained by studying the shear at the horizon in head-on BH collisions <cit.>. Unfortunately, an analogous but conclusive analysis for the quadrupolar and dominant 22 mode in quasi-circular mergers is still absent.In this work we have tested the performance of linear and non-linear RD models, by fitting the post-peak regime of NR waveforms from the SXS and the (associated) Ext-CCE NR catalogues <cit.>. The SXS NR waveforms are extracted at finite radii and then extrapolated to future null infinity. The Ext-CCEwaveforms use the Cauchy characteristic extraction procedure, thus reducing significantly the gauge dependence of the waveforms obtained at null infinity. We consider the following RD models to fit the data: i) and ii) two RD models described by linear perturbation theory with a variable number of tones, with or without degrees of freedom allowing for restricted deviations from the GR QNM spectrum; iii) the RD sector of the non-linear IMRPhenomD model; and iv) a non-linear RD toy model that uses the linear solution but modifies it slightly to add a non-linear qualitative behaviour at early times. Those models are described in Sec. <ref>. In Sec. <ref> and Sec. <ref> we introduce the Bayesian framework and other statistical tools used to perform parameter inference and to assess the physical reliability of the models. In Sec. <ref>, we perform Bayesian parameter inference on a set of zero-noise-realization NR signal injections for each of the models described in Sec. <ref>. Finally we conclude about the accuracy and suitability of each model at describing the RD regime in Sec. <ref>.§ RD MODELS§.§ QNM overtone models At late enough times, the RD regime can be modelled via the Teukolsky equation <cit.>, which describes linear perturbations off a Kerr background spacetime, and hence tells us how GWs propagate as s=-2 gravitational perturbations. This equation is typically solved by applying outgoing boundary conditions at null infinity and infalling boundary conditions at the BH horizon. The Teukolsky equation then becomes an eigenvalue problem whose solution is the countably infinite set of the complex QNMs of the final (Kerr) BH. In a time evolution, these modes take the form of exponentially damped sinusoids. Their complex frequencies ω_lmn=w_lmn-ι/τ_lmn, corresponding to poles of the Green function <cit.>, are solely determined by the remnant BH's mass M_f and spin a_f, in the absence of a BH charge. Here, Re[ω_lmn]=w_lmn are the so-called oscillation frequencies and -Im[ω_lmn]=1/τ_lmnare the damping rates (inverse of the damping times). These modes are labelled by three integers l, m, and n. 
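As an aside, these Kerr frequencies ω_lmn(M_f,a_f) are readily evaluated numerically. The following minimal sketch (our own illustration, assuming the documented interface of the open-source qnm Python package referenced later in this section) computes the complex frequencies of the fundamental (2,2,0) mode and of the first overtone:

import qnm

# qnm.download_data()   # optional: fetch the precomputed spin sequences once

a_f = 0.69   # dimensionless remnant spin

mode_220 = qnm.modes_cache(s=-2, l=2, m=2, n=0)
mode_221 = qnm.modes_cache(s=-2, l=2, m=2, n=1)

omega_220, _, _ = mode_220(a=a_f)    # complex M_f*omega for the fundamental mode
omega_221, _, _ = mode_221(a=a_f)    # ... and for the first overtone

for w in (omega_220, omega_221):
    print(w.real, -1.0 / w.imag)     # oscillation frequency w_22n and damping time
                                     # tau_22n, both in units of the remnant mass M_f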
Here l=2,3,… and m=-l,-l+1, …, l-1, l denote the two angular indices of the spheroidal harmonics decomposition. The complex strain at future null infinity h =h_+-ih_× (where h_+ and h_× denote the two polarization components measured in the detector frame) of the gravitational radiation is accordingly written as:h(t,θ,ϕ) = ∑_l,m h_lm (t) _-2𝒴_lm(θ,ϕ;a_f) ,where _-2𝒴_lm(θ,ϕ;a_f) are the spin-weighted spheroidal harmonics of spin weight s = -2, which depend upon the polar angle θ, the azimuthal angle ϕ, and the final spin a_f. The third index, n=0,1,2,…, labels the tones, in order of decreasing damping times τ_lmn for any given (l,m) harmonic. This convention sets the n=0 (fundamental) mode as the dominant one at late times while the n ≥ 1 mode (overtones) are shorter-lived. In addition, there are two branches of QNMs for each (l,m) harmonic, respectively with positive and negative w_lmn values <cit.>. The counter-rotating modes, with w_lmn < 0, are nevertheless, expected to have very small relative amplitudes in the dominant harmonics sourced by a quasi-circular merger <cit.>, and we shall not consider them further in this paper.The methods to calculate numerical values of QNM frequencies are present in the literature <cit.> for various situations, typically building upon Leaver's continued fraction method <cit.>. In our work, we mainly use the open-sourcePython package <cit.> to compute the required Kerr QNM frequencies as functions of the remnant's mass and spin. Alternative open-source algorithms to compute the Kerr spectrum are also available in <cit.>.In this work, we focus on the dominant (l=2, m=2) spheroidal mode of the strain, h_22(t), both in terms of the NR data we are considering and of the models used to describe it. Note that the mode we select from NR simulations is in fact the (l=2, m=2) mode in a spherical harmonics decomposition, meaning that it includes some contributions from higher spheroidal harmonics: in aligned-spins cases like we consider, there is some mode mixing with (l ≥ 3, m=2) harmonics. These contributions are however expected to be negligible at all times due to the combination of much smaller amplitudes of these higher modes and small mixing coefficients, so that the spherical (2,2) mode considered is a close approximation to the spheroidal one <cit.>.The decomposition of h_22(t) into the (l=2, m=2) QNMs up to a given number N ≥ 0 of overtones defines the linear overtone model for the RD:OM_N(t) = ∑_n=0^N 𝒜_ne^- ι (t-t_r)ω_22n,where t_r is a reference time, usually chosen as a sufficiently late point for the system to reside in the linear regime <cit.>, and 𝒜_n=A_ne^ιφ_n is the complex amplitude of the (l=2,m=2,n) tone at t = t_r. These N+1 complex amplitudes, plus the final dimensionless mass and spin m_f, a_f which parametrize the QNM frequencies ω_22n(m_f,a_f), define the 2 N + 4 real-valued free parameters of the model.This model provides a priori a good description of the RD once sufficiently into the linear regime, and while asymptotic late-time non-QNM linear tail contributions are still negligible[Such a tail contribution, with a non-oscillatory power-law decay, is indeed expected as an additional solution to the Teukolsky equation since QNMs do not form a complete solution basis <cit.>. Its amplitude is nevertheless small enough that this term only appears at very late times and is typically not visible within the post-merger time range of quasi-circular NR simulations such as the ones we consider <cit.>. 
Such tail effects have only been observed very recently in eccentric mergers from NR, where they were expected to be enhanced compared to quasi-circular cases <cit.>.]. This model has been argued to be potentially applicable early on in the RD, up to the peak of the strain's amplitude <cit.>, for a sufficiently large N. We shall however also consider several alternative models including nonlinear terms, which may better capture the behaviour of the strain close to merger. §.§ Phenomenological nonlinear toy models The first model with deviations to the linearized GR sum of QNMs that we consider, is based on the parameterized QNM models popular in spectroscopic studies, which allow for deviations of the QNMs to the Kerr spectrum. To avoid the very large parameter space and reduce the possible overfittings of models with multiple overtones that can each independently deviate from GR, we consider the restricted case where only the highest tone of the model is modified. This defines the `highest-tone perturbation models' HTPM:HTPM_N(t) ≡OM_N-1(t) + 𝒜_Ne^- ι (t-t_r) w_22N (1+α_N)e^- (t-t_r) / (τ_22N (1+β_N)),where α_N and β_N are respectively the oscillation frequency and damping time perturbation parameters. Their values measure the deviations of the highest included tone from the Kerr spectrum, and the QNM solution from GR would be recovered for α_N=β_N=0. HTPM_1 corresponds to the most widely used QNM model in overtone-based spectroscopy (e.g. <cit.>), including the 220 and 221 modes with possible frequency and damping times deviations to the Kerr values for the first overtone. HTPM_≥ 2 is a restricted extension of this model to higher numbers of overtones, where only the last tone is allowed to deviate from the GR spectrum; while HTPM_0 is of very limited interest due to the full degeneracy between the free final mass and spin and the deviation parameters on the complex frequency of the only mode present (n=0). Note that HTPM_N depends in total on 2 N + 6 real-valued free parameters (i.e., the same number as OM_N+1): the final mass and spin (M_f, a_f); the perturbation parameters α_N and β_N; and the complex amplitudes of the N+1 tones included.The other phenomenological toy model considered in this work amounts to a sum of QNMs with a nonlinear transformation of the time coordinate. The time parameter is shifted by an exponentially decaying term, so that the model can exhibit this nonlinear modification at early times close to merger (where it leads to a slower variation of the phase and amplitude of the waveform), while asymptotically recovering the linear model at later times[Since the prompt response is expected to be higher close to the peak of the emission, the TCTM model is designed to account for this excess, although its fundamental form remains unknown.]. We define the corresponding `time coordinate transformation models' TCTM as follows:TCTM_N(t) ≡OM_N(t+A e^-t/τ) ,where A and τ are two constant parameters to be determined. Like in the HTPM case above, TCTM_N depends on 2 N + 6 real-valued free parameters: the final mass and spin; A; τ; and the N+1 complex tone amplitudes.We also considered several additional phenomenological toy models, introduced in <cit.> for the description of the decaying deformations of the final horizon formed in a merger. These models corresponds to overtone models modified e.g. 
by the addition of a non-oscillatory exponential decaying component (under the form OM_N(t) + B exp [-(t-t_r) / τ̃]), or of a power-law decaying term with or without oscillations (OM_N(t) + C (t-t_1)^-γexp[ι ω (t-t_r) ], or the same form with ω set to 0). These models are designed to recover the steep decay featured by the deformations of the common horizon shortly after its formation, in addition to the late-time QNM oscillations. Hence, these models were not expected to be necessarily also well-suited to the description of ringdown waveforms at null infinity, which rather have a slower change of amplitude near the merger than further into the RD regime. Indeed, the preliminary comparative tests of the various models discussed in this section (see below in Sec. <ref>) indicated a poor performance for these additional models for the description of waveform ringdowns, and we did not consider them further for the present work.§.§ IMR model One can also use full inspiral-merger-ringdown waveform models to describe the GW ringdown, by selecting only the post-merger part of these models. They are built upon non-linear ansätze, calibrated to NR waveforms, which depend solely on the progenitors' parameters. In general, they will require fewer input parameters than the overtone models — especially compared to those including a large number of overtones N_max. In this work, we use the nonprecessing "IMRPhenomD" waveform model. This model is calibrated to mass ratios up to q=1/18 and initial effective spins up to 0.98 <cit.>, and is well sufficient to cover the non-precessing, 22 mode–only waveforms we consider in this work[We have also tested two other waveform approximants for comparison, namely IMRPhenomPv2 and SEOBNRv4, in the case of analyses starting at the strain amplitude peak (t_0 = 0), and found similar or better performances than IMRPhenomD. We chose IMRPhenomD in part because of the fact that more state-of-the-art approximants were usually calibrated to a much larger number of NR waveforms <cit.>, and thus potentially exhibit artificial advantages in the model comparison.].We generate the IMRPhenomD using the PyCBC <cit.> time-domain approximant. We truncate the waveform at t = 0, aligning it with the peak of the strain for the 22 mode. This alignment corresponds to a frequency f_min∼ 170 Hz. We use a sampling rate δ t=1/2048 s, appropriate to resolve the high-frequency RD regime. We select only the 22 mode. The inclination angle is fixed to zero so that the complex strain is just h_+ - i h_× = h_22 (t) _-2𝒴_22(θ,ϕ). Note that we are comparing the IMRPhenomD model with NR waveforms, hence, we must translate the IMR output to geometric units with G =c=M=1, which implies that the final mass referred in this work represents the fraction of energy radiated M_f/M = 1.With this setup, we are left with 4 free parameters in the model: mass ratio, χ_1^z, χ_2^z, and phase, where χ_1^z, χ_2^z denote the z component of the initial dimensionless spin of the progenitor BHs. Finally, the final mass M_f and final spin a_f are obtained from fits to NR in terms of the progenitor parameters <cit.>.§ MODEL COMPARISON §.§ Mismatch analyses While the bulk of our analysis is based on a full Monte-Carlo sampling parameter estimation for each model, we first performed a least-mean-squares fit-based preliminary comparison and selection among the models at hand. The modelling quality can be assessed in this case using the mismatch ℳ. 
This quantity, commonly used in GW astronomy to quantify the similarity between model and data <cit.>, is defined as:ℳ = 1 - ⟨ h_ NR|h_m⟩/√(⟨ h_ NR|h_ NR⟩⟨ h_m|h_m⟩)where we compare our model for the strain 22 mode, h_m(t), with the corresponding numerical data, h_ NR(t), using:⟨ f|g⟩ = Re∫_t_0^t_f f^*(t) g(t)dt .The limits of the integral, t_0 and t_f, mark the fit starting and ending times, respectively. ℳ varies between 0 and 1, with higher mismatches meaning that the model deviates more from the data. We can thus compare models by computing the mismatch of each of them (with the best-fit parameter values) to the data. A lower value of that mismatch is associated with a more faithful modelling — without, at this stage, accounting for possibly different numbers of free parameters. §.§ Bayesian frameworkApart from mismatches, another quantity of common use for comparing the fitting performance of different GW models is the Bayes factor. As opposed to the mismatch, the Bayes factor directly quantifies the relative evidence in favour of one model versus another given the data, and this comparison already includes an implicit penalty for additional parameters and potential overfitting, through the prior and parameter space volume. It applies within a Bayesian parameter inference framework for each model ℳ, where the posterior probability distribution of the parameters θ⃗ associated with the model, given the data d (here d= h_NR), is given by:p(θ⃗| d, ℳ)= p(θ⃗|ℳ) p(d |θ⃗, ℳ)/p(d |ℳ).Above, p(θ⃗ |ℳ) is the prior probability of the parameters θ⃗, p(d | θ⃗, ℳ) is the likelihood function, which represents the conditional probability of observing d given the model ℳ with parameters θ⃗, and 𝒵≡ p(d |ℳ) = ∫ p(d |θ⃗, ℳ) p(θ⃗|ℳ)dθ⃗ is the evidence associated with model ℳ. Two possible models ℳ_A and ℳ_B for the description of a given dataset d can then be compared by computing their relative Bayes factor, i.e., the ratio of evidences between them:ℬ_A B=p(d |ℳ_A)/p(d |ℳ_B).As per usual practice, and for convenience, we will be using the logarithmic Bayes factor log_10ℬ_A B, as well as the logarithmic evidence log_10𝒵 for any given model (the difference of logarithmic evidences between two models directly provides the logarithmic Bayes factor between them). Note that we use the base-10 log here. ℬ_A B > 1 (log_10ℬ_A B > 0) would support model ℳ_A over model ℳ_B, and vice versa; although a significant claim of preference of ℳ_A over ℳ_B would require log_10ℬ_A B≳ 1 <cit.>.§.§ Bias analyses Typically, both the best-fit mismatch and the Bayes factor (with respect to a reference model) for each model are only used to assess the fitting quality, without accounting for the physical accuracy of the values obtained for the model parameters. This accuracy can be measured separately via the combined recovery bias ϵ on the final mass M_f and dimensionless final spin a_f <cit.>:ϵ = √((M_f^fit-M_f^true/M)^2 + (a_f^fit-a_f^true)^2),where the final masses are normalized by the initial total mass M of the NR simulation. The true parameters M_f^true and a_f^true correspond to the values from the NR simulation, which are estimated from quasi-local definitions of the mass and spin on the apparent horizon <cit.>. The uncertainties of these final mass and spin local estimates from NR are typically on the order of 10^-5 to 10^-4 <cit.>.
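To make these two figures of merit concrete, the short Python sketch below (ours, purely illustrative; the damped sinusoids stand in for an actual NR h_22 and a model template) evaluates the mismatch defined above and the recovery bias ϵ:

import numpy as np

def overlap(f, g, dt):
    # Discrete version of <f|g> = Re ∫ f^*(t) g(t) dt over [t_0, t_f]
    return np.real(np.sum(np.conj(f) * g)) * dt

def mismatch(h_nr, h_m, dt):
    # M = 1 - <h_NR|h_m> / sqrt(<h_NR|h_NR> <h_m|h_m>)
    num = overlap(h_nr, h_m, dt)
    den = np.sqrt(overlap(h_nr, h_nr, dt) * overlap(h_m, h_m, dt))
    return 1.0 - num / den

def recovery_bias(Mf_fit, af_fit, Mf_true, af_true, M_total=1.0):
    # eps = sqrt[ ((Mf_fit - Mf_true)/M)^2 + (af_fit - af_true)^2 ]
    return np.sqrt(((Mf_fit - Mf_true) / M_total) ** 2
                   + (af_fit - af_true) ** 2)

# Toy example on a time grid in units of M:
dt = 0.1
t = np.arange(0.0, 150.0, dt)
h_nr = np.exp(-t / 12.0) * np.exp(-1j * 0.55 * t)   # stand-in for NR data
h_m  = np.exp(-t / 11.5) * np.exp(-1j * 0.56 * t)   # stand-in for a model
print("mismatch:", mismatch(h_nr, h_m, dt))
print("bias    :", recovery_bias(0.951, 0.690, 0.952, 0.692))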
§.§ Preliminary tests As a preliminary step to rapidly select relevant models and assess general trends, we fitted various models, with a range of numbers of tones, to the h_22 ringdown strain mode from the BBH:0305 binary BH simulation of the SXS catalogue <cit.>. The ringdown phase of the signal was selected by starting the analysis at the peak of the corresponding amplitude |h_22| (which we use to define t=0), i.e., by setting t_0 = 0, up to the end of the available data (t_f / M ∼ 150). For the sake of stability and efficiency of the fits, we followed the same algorithm as in <cit.> to obtain the best-fit parameters for each model. The two to four parameters of a given model which are not tone amplitudes and phases (that is, for instance, {M_f, a_f } for the overtone models, and { M_f, a_f, A, τ} for the TCTM models) are distributed on an adaptive grid. For each value of those parameters on the grid, the best-fit value for the remaining parameters of the model, i.e. the tone complex amplitudes, on which the model depends linearly, is obtained through the analytic minimization of the sum of squared residuals for a linear model. The mismatch between the corresponding best-fit waveform and the NR waveform is computed for each grid point, and the optimal value for the `nonlinear' parameters on the grid, such as M_f and a_f, is then determined as the value minimizing the mismatch. Note that minimizing over the mismatch or over the sum of squared residuals, for any given set of parameters, gives equivalent results in the limit of small mismatches (or residuals).In Fig. <ref>, we show the mismatch ℳ (left panel) and final mass/spin bias ϵ (right panel) of these best-fit solutions for the OM and the alternative TCTM and HTPM models, at various numbers of overtones. For each value of N = 1 … 7 on the x axis, we show the results for the OM_N, TCTM_N-1 and HTPM_N-1 models (plus OM_0 for N=0), for a direct comparison of models which have the same number of free parameters, i.e. 2 N + 4. We also considered the IMRPhenomD model, which has 4 parameters like OM_0; we represent its mismatch and bias as a reference magenta horizontal dashed line and shaded area on each panel. The estimated numerical errors on the mismatch and bias parameter are also shown as dark gray horizontal lines and shaded areas.These comparison plots show that the TCTM (at any N) and the HTPM (from HTPM_1 onwards) provide a comparable or (in most cases) better description, in terms of mismatch, of the ringdown 22 mode of SXS:BBH:0305 than the OMs with the same number of free parameters. On the other hand, these two classes of models typically recover the parameters of the final BH more poorly than the corresponding overtone model, with the notable exception of TCTM_1 and TCTM_2 (compared to OM_2,3); as for the overtone models, though, the bias does decrease in most cases when more tones are included in the model. Finally, the 4-parameter IMRPhenomD model provides a description of the NR data of a quality comparable to the 8-parameter OM_2 and HTPM_1 models (and better than the 6-parameter TCTM_0), with a much lower bias on the final mass and spin. On the other hand, the other phenomenological RD models, not shown here, that we considered in this preliminary study (e.g.
adding a non-oscillatory damped power-law or exponential term to a sum of QNM overtones) typically performed much more poorly in both mismatch and bias than the models shown, and are consequently not considered further in this work.We shall now focus on the models with low or moderate numbers of parameters which showed performance comparable to or better than that of the first few overtone models, i.e., OM_0 … 4, TCTM_0 … 2, HTPM_1,2, and IMRPhenomD, for a more thorough comparison in a Bayesian inference approach using nested sampling <cit.>.§ PARAMETER ESTIMATION For GW detectors, the frequency-domain likelihood function under stationary Gaussian noise is defined from the noise-weighted frequency-domain inner product (·,·) as follows <cit.>:p(d| θ⃗, ℳ) = exp[- 1/2( d(f)-m(θ⃗,f), d(f)-m(θ⃗,f)) ] ,where m(θ⃗,f) denotes the strain 22 mode from model ℳ with particular parameters θ⃗, evaluated at frequency f, and d is the sum of the GW strain and the noise realization <cit.> for the 22 mode. The inner product (·,·) itself is defined from the one-sided power spectral density S_n(f) of the detector's noise, as:(x , y) = 4×Re∫_0^∞x^*(f) y(f)/S_n(f) d f .In our case, the parameter estimation is performed on time-domain data, with the NR ringdown 22-mode strain as our data set d(t), considered as being injected in a zero-noise realization. We moreover consider here a flat noise spectrum within the relevant frequency range, i.e., we assume that the noise sensitivity curve can be approximated by its value at the frequency of the (220) mode <cit.>, S_n(f)≈ S_n(f_220) = const. This assumption allows us to directly relate this constant noise amplitude to the optimal SNR ρ of the data, with ρ^2 = (d(f),d(f)) = 2⟨ d(t)|d(t) ⟩ / S_n(f_220), and to convert more generally, via Parseval's theorem, the noise-weighted frequency-domain inner product (·,·) of Eq. (<ref>) into the time-domain scalar product of Eq. (<ref>):(x(f),y(f)) = 2⟨ x(t)|y(t) ⟩/S_n(f_220) = ρ^2⟨ x(t)|y(t) ⟩/⟨ d(t)|d(t) ⟩.The likelihood expression, Eq. (<ref>), can thus be re-expressed in the time domain as:p(d| θ⃗, ℳ) = exp[-ρ^2⟨ d(t)-m(θ⃗,t)|d(t)-m(θ⃗,t)⟩/2 ⟨ d(t)|d(t)⟩].We can therefore directly set the optimal SNR ρ in our parameter estimations through the likelihood function, which for a given data set d is equivalent to setting the constant noise amplitude within this approximation.To perform parameter estimation and both obtain posterior distributions and calculate the models' Bayesian evidences, we use the dynamical nested sampling method from the dynesty Python package <cit.>. One particular feature of this method is that it estimates the evidence and the posterior simultaneously.Throughout our tests, we used 2000 live points and a stopping criterion of Δ (ln𝒵)=0.1 for the nested samplings.§ RESULTS We start our initial inference analysis on the GW150914-like simulation, SXS:BBH:0305, employing the parameter estimation framework described above and injecting this waveform into white Gaussian noise, simulating an event observed by third-generation (3G) observatories with a signal-to-noise ratio of ρ = 100. SXS:BBH:0305 is consistent with a signal with mass ratio q=m_1/m_2=1.22 and effective dimensionless spin χ_eff=(χ_1 m_1+χ_2 m_2) /(m_1+m_2)=0.01. Our Bayesian inference is performed using the four families of RD models described in Section <ref>. Further details on the various prior choices for each RD model and their impact on the obtained posterior distributions are provided in Section <ref>. §.§ Bayes factor analysis for t_0=0 We initiate our PE analysis at t_0 = 0, i.e., coincident with the peak of the strain.
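As an illustration of the sampling setup just described, a minimal dynesty sketch might look as follows. It is ours and purely schematic: the template is a single damped sinusoid whose frequency and damping time are sampled directly (in our actual analyses the complex tone frequencies are instead fixed by M_f and a_f through the Kerr spectrum), and the toy data, prior ranges and variable names are placeholders:

import numpy as np
import dynesty

# Toy data: a damped sinusoid standing in for the NR h_22 ringdown.
dt = 0.1
t = np.arange(0.0, 150.0, dt)
d = np.exp(-t / 12.0) * np.exp(-1j * 0.55 * t)
rho = 100.0  # optimal SNR fixed at t_0 = 0

def inner(f, g):
    # Time-domain scalar product <f|g> = Re ∫ f^*(t) g(t) dt
    return np.real(np.sum(np.conj(f) * g)) * dt

def template(theta):
    A, phi, w, tau = theta
    return A * np.exp(1j * phi) * np.exp(-1j * w * t) * np.exp(-t / tau)

def log_likelihood(theta):
    # ln p(d|theta) = -rho^2 <d-m|d-m> / (2 <d|d>)  (flat-noise likelihood above)
    r = d - template(theta)
    return -0.5 * rho**2 * inner(r, r) / inner(d, d)

def prior_transform(u):
    # Unit cube -> flat priors (illustrative ranges, not the paper's).
    return np.array([2.0 * u[0],           # amplitude A
                     2.0 * np.pi * u[1],   # phase phi
                     0.2 + 0.8 * u[2],     # frequency w (in 1/M)
                     2.0 + 28.0 * u[3]])   # damping time tau (in M)

sampler = dynesty.DynamicNestedSampler(log_likelihood, prior_transform, ndim=4)
sampler.run_nested(nlive_init=2000, dlogz_init=0.1)
log10_Z = sampler.results.logz[-1] / np.log(10.0)  # log10 evidence

The evidence returned in this way, once computed for two competing templates on the same data, directly gives the log_10 Bayes factors discussed below.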
We first estimate the values of the evidence 𝒵 for each of the models considered here. The results of these runs are shown in Table <ref>. We can deduce from the results that the IMRPhenomD model yields the highest value of log_10𝒵, suggesting a better fit to the NR data compared to the other models. Moreover, among the non-linear models examined in this study, the second and third highest-ranking models in log evidence are the non-linear TCTM models. Specifically, TCTM_2 yields a log Bayes factor of log_10ℬ∼ 3, indicating that its evidence is approximately 𝒪(10^3) times larger than that of the overtone solutions OM_2 and OM_3. Notably, TCTM_2 achieves this superior performance while having the same number of parameters as the latter. We also observe that, among the linear solutions, log_10𝒵 saturates at OM_2 and takes a slightly lower value for OM_3. We verified that the log_10𝒵 value consistently decreases for OM_4 and OM_5, thus showing that the optimal performance of the OM models occurs at OM_2. This is compatible with the overtone-model results of <cit.> and — in the complementary context of deformations of the final horizon — of <cit.>. Finally, the non-GR model HTPM_1 provides a log_10𝒵 similar to that of OM_2. To ensure the robustness of these results and rule out potential influences from varying prior choices, we have examined and confirmed that adjusting the priors does not qualitatively alter these findings. The results for different prior choices are shown in Appendix <ref>. Therefore, from the Bayes-factor point of view, the nonlinear models are preferred for the waveform SXS:BBH:0305. In the fourth column of the same table, we show the mass/spin recovery bias ϵ as defined in Eq. (<ref>) for the ten models considered. The true values M_f^true,a_f^true are obtained from the NR metadata files, while M_f^fit,a_f^fit correspond to the maximum likelihood values obtained after sampling the likelihood distribution of Eq. (<ref>). The biases on the final mass and the final spin for the OM models improve gradually from OM_0 to OM_3, and start to degrade for OM_4 (consistently with the overtone-model analyses of <cit.>). Notice that this last result qualitatively differs from the analysis we have done based on the value of the evidence 𝒵. This is expected, since the computation of the evidence provides an extra penalty for the increase of the prior volume, which in this case is sourced by the large number of free parameters of the high-N_max OM models. Regarding the non-linear models, we observe that the best model in terms of ϵ is TCTM_2, with ϵ=0.0045. Moreover, we obtain that for the nonlinear IMRPhenomD model ϵ^IMRPhenomD <ϵ^OM_3, while the evidence 𝒵 for IMRPhenomD is larger.§.§ Mass and spin posterior distributions for t_0=0 We study the marginalised posterior distributions of the remnant properties for each of the models considered in this work. Notice here that the posterior distributions for IMRPhenomD are originally obtained in terms of q, χ_1z, χ_2z, which are parameters of the progenitor BHs. Thus, these are converted to the final state parameters using NR fits to the final BH state <cit.>.In Fig. <ref>, we show the final mass and final spin posterior distributions for the models OM_0-3 and IMRPhenomD (in magenta). The "+" symbols represent the maximum likelihood values for each of the models considered, which correspond to the values of ϵ shown in the fourth column of Table <ref>.
Notice that the posterior distributions of the final mass and the final spin are consistent with the true parameters for the models with OM_n ≥ 2, while they show large offsets for the models OM_n=0,1. Moreover, it is noteworthy that the 90% credible contours increase slightly with the overtone index, from n=0 to n=3. The broadening of the posterior contours might be sourced by the observed correlations among the tones (see Appendix B of <cit.>), which become especially relevant for tones with n≳ 1. Therefore, the precision at which one can measure the final mass and final spin for signals at ρ =100 using OMs reaches a maximum for n=2 and slightly degrades for n>2. In contrast, notice that the IMRPhenomD model provides i) lower values for the bias ϵ than the best OM solution, and ii) tighter mass and spin contours than all the other models considered. Therefore, we observe that the IMRPhenomD model provides a more accurate description of the signal than the OM models, whether one uses the Bayesian evidence as a ranking criterion or tests the consistency of the posterior distributions with respect to the true parameters at t_0. In Fig. <ref>, we then show the posterior distributions of the final mass and the final spin for the models OM_2, TCTM_1, HTPM_1 and IMRPhenomD for the same waveform at ρ=100. We compare the inferred posterior distributions of the optimal linear model OM_2 with those of all the other nonlinear models. Notice that the 90% credible contours of the nonlinear IMRPhenomD model still provide tighter constraints on the remnant parameters and a lower value for the bias ϵ. It is noteworthy that IMRPhenomD is characterized by only four parameters, distinguishing itself from all the other models under consideration, each of which involves eight parameters. Another interesting observation is that even though TCTM_1 has a better Bayes evidence than the corresponding OM_2 (i.e. the model with the same number of parameters), both its posterior distributions and its values of ϵ are still compatible with those of OM_2. Notably, the model exhibiting the least favorable performance is HTPM_1, displaying a significant bias (approximately 0.2) in relation to the true values. This bias arises from the model's flexibility in deviating from GR and from the mismodelling of the early-time features by the GR-only OM_1 model, resulting in a looser and biased constraint on the remnant mass and spin values. Significantly, the model HTPM_1 is presently employed for conducting GR tests in the analysis of ongoing events observed by the LVK collaboration; thus, the use of this model might be limited to low-ρ scenarios, in which the statistical errors dominate the systematic ones. To this aim, in Fig. <ref> we have quantified the impact of the systematic errors with respect to the statistical ones. In particular, we have estimated the ratio between the systematic error ϵ and the statistical one σ_ϵ, estimated at the 90% percentile, in terms of the SNR ρ. We note that the applicability of HTPM_1 appears to approach its limit around RD ρ≈ 30. At this point, the influence of systematic errors begins to overtake the statistical uncertainties in the inference of mass and spin values (combined in the variable ϵ). Yet, the prevailing statistical uncertainty evident in present events, driven by the noise sensitivity of the observatories, notably outweighs the parameter biases identified in this study.
However, as demonstrated in our study, the accurate determination of the final mass and spin becomes predominantly influenced by systematic errors for current OM models when confronted with the anticipated high-ρ scenarios characteristic of third-generation observatories.An analogous analysis for the waveform Ext-CCE:BBH:0002 is presented in Appendix <ref>.§.§ Dependence on the starting time t_0 Usually, there is a trade-off in truncating the waveform at different starting times. On the one hand, performing BH spectroscopy late in the RD leads to large statistical uncertainties due to the rapid decay of the SNR[The SNR ρ scales inversely with the parameter uncertainty σ_λ as ρ∼ 1/σ_λ. As a rule of thumb, ρ(t_0/M = 10)/ρ(t_0 = 0) ∼ e^-1≈ 0.37, which implies that the posterior distributions on the physical parameters at t_0/M =10 are about e times larger than the ones obtained at t_0 = 0.]. On the other hand, starting the spectroscopic analysis close to the peak amplitude of the strain results in a biased estimation of the parameters. This bias stems from the absence of modes in the OMs and from potentially ignoring nonlinear effects <cit.>.Here, we vary the fit starting time by truncating the waveform at different t_0/M, and we analyse the results in terms of the Bayes factor. We repeat the analysis for a set of starting times t_0/M = -5, -2.5, 0, 2.5, …, 20, where negative t_0/M include part of the waveform slightly before merger. The runs have been performed for the models OM_2, OM_3, TCTM_1, HTPM_1 and IMRPhenomD with reference (t_0 = 0) SNR[Note that the actual SNR also depends on the choice of the fit starting time t_0. Truncating and fitting the waveform from the peak can result in a larger ρ compared to fitting only the later part of the data. In order to make sure we consider the same event, we fix ρ=100 at t_0=0 and let the SNR ρ for t_0≠0 scale with the truncated waveform length.] ρ_t_0 = 0=100. In Fig. <ref> we show the log_10 Bayes factors for each model with respect to the OM_3 model via Eq. (<ref>). We have found that the two nonlinear models considered in this work, IMRPhenomD and TCTM_1, provide positive log_10 Bayes factors over OM_3, while the linear models OM_2 and HTPM_1 provide negative ones around the merger (for negative and low t_0/M values). In particular, the Bayes evidence values we have obtained strongly support the IMRPhenomD model over the OMs at early starting times (including before the waveform amplitude peak) and up to t_0/M=15. This suggests that the nonlinear perturbations or even the prompt response effects could still dominate a short time after the merger. However, OM_2 consistently exhibits a performance comparable to all other models from t_0/M=15 onward, as the truncated waveform transitions into the linear regime.§.§ Can we observe the high-overtone amplitudes? The confident statistical observation of a tone solely from the RD regime requires that the inferred value of its amplitude be incompatible with zero, at least within a given per-tone confidence level C_n. Here, we use the one-sided 90% confidence value C_n^90% to assert the confident observation[The posterior amplitudes, denoted as A_n and obtained from running PE, are typically constrained to be positive based on our chosen priors. This imposes a stringent requirement on the lower limit, specifically, that C_n^90% must be greater than zero.
In this context, the one-sided C_n^90% is determined by assuming a Gaussian symmetry around the peak value, making it approximately equivalent to the upper bound of the 90% confidence interval.] of a tone at a given SNR ρ for the NR waveform SXS:BBH:0305 <cit.>. In particular, we study the dependence of the magnitude A_n/C_n^90% for a set of injections with network SNR ρ∈[10,100], where A_n denotes the real-valued amplitude of the tones. An approximate observation of a tone would correspond to A_n/C_n^90%∼ 1. To estimate C_n^90% we use the framework described in Sec. <ref>, where we also use the best-fitting values of the amplitudes and phases obtained at t_0. In Fig. <ref> we show the quantity A_n/C_n^90% in terms of the SNR ρ for two of the OMs considered in this work: N_max=2 (top panel) and N_max=3 (bottom panel). The dashed vertical orange line corresponds to the post-peak SNR of GW150914 <cit.>. Notice that, for each tone amplitude, the SNR required to cross one is larger as the n index increases. This is because as n increases the corresponding damping time of each tone τ_n decreases, thus carrying a lower per-tone ρ. Specifically, ρ(A_3/C_3^90% =1) > ρ(A_2/C_2^90%=1) > ρ(A_1/C_1^90%= 1) > ρ(A_0/C_0^90%=1). Moreover, notice that ρ∼ 30 – at which A_2/C_2^90%=1 – is required for the full and confident observation of the N_max=2 model, while ρ∼ 80 – at which A_3/C_3^90%=1 – is expected if N_max=3. A RD with ρ∼ 30 might be observed with LIGO A#, while ρ∼ 80 is expected to occur only for 3G observatories such as ET and CE <cit.>. However, a significant concern with the high-order overtone models lies in the pronounced variation of the tone amplitudes when altering the number of tones, denoted as N_max, in the models <cit.>. For instance, the ratio between the amplitudes obtained from the fits of the N_max=2 and N_max=3 models to the NR data varies as A_n^N_max=3/A_n^N_max=2= { 1.04, 1.50, 3} for the tones n=0,1,2, respectively. The substantial variability, of order 𝒪(2-3), in the amplitude values for the tones n=1,2 poses a challenge in determining whether these values represent the system faithfully or are influenced by other fitting systematics, such as the absence of tones or the presence of nonlinearities. Conversely, the value of the fundamental tone amplitude A_0 remains almost constant regardless of the number of tones considered in this example. These amplitude variation results are consistent with, e.g., <cit.>. §.§ Analysis on different NR waveforms To assess the robustness of our findings, we conduct parameter estimation using a distinct set of NR waveforms sourced from the SXS catalog. These waveforms, namely SXS:BBH:0150, SXS:BBH:0300, and SXS:BBH:1221, cover a spectrum of mass ratios spanning 1, 3, 8.5 and effective spins of 0.2, 0, 0. We also append the waveform Ext-CCE:0002 from the Ext-CCE catalog to the list, with the detailed discussion presented in Appendix <ref>. Notice that these waveforms, as well as SXS:BBH:0305 considered so far, were not used to calibrate IMRPhenomD <cit.>, and thus do not bias our model comparison. In Table <ref> we show the log_10 Bayes evidences and biases ϵ for a selected set of models, i.e., IMRPhenomD, OM_2, OM_3, TCTM_1 and HTPM_1, for all NR waveforms truncated at a starting time t_0=0. We observe that IMRPhenomD consistently provides log_10 Bayes factors of approximately 10 over the OMs at ρ=100, which denotes a clear preference for the nonlinear IMRPhenomD model compared to the linear RD models.
In addition, the nonlinear model TCTM_1 emerges as the second-best solution based on the Bayes evidences, showing a considerable advantage over the OMs. In particular, notice that we do not observe significant differences in the values of log_10𝒵 for OM_2 and OM_3, indicating that OM_2 provides a sufficiently accurate solution at t_0=0, consistent with the results observed for SXS:BBH:0305, while the non-GR model HTPM_1 offers log-evidence values similar to those of the OM models.Regarding the bias analysis, we have found that the IMRPhenomD model recovers the minimum value of ϵ for the waveforms SXS:BBH:0150, SXS:BBH:0300, and SXS:BBH:1221. TCTM_1 appears as the second-best model, with ϵ values comparable to or better than those of OM_2 and OM_3, while HTPM_1 generally performs the poorest in recovering the true final parameters, as expected, similarly to what we observed in the SXS:BBH:0305 analysis. Therefore, for the five different NR waveforms tested here, we consistently observe a compelling preference for the nonlinear models compared to the OM models and the non-GR model.§ CONCLUSIONS In this work we have tested the performance of several RD models by fitting them to the 22 mode of NR waveforms. The models we have used to fit the NR data are divided into four categories: i) the non-linear RD regime of the IMRPhenomD approximant — which has been widely used in GW data analysis, ii) the RD model as described by linear QNMs — dubbed here OM, iii) a family of non-linear RD toy models, the TCTMs, which expand upon the linear models by including further qualitative non-linear contributions, and iv) the HTPM models, which are linear but allow for deviations of the QNM spectrum from GR. Our results are obtained on NR waveforms of different nature: four extracted at finite radii and extrapolated to null infinity — labelled as SXS:BBH:# — and one extracted through the Cauchy characteristic procedure, thus with lower expected errors than the extrapolated ones, labelled Ext-CCE:BBH:0002. We first analyse the performance of the models at fitting the 22 mode of the waveform SXS:BBH:0305, at a starting time t_0/M = 0 (corresponding to the peak of amplitude of the 22 mode). We obtain the minimum mismatch ℳ and the bias ϵ on the recovered physical parameters of the remnant BH for all the models considered and over a range of maximum numbers of tones included in the models. We observe that the non-linear TCTM models provide in general a lower mismatch than the overtone ones, even for N_max=7 — while the IMRPhenomD post-peak model has a match to the data comparable to the 8-parameter N_max = 2 OM model despite relying on only 4 free parameters. Regarding the final mass and spin recovery, we find that the non-linear models IMRPhenomD and TCTMs both show a similar accuracy to the OMs up to N_max∼ 3, while the OM solution outperforms them at N_max≳ 3. Next, we have also performed injections of the NR waveform SXS:BBH:0305 in zero-realization white Gaussian noise at SNR ρ = 100 and at different starting times t_0/M∈[-5,20], simulating the expected SNRs of third-generation gravitational wave observatories such as the Cosmic Explorer and the Einstein Telescope. We first note that the non-linear model IMRPhenomD provides tighter constraints on the final mass and spin parameters at t_0 = 0 than the OMs, even for N_max=3.
In particular, the N_max=2 and 3 OM models provide mass and spin contours compatible with, but broader than, those of IMRPhenomD, albeit with a significantly lower Bayes evidence, while the N_max=0,1 OM models are more biased, which is consistent with the results shown in <cit.>. Remarkably, there is currently no amplitude and phase overtone model calibrated to the BBH progenitor masses and spins for N_max>1 <cit.>, which contributes to the broader N_max=2,3 OM constraints. If such a model were to be developed, the additional uncertainty introduced by the calibration process might be expected to further broaden the amplitude and phase uncertainties, potentially compromising the accuracy compared to the IMRPhenomD model.Moreover, we have computed the log_10 Bayes factor of the OM, HTPM, IMRPhenomD and TCTM models with respect to OM_3 (i.e., the OM with N_max=3). We have found that, in terms of the Bayes factor at t_0=0, IMRPhenomD is the preferred model, the second-best ranked model being TCTM_1: log_10ℬ^IMRPhenomD_OM_3∼ 8 and log_10ℬ^IMRPhenomD_TCTM_1∼ 5, showing decisive (in the vocabulary of <cit.>) evidence towards the IMRPhenomD model. The preference for the nonlinear IMRPhenomD model remains consistent until t_0/M ∼ 15, at which point OM_2 exhibits a comparable evidence. It is also important to note that the non-linear TCTM_1 model consistently offers superior fits to the data compared to the OM models at negative/early starting times. However, as anticipated, this difference also diminishes at late fitting starting times. Next, we have estimated the SNR required for confidently observing a tone amplitude for the OM models, with N_max = 1, 2, 3. First, we have obtained that, at the 90% credible level, the ratio of bias to statistical uncertainty ϵ/σ_ϵ^90% exceeds one for the models OM_1 and HTPM_1 when ρ∼ 30. This implies that analyses of the RD of upcoming loud GW events might soon be dominated by the systematic errors if those models are used. For the nonlinear and the OM_n ≥ 2 models, we show that the systematic errors remain subdominant up to ρ∼ 150. Additionally, we observe that achieving a full observation and/or characterization of all the amplitude parameters of the linear models OM_n=2,3 would require an SNR of ρ∼ 30 and 80, respectively. However, the strong variability of their amplitudes poses reasonable doubts on the physical reliability of the models as compared to the full nonlinear solutions, which simply depend on the well-known progenitor parameters. Finally, we carried out parameter estimation with the same set of models on four additional waveforms from the SXS catalog, including one example from its Ext-CCE extension. We observe again a compelling preference for the nonlinear models compared to the OM and non-GR (HTPM) models regarding the evidence, as well as (in the case of the IMRPhenomD model) the recovery bias, showing the robustness of the trends observed in our more detailed SXS:BBH:0305 analysis. In summary, our findings indicate that, even in high-SNR scenarios, QNM-only models, including those with a substantial number of overtones, yield lower Bayes factors than IMR-based nonlinear models such as IMRPhenomD. This implies a reduced accuracy in fitting NR ringdown waveforms up to early times. Additionally, these models result in broader posterior distributions for the black hole final mass and spin. We observe that the accuracy of the OM models saturates at N_max = 2.
The loss in accuracy at N_max>2 could be induced by the expected non-linearities affecting the early post-peak phase, which, at the same time, may induce large instabilities in the amplitudes of the tones with n>1 <cit.>. For our toy nonlinear model TCTM_1, we observe posterior distributions similar to those of the OM models, together with a better Bayes factor at early times — intermediate between that of the OM models and that of the IMRPhenomD model. The IMRPhenomD model itself and, by extension, any state-of-the-art IMR model of the same nature and similar or higher accuracy, provides a simple, even more accurate solution that is, in addition, calibrated to the progenitor parameters. We have used five independent NR simulations over a range of mass ratios and aligned, anti-aligned or vanishing spins; for each of them, the IMRPhenomD model provides slightly tighter constraints on the mass and the spin and larger Bayes factors than the linear OM models. Tests with the IMRPhenomPv2 and SEOBNRv4 approximants provided the same qualitative results. An extensive exploration of other approximants and of further NR waveforms of the same and other catalogues will be provided elsewhere. Hence, we conclude that the utilization of nonlinear and well-established IMR models is more pertinent when inferring physical parameters from a RD signal. This is particularly relevant when the analysis commences at the peak of the strain and is applicable to SNRs consistent with third-generation observatories such as ET and CE. This paper was initiated as a result of an internship carried out virtually at the Max Planck Institute for Gravitational Physics, Hanover. We acknowledge the Max Planck Gesellschaft for support and we are grateful to the Atlas cluster computing team at MPG Hanover for their help. The authors are also thankful to Alex Nitz, Collin Capano, Sumit Kumar, and Yifan Wang for useful discussions, and to Juan Calderon Bustillo, Gregorio Carullo, Will Farr and Vasco Gennari for helpful comments on the draft.PM is supported by the Spanish Ministerio de Ciencia e Innovación (Fondos MRR) - Conselleria de Fons Europeus, Universitat i Cultura with funds from the European Union NextGenerationEU (PRTR-C17.I1) through the project `Tecnologías avanzadas para la exploración del universo y sus componentes' (ref. SINCO2022/6719 ; PI: Alicia Sintes, University of the Balearic Islands, Spain). PM's work was also supported by the Universitat de les Illes Balears (UIB); the Spanish Agencia Estatal de Investigación grants PID2022-138626NB-I00, PID2019-106416GB-I00, RED2022-134204-E, RED2022-134411-T, funded by MCIN/AEI/10.13039/501100011033; the MCIN with funding from the European Union NextGenerationEU/PRTR (PRTR-C17.I1); Comunitat Autonòma de les Illes Balears through the Direcció General de Recerca, Innovació I Transformació Digital with funds from the Tourist Stay Tax Law (PDR2020/11 - ITS2017-006), the Conselleria d'Economia, Hisenda i Innovació grant numbers SINCO2022/18146 and SINCO2022/6719, co-financed by the European Union and FEDER Operational Program 2021-2027 of the Balearic Islands; the “ERDF A way of making Europe”; and EU COST Action CA18108.§ ROBUSTNESS TESTS OF PRIOR CHOICESIn parameter estimation, the choice of prior range is always a crucial factor. In fact, even if the posterior distributions are rather similar, the Bayes evidence can still be different for different prior assumptions.
As shown in Table <ref>, in our case, since we want to compare IMRPhenomD with the OMs, the priors already differ in the first place, since one is an IMR model that requires the initial properties of the binary, while the other is only a RD model. For IMRPhenomD, the (1, 8) prior range on the mass ratio, together with the (-0.99, 0.99) priors on the spins, leads to a different prior range on the remnant mass after conversion. The new prior range is ∼ (0.85, 1) for the final mass, which is quite different from the typical prior range we use for the overtone models, i.e. (0.5, 1.3).To examine the robustness of our tests against different prior assumptions, we also perform the same sampling for OM_2 with two new priors on, in particular, the final mass. They are (0.7, 1.2) and (0.8, 1.1), respectively. In Fig. <ref>, the two new priors are denoted as "Prior2" and "Prior3", with "Prior1" being (0.5, 1.3). In this plot, we show the log_10ℬ between IMRPhenomD and OM_2 for the 3 different prior ranges and for the 5 different waveforms. To examine the robustness of the choice of prior ranges, we compare the 3 vertical values in each column of the plot, as they are computed from samplings of the same waveform. We find that the largest difference in the log_10 Bayes evidence is observed for SXS:BBH:0305, which shows a change of log_10ℬ∼ 1 for one of the new prior samplings compared to the typical prior sampling. However, this change is rather small compared to the absolute values of the Bayes factors, which have log_10ℬ>7 for all the priors. Moreover, we do not see any clear trend of improvement or reduction of the Bayes factors when the prior range shrinks, i.e., from "Prior1" to "Prior3". Therefore, the differences we observe are also possibly subject to the statistical errors of the samplings. In the other tests, the differences between the Bayes evidences are even smaller. We therefore conclude that our results are robust against the prior choice.§ POSTERIORS OF EXT-CCE WAVEFORM As another cross-check, we also present our bias analysis for the Cauchy characteristic Ext-CCE waveform from the additional SXS catalog, particularly in terms of the comparison of the mass/spin posterior distributions of the different models. The specific waveform we test is Ext-CCE:BBH:0002, which has a true final mass and dimensionless spin (m_f, a_f) = (0.946, 0.746). In Fig. <ref>, we first show the comparison of the final mass and spin posterior distributions for the sampling of OM_0-3 and IMRPhenomD. In the main text, we saw that the posterior distributions improve for successive OMs. Similarly, we also see here that the posterior distributions of the remnant mass and spin for the OMs improve significantly from OM_0 to OM_1, but then converge at OM_3. In particular, the ϵ values estimated for OM_2 and OM_3 are comparable, as the maximum likelihood values for both models almost overlap. The IMRPhenomD model here also shows a parameter estimation performance comparable to the OM_2,3 models. IMRPhenomD gives a more faithful recovery of the final mass, while it performs slightly worse in terms of the final spin, as we can see from the 1D marginalized distributions. Therefore, the results obtained for the Ext-CCE:BBH:0002 waveform are consistent with the results obtained for SXS:BBH:0305 presented in the main text.In Fig. <ref>, we then show the comparison of the final mass and spin posterior distributions for the sampling of OM_2, TCTM_1, HTPM_1 and IMRPhenomD assuming ρ=100. Again, this test compares the different types of models.
We can see from the 90% credible contours that the nonlinear IMRPhenomD model performs the best in both recovery accuracy and precision with only 4 free parameters, while all the other models have 8 parameters. However, unlike the conclusion in the main text, here the other nonlinear model, TCTM_1, has much worse posterior distributions compared to the corresponding OM_2 (i.e. the model with the same number of parameters). The model with the worst performance here is still HTPM_1, as it allows for deviations from GR, which then leads to a looser/biased constraint on the remnant mass and spin.
^1 Departamento de Física, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires, Argentina ^2 IFLP, CONICET - Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, C.C. 67, 1900 La Plata, Argentina ^3 Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC, E-46100 Burjassot (Valencia), Spain ^4 CONICET, Rivadavia 1917, 1033 Buenos Aires, Argentina We study the mass spectrum of light pseudoscalar and vector mesons in the presence of an external uniform magnetic field B⃗, considering the effects of the mixing with the axial vector meson sector. The analysis is performed within a two-flavor NJL-like model which includes isoscalar and isovector couplings together with a flavor-mixing 't Hooft-like term. The effect of the magnetic field on charged particles is taken into account by retaining the Schwinger phases carried by the quark propagators, and expanding the corresponding meson fields in proper Ritus-like bases. The spin-isospin and spin-flavor decomposition of meson mass states is also analyzed. For neutral pion masses it is shown that the mixing with axial vector mesons improves previous theoretical results, leading to a monotonically decreasing behavior with B that is in good qualitative agreement with LQCD calculations, both for the case of constant and of B-dependent couplings. Regarding charged pions, it is seen that the mixing softens the enhancement of their mass with B. As a consequence, the energy becomes lower than the one corresponding to a pointlike pion, improving the agreement with LQCD results. The agreement is also improved for the magnetic behavior of the lowest ρ^+ energy state, which does not vanish for the considered range of values of B — a fact that can be relevant in connection with the occurrence of meson condensation for strong magnetic fields.Masses of the magnetized pseudoscalar and vector mesons in an extended NJL model: the role of axial vector mesons M. Coppola^1, D. Gomez Dumm^2, S. Noguera^3 and N. N. Scoccola^1,4===================================================================================================================== § INTRODUCTION The effects caused by magnetic fields larger than |eB| ∼Λ_QCD^2 on the properties of strongly interacting matter have attracted a lot of attention over the last decades <cit.>. In part, this is motivated by the realization that such magnetic fields might play an important role in the study of the early Universe <cit.>, in the analysis of high-energy noncentral heavy ion collisions <cit.> and in the description of compact stellar objects like magnetars <cit.>. In addition to this phenomenological relevance, from the theoretical point of view, external magnetic fields can be used to probe QCD dynamics, allowing for a confrontation of theoretical results obtained through different approaches to nonperturbative QCD. In this sense, several interesting phenomena have been predicted to be induced by the presence of strong magnetic fields. They include the chiral magnetic effect <cit.>, the enhancement of the QCD quark-antiquark condensate (magnetic catalysis) <cit.>, the decrease of critical temperatures for chiral restoration and deconfinement QCD transitions (inverse magnetic catalysis) <cit.>, etc.In this context, understanding the way in which the properties of light hadrons are modified by the presence of an intense magnetic field becomes a very relevant task.
Clearly, this is a nontrivial problem, since first-principles theoretical calculations require dealing, in general, with QCD in a low-energy nonperturbative regime. As a consequence, the corresponding theoretical analyses have been carried out using a variety of approaches. The effect of intense external magnetic fields on π meson properties has been studied e.g. in the framework of Nambu-Jona-Lasinio (NJL)-like models <cit.>, quark-meson models <cit.>, chiral perturbation theory (ChPT) <cit.>, path integral Hamiltonians <cit.>, effective chiral confinement Lagrangians <cit.> and QCD sum rules <cit.>. In addition, several results for the π meson spectrum in the presence of background magnetic fields have been obtained from lattice QCD (LQCD) calculations <cit.>. Regarding the ρ meson sector, studies of magnetized ρ meson masses in the framework of effective models and LQCD can be found in Refs. <cit.> and Refs. <cit.>, respectively. The effect of an external magnetic field on nucleon masses has also been considered in several works <cit.>.In most of the existing model calculations of meson masses the mixing between states of different spin/isospin has been neglected. Although such mixing contributions are usually forbidden by isospin and/or angular momentum conservation, they can be nonzero (and may become important) in the presence of the external magnetic field. Effects of this kind have been studied recently by some of the authors of the present work, for both neutral <cit.> and charged mesons <cit.>. Those analyses have been performed in the framework of an extended NJL-like model in which, for simplicity, possible axial vector interactions have been neglected. The aim of the present work is to study how those previous results get modified when the presence of axial vector mesons is explicitly taken into account. In fact, due to symmetry reasons, in the context of the NJL model and its extensions <cit.> vector and axial vector interactions are expected to be considered on the same footing (see e.g. Refs. <cit.>). This, in turn, implies the existence of the so-called “π-a_1 mixing” even at vanishing external magnetic field. Such a mixing has to be properly taken into account in order to correctly identify the pion mass states. Thus, the inclusion of the axial interactions is expected to be particularly relevant for the analysis of the lowest meson masses.Regarding the explicit calculation, as shown in previous works <cit.>, one has to deal with the meson wavefunctions that arise as solutions of the equations of motion in the presence of the external magnetic field (which we assume to be static and uniform). In particular, in the case of charged mesons, it is seen that one-loop level calculations involve the presence of Schwinger phases that induce a breakdown of translational invariance in the quark propagators <cit.>. As a consequence, the corresponding meson polarization functions are not diagonal for the standard plane wave states; one should describe meson states in terms of wavefunctions characterized by a set of quantum numbers that include the Landau level ℓ. In addition, it is important to take care of the regularization of ultraviolet divergences, since the presence of the external magnetic field can lead to spurious results, such as unphysical oscillations of physical observables <cit.>.
As in previous works <cit.>, we use the so-called magnetic field independent regularization (MFIR) scheme <cit.>, which has been shown to be free from these oscillations; moreover, it is seen that within this scheme the results are less dependent on model parameters <cit.>. Concerning the effective coupling constants of the model, we consider both the case in which these parameters are kept constant and the case in which they show some explicit dependence on the external magnetic field. This last possibility, inspired by the magnetic screening of the strong coupling constant occurring for a large magnetic field <cit.>, has been previously explored in effective models <cit.> in order to reproduce the inverse magnetic catalysis effect observed in finite-temperature LQCD calculations.The paper is organized as follows. In Sec. <ref> we introduce the magnetized extended NJL-like Lagrangian to be used in our calculations, as well as the expressions of the relevant mean field quantities to be evaluated, such as quark masses and chiral condensates. In Secs. <ref> and <ref> we present the formalisms used to obtain neutral and charged meson masses, respectively, in the presence of the magnetic field. In Sec. <ref> we present and discuss our numerical results, while in Sec. <ref> we provide a summary of our work, together with our main conclusions. We also include several appendices to provide some technical details of our calculations.§ EFFECTIVE LAGRANGIAN AND MEAN FIELD QUANTITIES Let us start by considering the Lagrangian density for an extended NJL two-flavor model in the presence of an electromagnetic field. We have, in Minkowski space,L=ψ̅(x)(i /D-m_c)ψ(x)+g_s∑_a=0^3[(ψ̅(x) τ_a ψ(x))^2+(ψ̅(x) iγ_5 τ_a ψ(x))^2]- g_v [(ψ̅(x) γ_μτ⃗ ψ(x))^2+(ψ̅(x) γ_μ γ_5 τ⃗ ψ(x))^2]- g_0 (ψ̅(x) γ_μ ψ(x))^2- g_5 (ψ̅(x) γ_μ γ_5 ψ(x))^2+ 2g_D (d_++d_-) ,where ψ=(u d)^T, τ_a=(1,τ⃗), τ⃗ being the usual Pauli-matrix vector, and m_c is the current quark mass, which is assumed to be equal for u and d quarks. The model includes isoscalar/isovector vector and axial vector couplings, as well as a 't Hooft-like flavor-mixing term, where we have defined d_±= det[ψ̅(x)(1±γ_5)ψ(x)]. The interaction between the fermions and the electromagnetic field A_μ is driven by the covariant derivativeD_μ = ∂_μ+i Q̂𝒜_μ ,where Q̂= diag(Q_u,Q_d), with Q_u=2e/3 and Q_d=-e/3, e being the proton electric charge. A summary of the notation and conventions used throughout this work can be found in App. <ref>.We consider here the particular case in which one has a homogeneous stationary magnetic field B⃗ oriented along the axis 3, or z. Now, to write down the explicit form of 𝒜^μ one has to choose a specific gauge. Some commonly used gauges are the symmetric gauge (SG), in which A^μ(x)=(0,-B x^2/2,B x^1/2,0), the Landau gauge 1 (LG1), in which A^μ(x)=(0,-B x^2,0,0), and the Landau gauge 2 (LG2), in which A^μ(x)=(0,0,B x^1,0). In what follows we refer to them as “standard gauges”. To test the gauge independence of our results, all these gauges will be considered in our analysis.Since we are interested in studying meson properties, it is convenient to bosonize the fermionic theory, introducing scalar, pseudoscalar, vector and axial vector fields σ_b(x), π_b(x), ρ_b^μ(x), a_b^μ(x), with b=0,1,2,3, and integrating out the fermion fields.
The bosonized action can be written asS_bos= -i ln(i𝒟)-1/4g∫ d^4x [σ_0(x) σ_0(x)+π⃗(x)·π⃗(x)]- 1/4g(1-2α)∫ d^4x [σ⃗(x)·σ⃗(x)+π_0(x) π_0(x)]+ 1/4g_v∫ d^4x [ρ⃗_μ(x)·ρ⃗^ μ(x)+a⃗_μ(x)·a⃗^ μ(x)]+1/4g_0∫ d^4x ρ_0μ(x) ρ_0^μ(x)+1/4g_5∫ d^4x a_0μ(x) a_0^μ(x) ,withi𝒟_x,x' = δ^(4)(x-x') [i /D-m_c-τ_b(σ_b(x)+i γ_5 π_b(x)+γ_μ ρ_b^μ(x)+γ_μγ_5 a_b^μ(x))] ,where a direct product with an identity matrix in color space is understood. For convenience we have introduced the combinationsg =g_s+ g_D , α=g_D/(g_s+g_D) ,so that the flavor mixing in the scalar-pseudoscalar sector is regulated by the constant α. For α=0 quark flavors u and d get decoupled, while for α=0.5 one has maximum flavor mixing, as in the case of the standard version of the NJL model.We proceed by expanding the bosonized action in powers of the fluctuations of the bosonic fields around the corresponding mean field (MF) values. We assume that the fields σ_b(x) have nontrivial translational invariant MF values given by τ_b σ̅_b= diag(σ̅_u,σ̅_d), while the vacuum expectation values of the other bosonic fields are zero; thus, we write𝒟_x,x' = 𝒟_x,x'^ MF+δ𝒟_x,x' .The MF piece is diagonal in flavor space. One has𝒟_x,x'^ MF =diag(𝒟_x,x'^ MF, u , 𝒟_x,x'^ MF, d) ,where𝒟_x,x'^ MF, f = -i δ^(4)(x-x')(i/∂+Q_f B x^1 γ^2-M_f) ,with f=u,d. Here M_f=m_c+σ̅_f is the quark effective mass for each flavor f.The MF action per unit volume is given byS_bos^ MF/V^(4) =- [(1-α)(σ̅_u^2+ σ̅_d^2)-2ασ̅_uσ̅_d]/[8g(1-2α)]-iN_c/V^(4)∑_f=u,d∫ d^4x d^4x' tr_D ln(𝒮_x,x'^ MF, f)^-1 ,where tr_D stands for the trace over Dirac space, and 𝒮_x,x'^ MF, f=(i𝒟_x,x'^ MF, f)^-1 is the MF quark propagator in the presence of the magnetic field. Its explicit expression can be written as𝒮_x,y^ MF, f=e^iΦ_Q_f(x,y)∫d^4p/(2π)^4 e^-i p(x-y) S̅^f(p_∥,p_⊥) ,whereS̅^f(p_∥,p_⊥) = - i∫_0^∞dσ exp[-iσ(M_f^2-p_∥^2+p⃗_⊥^ 2 tan(σ B_f)/(σ B_f)-iϵ)] ×[(p_∥·γ_∥+M_f)(1-s_f γ^1γ^2tan(σ B_f))-p⃗_⊥·γ⃗_⊥/cos^2(σ B_f)] ,with B_f = |B Q_f| and s_f= sign(B Q_f). Here we have defined the “parallel” and “perpendicular” four-vectorsp_∥^μ=(p^0,0,0,p^3) ,p_⊥^μ=(0,p^1,p^2,0) ,and equivalent definitions have been used for γ_∥, γ_⊥. The function Φ_Q(x,y) in Eq. (<ref>) is the so-called Schwinger phase, which is shown to be a gauge dependent quantity. For the standard gauges one hasΦ_Q(x,y)=-QB/2(x^1y^2-y^1x^2) ,Φ_Q(x,y)=-QB/2(x^2+y^2)(x^1-y^1) ,Φ_Q(x,y)=QB/2(x^1+y^1)(x^2-y^2) . Let us consider the quark-antiquark condensates ϕ_f≡⟨ψ̅_fψ_f⟩. For each flavor f=u,d we haveϕ_f=iN_c∫d^4p/(2π)^4 tr_D S̅^f(p_∥,p_⊥) .The integral in this expression is divergent and has to be properly regularized. As stated in the Introduction, we use here the magnetic field independent regularization (MFIR) scheme: for a given unregularized quantity, the corresponding (divergent) B→0 limit is subtracted and then it is added in a regularized form. Thus, the quantities can be separated into a (finite) “B=0” part and a “magnetic” piece. Notice that, in general, the “B=0" part still depends implicitly on B (e.g. through the values of the dressed quark masses M_f); hence, it should not be confused with the value of the studied quantity at vanishing external field. To deal with the divergent “B=0” terms we use here a proper time (PT) regularization scheme. Thus, we obtainϕ_f^ reg = ϕ_f^0,reg + ϕ_f^ mag ,whereϕ_f^0, reg=-N_c M_f I_1f ,ϕ_f^ mag=-N_c M_f I_1f^ mag .The expression of I_1f obtained from the PT regularization, I_1f^ reg, is given in Eq. (<ref>) in App. <ref>, while the “magnetic” piece I_1f^ mag readsI_1f^ mag=B_f/2π^2[lnΓ(x_f)-(x_f-1/2)ln x_f+x_f-ln(2π)/2] ,where x_f=M_f^2/(2B_f).
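As an aside, the magnetic piece I_1f^ mag above is straightforward to evaluate numerically; the following short Python sketch (ours, purely illustrative; the quark mass and field values are just examples) does so for the u quark and shows that the bracket vanishes smoothly as B→0, as implied by Stirling's formula:

import numpy as np
from scipy.special import gammaln

def I1_mag(Mf, Bf):
    # I_1f^mag = B_f/(2 pi^2) [ln Gamma(x_f) - (x_f - 1/2) ln x_f
    #                          + x_f - ln(2 pi)/2],  x_f = M_f^2/(2 B_f)
    x = Mf**2 / (2.0 * Bf)
    return Bf / (2.0 * np.pi**2) * (gammaln(x) - (x - 0.5) * np.log(x)
                                    + x - 0.5 * np.log(2.0 * np.pi))

Mf = 0.35                            # illustrative dressed quark mass, GeV
for eB in (0.05, 0.5, 1.0):          # field strengths in GeV^2
    Bf_u = (2.0 / 3.0) * eB          # B_f = |Q_f B| for the u quark
    print(eB, I1_mag(Mf, Bf_u))      # -> tends to 0 as eB -> 0

The condensate contribution then follows as ϕ_f^ mag=-N_c M_f I_1f^ mag, which enters the gap equations given next.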
The corresponding gap equations, obtained from ∂ S_bos^ MF/∂σ̅_f=0, can be written asM_u= m_c-4g[(1-α) ϕ_u^ reg+α ϕ_d^ reg] ,M_d= m_c-4g[(1-α) ϕ_d^ reg+α ϕ_u^ reg] .As anticipated, for α=0 these equations get decoupled. For α=0.5 the right hand sides become identical, and thus one has in that case M_u=M_d.§ THE NEUTRAL MESON SECTOR To determine the meson masses we have to consider the terms in the bosonic action that are quadratic in the meson fluctuations. As expected from charge conservation, it is easy to see that the terms corresponding to charged and neutral mesons decouple from each other. In this section we concentrate on the neutral meson sector; the charged meson sector will be considered in Sec. <ref>. §.§ Neutral meson polarization functions For notational convenience we will denote isospin states by M=σ_0,π_0,ρ_0,a_0,σ_3,π_3,ρ_3,a_3. Here, σ_0, π_0, ρ_0 and a_0 correspond to the isoscalar states σ, η, ω and f_1, while σ_3, π_3, ρ_3 and a_3 stand for the neutral components of the isovector triplets a⃗_0, π⃗, ρ⃗ and a⃗_1, respectively. Thus, the corresponding quadratic piece of the bosonized action can be written asS_bos^ quad, neutral =- 1/2∫ d^4x d^4x'∑_M,M'δ M(x)^† G_MM'(x,x') δ M'(x') .Notice that the meson indices M,M', as well as the functions G_MM', include Lorentz indices in the case of vector mesons. This also holds for the functions δ_MM', J_MM', Σ_MM'^f, etc., introduced below. In the corresponding expressions, a contraction of Lorentz indices is understood when appropriate. In particular, the functions G_MM'(x,x') can be separated into two terms, namelyG_MM'(x,x') = 1/2g_M δ_MM' δ^(4)(x-x')- J_MM'(x,x') ,where1/g_M δ_MM' = {[1/g M=M'=σ_0,π_3 ,; 1/[g(1-2α)] M=M'=σ_3,π_0 ,; -η^μν/g_vMM'=ρ_3^μρ_3^ν, a_3^μ a_3^ν ,; -η^μν/g_0 MM'=ρ_0^μρ_0^ν ,; -η^μν/g_5 MM'=a_0^μ a_0^ν ].,and δ_MM'=0 otherwise. Here η^μν is the Minkowski metric tensor, which can be decomposed as η^μν = η_∥^μν +η_⊥^μν, with η_∥^μν = diag(1,0,0,-1), η_⊥^μν= diag(0,-1,-1,0) (see App. <ref>). In turn, the polarization functions J_MM'(x,x') can be separated into u and d quark pieces,J_MM'(x,x') = Σ_MM'^u(x,x') + ε_M ε_M' Σ_MM'^d(x,x') .Here ε_M=1 for the isoscalars M=σ_0,π_0,ρ_0,a_0, and ε_M=-1 for M=σ_3,π_3,ρ_3,a_3, while the functions Σ_MM'^f(x,x') are given byΣ_MM'^f(x,x') =-i N_c tr_D[i 𝒮_x,x'^ MF, f Γ^M'i 𝒮_x',x^ MF, f Γ^M ] ,withΓ^M = {[1M=σ_0,σ_3 ,; iγ_5M=π_0,π_3 ,;γ^μM=ρ_0,ρ_3 ,; γ^μγ_5M=a_0,a_3 ].. As stated, since we are dealing with neutral mesons, the contributions of the Schwinger phases associated with the quark propagators in Eq. (<ref>) cancel out, and the polarization functions depend only on the difference x-x', i.e., they are translationally invariant. After a Fourier transformation, the conservation of momentum implies that the polarization functions turn out to be diagonal in the momentum basis. Thus, in this basis the neutral meson contribution to the quadratic action can be written asS_bos^ quad, neutral =-1/2∑_M,M'∫d^4q/(2π)^4 δ M (-q)^† G_MM'(q) δ M'(q) .We haveG_MM'(q) = 1/2g_M δ_MM' - J_MM'(q) ,and the associated polarization functions can be written asJ_MM'(q) = Σ_MM'^u(q)+ε_Mε_M' Σ_MM'^d(q) .Here the functions Σ_MM'^f(q) readΣ_MM'^f(q) =-i N_c ∫d^4p/(2π)^4 tr_D[i S̅^f(p_∥^+,p_⊥^+) Γ^M'i S̅^f(p_∥^-,p_⊥^-) Γ^M] ,where we have defined p_a^±=p_a± q_a/2, with a=∥,⊥, and the quark propagators S̅^f(p_∥,p_⊥) in the presence of the magnetic field are those given by Eq. (<ref>). The explicit expressions of the non-vanishing functions Σ_MM'^f(q) for arbitrary four-momentum q^μ are given in App. <ref>.Since we are interested in the determination of meson masses, let us focus on the particular situation in which mesons are at rest, i.e. q^μ=(m,0,0,0), where m is the corresponding meson mass.
We denote by Ĵ_MM' the resulting polarization functions. It can be shown that some of these functions vanish, while the nonvanishing ones are in general divergent. As we have done at the MF level, we consider the magnetic field independent regularization scheme, in which we subtract the corresponding unregularized “B=0” contributions and then add them in a regularized form. Thus, for a given polarization function Ĵ_MM' we haveĴ_MM'^ reg = Ĵ_MM'^0, reg + Ĵ_MM'^ mag . The “B=0” pieces of the polarization functions are quoted in App. <ref>, considering arbitrary four-momentum q^μ. In that appendix we give the expressions for the unregularized functions Ĵ_MM'^0, unreg, and use a proper time regularization scheme to get the regularized expressions Ĵ_MM'^0, reg. The terms Ĵ_MM'^0, reg in Eq. (<ref>) are then obtained from these expressions by taking q^2=m^2. In the case of the “magnetic” contributions Ĵ_MM'^ mag, we proceed as follows: we take the full expressions for the polarization functions J_MM'(q) given in App. <ref>, and subtract the unregularized pieces Ĵ_MM'^0, unreg; next, we take q^μ=(m,0,0,0) and make use of the relations in App. <ref>, performing some integrations by parts when convenient. After a rather long calculation, it is found that Ĵ_MM'^ mag can be expressed in the form given by Eq. (<ref>), viz.Ĵ_MM'^ mag = Σ̂_MM'^u,mag+ε_Mε_M' Σ̂_MM'^d,mag ,where the functions Σ̂_MM'^f,mag are given byΣ̂_π_bπ_b'^f,mag= N_c (I_1f^ mag-m^2I_2f^ mag) , Σ̂_ρ_b^μρ_b'^ν^f,mag μν= N_c (I_4f^ mag η_⊥^μν-m^2I_5f^ mag δ_ 3^μδ_ 3^ν) , Σ̂_a_b^μ a_b'^ν^f,mag μν= -N_c [4M_f^2I_2f^ mag δ_ 0^μδ_ 0^ν-(I_4f^ mag+2M_f^2 I_7f^ mag)η_⊥^μν+(m^2I_5f^ mag-4M_f^2 I_2f^ mag)δ_ 3^μδ_ 3^ν] , Σ̂_π_bρ_b'^μ^f,mag μ= -Σ̂_ρ_b^μπ_b'^f,mag μ=-i s_fN_c I_3f^ mag δ_ 3^μ , Σ̂_π_ba_b'^μ^f,mag μ= -Σ̂_a_b^μπ_b'^f,mag μ=2i N_c m M_f I_2f^ mag δ_ 0^μ , Σ̂_a_b^μρ_b'^ν^f,mag μν=Σ̂_ρ_b^ν a_b'^μ^f,mag νμ =s_f N_c [(I_6f^ mag+M_f/m I_3f^ mag)δ_ 0^μδ_ 3^ν +(I_6f^ mag-M_f/m I_3f^ mag)δ_ 3^μδ_ 0^ν] .The expression for I_1f^ mag has been given in Eq. (<ref>), whereas the integrals I_nf^ mag for n=2,…,7 readI_2f^ mag=1/8π^2∫_0^1dv[ψ(x̅_f)+1/(2x̅_f)-lnx̅_f] ,I_3f^ mag=M_f m/8π^2∫_0^1dv 1/x̅_f ,I_4f^ mag= - I_1f^ mag-B_f/4π^2∑_s=±1∫_0^1dv (x̅_f + m^2/4B_f + sv/2)[lnx̅_f-ψ(x̅_f+(1+sv)/2)] ,I_5f^ mag=1/8π^2∫_0^1dv (1-v^2)[ψ(x̅_f)+1/(2x̅_f)-lnx̅_f] ,I_6f^ mag=m^2/32π^2∫_0^1dv (1-v^2)/x̅_f ,I_7f^ mag=1/8π^2∑_s=±1∫_0^1dv [lnx̅_f-ψ(x̅_f+(1+sv)/2)] .Here we have defined x̅_f=[M_f^2-(1-v^2)m^2/4]/(2B_f). For m<2M_f, the integrals in the above expressions are well defined, while for m≥ 2M_f (i.e., beyond the qq̅ production threshold) they are divergent. Still, in this case one can obtain finite results by performing analytic extensions <cit.>. §.§ Box structure of the neutral meson mass matrix The quadratic piece of the bosonized action in Eq. (<ref>) involves 20 meson states. However, it can be seen that some of these states do not get mixed, i.e., the 20×20 mass matrix can be separated into several blocks, or “boxes”.The vector fields ρ_0^μ and ρ_3^μ, as well as the axial vector fields a_0^μ and a_3^μ, can be written in a polarization vector basis. Since the magnetic field defines a privileged direction in space, to exploit the symmetries of the physical system it is convenient to choose one of the polarization vectors ϵ^μ in such a way that the spatial part ϵ⃗ is parallel to B⃗. A possible choice of a polarization vector set satisfying this condition is introduced in App.
<ref>: the polarization vector denoted by ϵ^μ(q⃗,2) is such that ϵ⃗ (q⃗,2) is parallel to the magnetic field, regardless of the three-momentum q⃗. Now, as explained in App. <ref>, the system has an invariance related to the reflection on the plane perpendicular to the magnetic field axis. If we associate with this transformation an operator 𝒫_3, the pseudoscalar and scalar particle states transform under 𝒫_3 by getting phases η_𝒫_3^π_b=-1 and η_𝒫_3^σ_b=1, respectively (here b=0,3). In general, the transformation of the vector and axial vector states is more complicated, depending on their polarizations. However, the choice of ϵ^μ(q⃗,2) as one of the (orthogonal) polarization vectors guarantees a well-defined behavior of vector particle states; indeed, considering the remaining polarization vectors in App. <ref>, which are denoted by ϵ^μ(q⃗,c) with c=1, 3, L, one has η_𝒫_3^ρ_b,2=η_𝒫_3^a_b,1= η_𝒫_3^a_b,3=η_𝒫_3^a_b,L=-1 and η_𝒫_3^a_b,2=η_𝒫_3^ρ_b,1= η_𝒫_3^ρ_b,3=η_𝒫_3^ρ_b,L=1. Here we have introduced the notation ρ_b,c, a_b,c, where b=0 and b=3 correspond to isoscalar and isovector states respectively, and the index c (=1,2,3,L) indicates the polarization state.To get rid of the Lorentz indices, it is convenient to deal with a mass matrix 𝐆 in which the vector and axial vector meson entries are given by the corresponding projections onto the polarization vector states. Taking into account the matrix G_MM' in Eq. (<ref>), and using the above mentioned polarization basis, we have𝐆_s_bs'_b'=G_s_bs'_b' , 𝐆_s_bv_b',c=G_s_bv_b'^μ^ μ ϵ_μ(q⃗,c) , 𝐆_v_b,c s_b'=ϵ_μ(q⃗,c)^∗ G_v_b^μ s_b'^ μ , 𝐆_v_b,cv'_b',c'=ϵ_μ(q⃗,c)^∗ G_v_b^μv'_b'^ν^ μν ϵ_ν(q⃗,c') ,where c,c'=1,2,3,L. Here s and s' stand for the scalar or pseudoscalar states π,σ, while v and v' stand for the vector or axial vector states ρ,a. Now, as shown in App. <ref>, the fact that the system is invariant under the reflection in the plane perpendicular to the magnetic field implies that particles with different parity phases η_𝒫_3^M cannot mix; therefore, the 20×20 matrix 𝐆 turns out to be separated into two 10×10 blocks. It can be written as𝐆 = 𝐆^(-)⊕ 𝐆^(+) ,where the corresponding meson subspaces are𝐆^(-), statesπ_b, ρ_b,2, a_b,1, a_b,3, a_b,L ,b=0,3 ;𝐆^(+), statesσ_b, ρ_b,1, ρ_b,3,ρ_b,L, a_b,2 ,b=0,3 . There are more symmetry properties that can still be taken into account. Notice that, according to its definition, the polarization vector ϵ^μ(q⃗,2) is invariant under rotations around the axis 3, which implies that it is an eigenvector of the operator S_3^μν=i(δ_1^μ δ_2^ν-δ_2^μ δ_1^ν) with eigenvalue s_3=0. Moreover, the whole physical system is invariant under rotations around the axis 3, and consequently the third component of the total angular momentum, J_3=(x⃗×q⃗)_3+S_3, has to be a good quantum number. Thus, if we let q⃗_⊥=0⃗, the quantum number S_3 will be a good one to characterize the meson states.Let us consider the polarization vectors defined in App. <ref>. As stated, ϵ^μ(q⃗,2) is an eigenvector of S_3^μν, while ϵ^μ(q⃗,L) is defined as a “longitudinal” vector, in the sense that its spatial part is parallel to q⃗. The remaining polarization vectors, ϵ^μ(q⃗,1) and ϵ^μ(q⃗,3), do not have in general a simple interpretation. Now, if we let q⃗_⊥=0, they reduce toϵ^μ(q⃗_∥,1)=1/√(2)(0,1,i,0) ,ϵ^μ(q⃗_∥,3)=1/√(2)(0,1,-i,0) ,where q⃗_∥=(0,0,q^3). 
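As a quick numerical cross-check, the spin projections carried by these vectors can be computed directly from the definition of S_3^μν given above; the short Python sketch below evaluates ϵ_μ^∗ S_3^μν ϵ_ν/(ϵ_α^∗ ϵ^α) for the three polarizations at q⃗_⊥=0.

```python
# Sketch: check that eps(q_par,1), eps(q_par,3) and eps(q_par,2) carry
# s_3 = +1, -1 and 0, with S_3^{mu nu} = i(d_1^mu d_2^nu - d_2^mu d_1^nu).
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric
S3 = np.zeros((4, 4), dtype=complex)            # S_3^{mu nu}
S3[1, 2], S3[2, 1] = 1j, -1j

eps = {"c=1": np.array([0, 1,  1j, 0]) / np.sqrt(2),
       "c=3": np.array([0, 1, -1j, 0]) / np.sqrt(2),
       "c=2": np.array([0, 0, 0, 1.0 + 0j])}    # eps(q_par,2) for q_perp = 0

for name, e in eps.items():
    # lower the free indices with eta before contracting with S_3^{mu nu}
    s3 = (e.conj() @ eta @ S3 @ eta @ e) / (e.conj() @ eta @ e)
    print(name, np.round(s3.real, 6))           # -> +1, -1, 0
```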
Thus, it is seen that ϵ⃗(q⃗_∥,1) and ϵ⃗(q⃗_∥,3) lie in the plane perpendicular to the magnetic field, and meson states with polarizations ϵ^μ(q⃗_∥,1) and ϵ^μ(q⃗_∥,3) are states of definite third component of the spin, with eigenvalues s_3=+1 and s_3=-1, respectively. The states with polarizations ϵ^μ(q⃗_∥,2) and ϵ^μ(q⃗_∥,L) are also eigenstates of S_3, with eigenvalue s_3=0. As stated, in this case S_3 is a good quantum number; this supports our choice of the polarization basis ρ_b,c, a_b,c for vector and axial vector states.If mesons are taken to be at rest, i.e. if we take q⃗=0, we can identify the mesons with polarizations ϵ^μ(0⃗,L) as spin zero states, and those with polarizations ϵ^μ(0⃗,2) as spin one (s_3=0) states. In this case one has simplyϵ^μ(0⃗,2)=(0,0,0,1) ,ϵ^μ(0⃗,L)=(1,0,0,0) .We notice, however, that our physical system is not fully isotropic, but only invariant under rotations around the axis 3. Thus, |S⃗|^ 2 is not a conserved quantum number, and in general the states with polarizations L and 2 will get mixed.For clarification, we find it convenient to distinguish between the polarization three-vectors ϵ⃗(0⃗,c), c=1,2,3, and the spin vectors of the S=1 vector and axial vector states. We define the spin vector as the expected value⟨S⃗⟩_c =ϵ_μ(0⃗,c)^∗(S_1^μν,S_2^μν,S_3^μν)ϵ_ν(0⃗,c)/ϵ_α(0⃗,c)^∗ϵ^α(0⃗,c) ,with S_j^μν=i ϵ_jkl δ_k^μδ_l^ν. A simple calculation leads to ⟨S⃗⟩_1=(0,0,1), ⟨S⃗⟩_3=(0,0,-1) and ⟨S⃗⟩_2=(0,0,0), showing that for the polarization vectors ϵ^μ(0⃗,1) and ϵ^μ(0⃗,3) the spin is parallel or antiparallel to the magnetic field, whereas for the polarization vector ϵ^μ(0⃗,2) the spin has no preferred direction. Notice that in Ref. <cit.> the ρ^μ states with polarizations ϵ^μ(0⃗,2) and ϵ^μ(0⃗,c), c=1,3 were denoted as “perpendicular” (ρ_⊥) and “parallel” (ρ_∥), respectively.Let us turn back to the mass matrix 𝐆. From the regularized polarization functions in Eq. (<ref>) we can obtain a regularized matrix 𝐆̂(m^2), where we have taken q^μ=(m,0,0,0). Notice that the regularization procedure does not modify our previous analysis about the symmetries of the problem. Thus, according to the above discussion, we can conclude that —for neutral mesons— each one of the 10×10 submatrices of 𝐆̂(m^2) gets further decomposed as a direct sum of a subspace of s_3=0 states (that includes vector and axial vector mesons with polarization states c=2,L), a subspace of s_3=+1 states (polarization states c=1) and a subspace of s_3=-1 states (polarization states c=3). In this way, the 20×20 matrix 𝐆̂(m^2) can be decomposed in “boxes” as𝐆̂=𝐆̂^(0,-)⊕𝐆̂^(1,-)⊕𝐆̂^(-1,-)⊕𝐆̂^(0,+)⊕𝐆̂^(1,+)⊕𝐆̂^(-1,+) ,where the superindices indicate the quantum numbers (s_3,η_𝒫_3). The meson subspaces corresponding to each box are the following:𝐆̂^(0,-) , states π_0, π_3, ρ_0,2, ρ_3,2, a_0,L, a_3,L ;𝐆̂^(1,-) , states a_0,1, a_3,1 ;𝐆̂^(-1,-) , states a_0,3, a_3,3 ;𝐆̂^(0,+) , states σ_0, σ_3, ρ_0,L, ρ_3,L, a_0,2, a_3,2 ;𝐆̂^(1,+) , states ρ_0,1, ρ_3,1 ;𝐆̂^(-1,+) , states ρ_0,3, ρ_3,3 . Finally, it can also be seen that at the considered level of perturbation theory the sigma mesons σ_b get decoupled from other states. Thus, the matrix 𝐆̂^(0,+) can still be decomposed as𝐆̂^(0,+) =𝐆̂_S^(0,+) ⊕ 𝐆̂_V^(0,+) .The submatrices in the right hand side correspond to the scalar meson subspace σ_b, with b=0,3, and the meson subspace ρ_b,L,a_b,2, with b=0,3, respectively. §.§ Neutral meson masses and wave-functionsFrom the expressions in the previous subsections one can obtain the model predictions for meson masses and wave-functions. 
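As a practical illustration, the magnetic integrals I_nf^mag entering Σ̂_MM'^f,mag are straightforward to evaluate numerically below the qq̅ threshold (m<2M_f). The sketch below computes I_2f^mag and I_3f^mag from the expressions quoted above; the input values are illustrative only, and the full mass determination of course also requires the regularized "B=0" pieces.

```python
# Sketch: numerical evaluation of I_2f^mag and I_3f^mag for a meson at rest,
# valid for m < 2 M_f, with xbar_f = [M_f^2 - (1 - v^2) m^2 / 4] / (2 B_f).
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma      # the function psi(x) above

def xbar(v, Mf, m, Bf):
    return (Mf**2 - (1.0 - v**2) * m**2 / 4.0) / (2.0 * Bf)

def I2_mag(Mf, m, Bf):
    f = lambda v: (digamma(xbar(v, Mf, m, Bf))
                   + 0.5 / xbar(v, Mf, m, Bf) - np.log(xbar(v, Mf, m, Bf)))
    return quad(f, 0.0, 1.0)[0] / (8.0 * np.pi**2)

def I3_mag(Mf, m, Bf):
    f = lambda v: 1.0 / xbar(v, Mf, m, Bf)
    return Mf * m / (8.0 * np.pi**2) * quad(f, 0.0, 1.0)[0]

# illustrative inputs (GeV units): M_f = 0.4, m = 0.14, B_f = |Q_u B| at eB = 0.5
Bf = 2.0 / 3.0 * 0.5
print(I2_mag(0.4, 0.14, Bf), I3_mag(0.4, 0.14, Bf))
```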
Let us concentrate on the lightest pseudoscalar and vector meson states, which can be identified with the physical π^0, η, ρ^0 and ω mesons. The pole masses of the neutral pion, the η, and the S_z=0 neutral ρ and ω mesons are given by the solutions ofdet 𝐆̂^(0,-) = 0 ,while the pole masses of S_z=±1 vector meson states can be obtained fromdet 𝐆̂^(±1,+) = 0 .Clearly, the symmetry under rotations around the axis 3, or z, implies that the masses of S_z=1 and S_z=-1 states will be degenerate.Once the mass eigenvalues are determined for each box, the spin-isospin composition of the physical meson states can be obtained through the corresponding eigenvectors. In the S_z = 0 sector, the physical neutral pion state π̃^0 can be written as|π̃^0⟩ = c_π_3^π̃^0 |π_3⟩+c_π_0^π̃^0 |π_0⟩+i c_ρ_3,2^π̃^0 |ρ_3,2⟩+i c_ρ_0,2^π̃^0 |ρ_0,2⟩+c_a_3,L^π̃^0 |a_3,L⟩+c_a_0,L^π̃^0 |a_0,L⟩ ,and in a similar way one can define coefficients c_M^M̃ for other physical states M̃. On the other hand, in the S_z =± 1 sector it is convenient to write isospin states in terms of the flavor basis (ρ_u,c,ρ_d,c) for c=1,3, viz.| ρ_0,c⟩ = 1/√(2)(| ρ_u,c⟩ + | ρ_d,c⟩),| ρ_3,c⟩= 1/√(2)(| ρ_u,c⟩ - | ρ_d,c⟩) .Since in this sector vector mesons do not mix with pseudoscalar or axial vector mesons, the states |ρ_f,c⟩ (f=u,d) with c=1 and c=3 turn out to be the mass eigenstates that diagonalize the matrices 𝐆̂^(1,+) and 𝐆̂^(-1,+), respectively. This can be easily understood by noticing that the external magnetic field distinguishes between quarks that carry different electric charges, and in this case this represents the only source of breakdown of the u-d flavor degeneracy.§ THE CHARGED MESON SECTOR§.§ Charged meson polarization functions We address now the analysis of the charged mesons, i.e. the states s^±=(s_1∓ is_2)/√(2) and v^±μ=(v_1^μ∓ iv_2^μ)/√(2), with s=σ,π and v=ρ,a. We concentrate on the positive charge sector, noticing that the analysis of negatively charged mesons is completely equivalent. The corresponding quadratic piece of the bosonized action can be written asS_bos^ quad,+ = -1/2∫ d^4x d^4x'∑_M,M' δ M(x)^†G_MM'(x,x') δ M'(x') ,where, for notational convenience, we simply denote the positively charged states by M,M'=σ,π,ρ^μ,a^μ (a proper contraction of Lorentz indices of vector mesons is understood). The functions G_MM'(x,x') can be separated into two terms, namelyG_MM'(x,x') = 1/2g_M δ_MM' δ^(4)(x-x')- J_MM'(x,x') ,where1/g_M δ_MM' = {[ 1/gM=M'=π ,;1/[g(1-2α)]M=M'=σ ,;-η^μν/g_v MM'=ρ^μρ^ν,a^μa^ν ].,and δ_MM'=0 otherwise. The polarization functions J_MM'(x,x') are given byJ_MM'(x,x') = -2i N_c tr_D[iS_x,x'^u Γ^M' iS_x',x^d Γ^M] ,where, as in the case of neutral mesons, one has Γ^σ=1, Γ^π=iγ^5, Γ^ρ^μ=γ^μ and Γ^a^μ=γ^μγ^5. Using Eq. (<ref>) we haveJ_MM'(x,x') =e^iΦ_e(x,x')∫d^4t/(2π)^4 e^-it(x-x')J_MM'(t) ,whereJ_MM'(t) = -2iN_c∫d^4p/(2π)^4 tr_D[iS̄^u(p_∥^+,p_⊥^+) Γ^M' iS̄^d(p_∥^-,p_⊥^-) Γ^M] .Here we have defined p_a^±=p_a± t_a/2, where a=∥,⊥. In addition, we have used Φ_e(x,x')=Φ_Q_u(x,x')+Φ_Q_d(x',x). Thus, Φ_e is the Schwinger phase associated with positively charged mesons.Contrary to the neutral meson case discussed in the previous section, here the Schwinger phases coming from quark propagators do not cancel, due to their different flavors. As a consequence, the polarization functions in Eq. (<ref>) do not become diagonal when transformed to the momentum basis. 
Instead of using the standard plane wave decomposition, to diagonalize the polarization functions it is necessary to expand the meson fields in terms of a set of functions associated with the solutions of the corresponding equations of motion in the presence of a uniform magnetic field. These functions can be specified by a set of four quantum numbers that we denote byq̅=(q^0,ℓ,χ,q^3)(see e.g. Ref. <cit.> for a detailed analysis). As in the case of a free particle, q^0 and q^3 are the eigenvalues of the components of the four-momentum operator along the time direction and the magnetic field direction, respectively. The integer ℓ is related with the so-called Landau level, while the fourth quantum number, χ, can be conveniently chosen (although this is not strictly necessary) according to the gauge in which the eigenvalue problem is analyzed <cit.>. In particular, since for the standard gauges SG, LG1 and LG2 one has unbroken continuous symmetries, in those cases it is natural to consider quantum numbers χ associated with the corresponding group generators. Usual choices areχ=n for SG ;χ=q^1 , eigenvalue of -i∂ /∂ x^1, for LG1 ;χ=q^2 , eigenvalue of -i∂ /∂ x^2, for LG2 .To sum or integrate over these quantum numbers, we introduce the shorthand notation∑∫_q̅ ≡ 1/2π∑_ℓ = ℓ_ min^∞∫dq^0 dq^3/(2π)^2×{[ 1/2π∑_n for SG ,; 1/2π∫ dq^i for LG1, LG2 , ].where ℓ_ min=0 (-1) for spin 0 (spin 1) particles.In this way, we can writeδσ(x) =∑∫_q̅ 𝔽(x,q̅) δσ(q̅) ,δπ(x) =∑∫_q̅ 𝔽(x,q̅) δπ(q̅) , δρ^μ(x) =∑∫_q̅ ℝ^μν(x,q̅) δρ_ν(q̅) ,δ a^μ(x) =∑∫_q̅ ℝ^μν(x,q̅) δ a_ν(q̅) ,where𝔽(x,q̅)= F_e(x,q̅) ,ℝ^μν(x,q̅)= ∑_λ=-1,0,1 F_e(x,q̅_λ) Υ_λ^μν ,with q̅_λ=(q^0,ℓ-sλ,χ,q^3), s=sign(B). The function F_Q(x,q̅) depends on the gauge choice; the explicit forms that correspond to the standard gauges are given in App. <ref>. Regarding the tensors Υ_λ^μν, one has various possible choices; here we takeΥ_0^μν=η_∥^μν ,Υ_±1^μν=1/2(η_⊥^μν∓ S_3^μν) . Given Eqs. (<ref>) we introduce the polarization functions in q̅-space (or Ritus space). They readJ_ss'(q̅,q̅') =∫ d^4x d^4x' 𝔽(x,q̅)^∗J_ss'(x,x') 𝔽(x',q̅') ,J_sv^μ^μ(q̅,q̅') =∫ d^4x d^4x' 𝔽(x,q̅)^∗J_sv^α^ α(x,x') ℝ_α^ μ(x',q̅') ,J_v^μ s^μ(q̅,q̅') =∫ d^4x d^4x' ℝ_α^ μ(x,q̅)^∗J_v^α s^ α(x,x') 𝔽(x',q̅') ,J_v^μ v^'ν^μν(q̅,q̅') =∫ d^4x d^4x'ℝ_α^ μ(x,q̅)^∗J_v^α v^'β^αβ(x,x') ℝ_β^ ν(x',q̅') ,where s,s' stand for the states σ or π, while v,v' stand for ρ or a. After a somewhat long calculation one can show that all these q̅-space polarization functions are diagonal, i.e., one hasJ_MM'(q̅,q̅') = δ̂_q̅q̅' J_MM'(ℓ,q_∥) ,whereδ̂_q̅q̅'=(2π)^4δ(q^0-q^' 0) δ_ℓℓ' δ_χχ' δ(q^3-q^' 3) .Here, δ_χχ' stands for δ_nn', δ(q^1-q^' 1) and δ(q^2-q^' 2) for SG, LG1 and LG2, respectively. It is important to stress that Eq. (<ref>) holds for all three gauges; moreover, the functions J_MM'(ℓ,q_∥) are independent of the gauge choice. The explicit form of these functions for the various possible MM' combinations, together with some details of the calculations, are given in App. <ref>. The quadratic piece of the bosonized action in Eq. (<ref>) can now be expressed asS_bos^ quad,+ = - 1/2∑∫_q̅∑_M,M' δ M(q̅)^† G_MM'(ℓ,q_∥) δ M^ '(q̅) ,whereG_MM'(ℓ,q_∥) = 1/2g_M δ_MM' -J_MM'(ℓ,q_∥) . As in the case of neutral mesons, to determine the charged meson masses it is convenient to write the vector and axial vector states in a polarization basis. A suitable set of polarization vectors ϵ^μ(ℓ,q^3,c), where c=1,2,3,L is the polarization index, is given in App. <ref>. 
Here, c=L corresponds to the “longitudinally polarized” charged mesons, which will be denoted by ρ_L and a_L; for these states the polarization vector ϵ^μ(ℓ,q^3,L) is defined only for ℓ≥0, and it is proportional to the four-vector Π^μ defined by Eq. (<ref>), evaluated at q^0=√(m^2+(2ℓ+1)B_e+(q^3)^2). Next, to get rid of the Lorentz indices of vector and axial vector states, we consider the mass matrix 𝐆 and the polarization functions 𝐉 obtained in the basis given by the corresponding projections onto the polarization vector states. We have𝐆_ss'(ℓ,Π^2) =1/2g_s δ_ss'-𝐉_ss'(ℓ,Π^2) , 𝐆_sv_c(ℓ,Π^2) = - 𝐉_sv_c(ℓ,Π^2) , 𝐆_v_cs(ℓ,Π^2) = - 𝐉_v_cs(ℓ,Π^2) , 𝐆_v_cv'_c'(ℓ,Π^2) = -1/2g_v ζ_c δ_v_cv'_c'- 𝐉_v_cv'_c'(ℓ,Π^2) ,where𝐉_ss'(ℓ,Π^2) =J_ss'(ℓ,q_∥) , 𝐉_sv_c(ℓ,Π^2) =J_sv^μ^ μ(ℓ,q_∥) ϵ_μ(ℓ,q^3,c) , 𝐉_v_cs(ℓ,Π^2) =ϵ_μ(ℓ,q^3,c)^∗ J_v^μ s^ μ(ℓ,q_∥) , 𝐉_v_c v'_c'(ℓ,Π^2) =ϵ_μ(ℓ,q^3,c)^∗ J_v^μ v^'ν^ μν(ℓ,q_∥) ϵ_ν(ℓ,q^3,c') .In the above equations, s and s' stand for the scalar or pseudoscalar states π,σ, while v and v' stand for the vector or axial vector states ρ,a. We use once again the definitions g_π=g, g_σ=g(1-2α), whereas ζ_c is defined as ζ_c=1 for c=L and ζ_c=-1 for c=1,2,3. Moreover, we have defined Π^2 = Π_μ^∗ Π^μ. From Eq. (<ref>), one has Π^2 = q_∥^2-(2ℓ+1)B_e.To determine the physical meson pole masses corresponding to a given Landau level ℓ, we need to evaluate the matrix elements of 𝐆(ℓ,Π^2) at Π^2=m^2. However, as in the case of the neutral meson sector, it turns out that many of the corresponding polarization functions are divergent. Once again, we consider the magnetic field independent regularization scheme, according to which we have𝐉^ reg(ℓ,Π^2) = 𝐉^0,reg(Π^2) + 𝐉^ mag(ℓ,Π^2) .To obtain the regularized “B=0” matrix 𝐉^0,reg(Π^2) we calculate the projections over polarization states as in Eqs. (<ref>), replacing the functions J_MM'(ℓ,q_∥) by their regularized expressions. The latter are obtained by taking the corresponding regularized functions J_MM'^ reg(q) in App. <ref>, and performing the replacement q^μ→Π^μ. On the other hand, to determine the “magnetic” contribution 𝐉^ mag(ℓ,Π^2) we calculate the matrix elements of 𝐉(ℓ,Π^2) according to Eqs. (<ref>) (as stated, the functions J_MM'(ℓ,q_∥) in that equation are quoted in App. <ref>), and then we subtract the corresponding unregularized expressions in the above defined B→ 0 limit. These can be obtained from the unregularized functions J_MM'^ unreg(q) in App. <ref>, following the same procedure as for the regularized ones. §.§ Box structure of the charged meson mass matrix As in the case of neutral mesons, the symmetries of the system imply that not all charged mesons states mix with each other. Firstly, it is clear that the mass matrix can be separated into two equivalent sectors of positive and negative charges. Next, restricting ourselves to positively charged mesons, it is seen that one can exploit the symmetry of the system under the reflection on the plane perpendicular to the magnetic field to classify the meson states into two groups. This is discussed in detail in App. <ref>, where the action of the operator 𝒫_3, associated to this symmetry transformation, is studied. Considering the polarization basis introduced in the previous subsection, it is found that charged meson states M transform under 𝒫_3 by getting phases η_𝒫_3^M=±1. 
In a similar way as in the case of neutral meson states, the 10×10 mass matrix 𝐆(ℓ,Π^2) can be written as a direct sum of two 5×5 submatrices,𝐆 = 𝐆^(-)⊕ 𝐆^(+) ,where the corresponding meson subspaces are𝐆^(-), statesπ,ρ_2,a_L,a_1,a_3 ; 𝐆^(+), statesσ,a_2,ρ_L,ρ_1,ρ_3 . Now, it is worth noticing that while the above discussion holds for Landau levels ℓ≥1, one should separately consider the particular cases ℓ=-1 and ℓ=0. As mentioned above, one has ℓ_ min=0 for pseudoscalar and scalar fields; moreover, as discussed in App. <ref>, for ℓ=-1 there is only one nontrivial polarization vector, ϵ_μ(-1,q^3,1). Therefore, the charged mass matrix 𝐆(-1,Π^2) is given by a direct sum of two 1×1 matrices 𝐆^(-) and 𝐆^(+) corresponding to the states a_1 and ρ_1, respectively. These do not mix with any other state. The case ℓ=0 is also a particular one, since, as stated in App. <ref>, one cannot have a vector or axial vector meson field polarized in the direction ϵ_μ(0,q^3,c) with c=3. In this way, the charged mass matrix 𝐆(0,Π^2) is given by a direct sum of two 4×4 matrices. §.§ Charged meson masses and wave-functions Taking into account the results in the previous subsections, the pole masses of charged mesons can be obtained, for each value of ℓ, by solving the equationsdet 𝐆^(±)(ℓ,m^2) = 0 .Here we are interested in the determination of the energies of the lowest lying meson states. As stated, for the Landau mode ℓ=-1 the only available states are the vector meson ρ_1 and the axial vector meson a_1, which do not mix with each other. In turn, for ℓ=0 one gets the lowest energy charged pion, which gets coupled through 𝐆^(-) to the ℓ=0 vector and axial vector mesons. In what follows we analyze these two modes in detail.As mentioned above, for ℓ=-1 the matrix 𝐆^(+) has dimension 1. Thus, according to Eqs. (<ref>) and (<ref>), the pole mass of the ρ state can be obtained from1/2g_v - 𝐉_ρ_1ρ_1^ reg(-1,m^2) = 0 ,where𝐉_ρ_1ρ_1^ reg(-1,m^2) = 𝐉_ρ_1ρ_1^0,reg(m^2) +𝐉_ρ_1ρ_1^ mag(-1,m^2) .The functions on the r.h.s. of this equation can be obtained from the definitions in Sec. <ref>; one has𝐉_ρ_1ρ_1^0,reg(m^2) = -2 b_ρρ,1^ud,reg(m^2) , 𝐉_ρ_1ρ_1^ mag(-1,m^2) = -2 [d_ρρ,2(-1,m^2-B_e)-b_ρρ,1^ud,unreg(m^2)] ,where b_ρρ,1^ud,reg and b_ρρ,1^ud,unreg are given in App. <ref>, while the expression of d_ρρ,2 can be found in App. <ref>. Once the solution m^2=m_ρ^+^2 has been determined, we can obtain the energy E_ρ^+ of the lowest charged ρ state asE_ρ^+ = √(m_ρ^+^2+(2ℓ+1)B_e+(q^3)^2)|_ℓ=-1,q^3=0 = √(m_ρ^+^2-B_e) . In the case of the lowest charged pion state (ℓ=0), we consider the 4×4 mass matrix 𝐆^(-)(0,m^2) that couples the states π, ρ_2, a_L and a_1. The pole mass can be found fromdet [ diag(1/2g , 1/2g_v , 1/2g_v ,-1/2g_v)- 𝐉^ reg(0,m^2)] = 0 ,where, according to Eq. (<ref>),𝐉^ reg(0,m^2) = 𝐉^0,reg(m^2) + 𝐉^ mag(0,m^2) .The nonvanishing matrix elements of 𝐉^0,reg(m^2) read𝐉_ππ^0,reg(m^2) = 2 b_ππ,1^ud,reg(m^2) , 𝐉_ρ_2ρ_2^0,reg(m^2) = -2 b_ρρ,1^ud,reg(m^2) , 𝐉_a_L a_L^0,reg(m^2) = 2 b_aa,2^ud,reg(m^2) , 𝐉_a_1a_1^0,reg(m^2) = -2 b_aa,1^ud,reg(m^2) , 𝐉_π a_L^0,reg(m^2) =𝐉_a_Lπ^0,reg(m^2)^∗ = 2 m b_π a,1^ud,reg(m^2) ,where the functions on the right hand sides are given in App. <ref>. The matrix elements of 𝐉^ mag(0,m^2), obtained from the general expressions quoted in App. <ref>, are given in App. <ref>. The lowest solution of Eq. (<ref>) can be identified with the charged pion pole mass squared, m_π^+^2. 
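Schematically, each pole mass follows from locating the zeros of the corresponding determinant as a function of m^2. A minimal Python sketch of this procedure is given below; the 2×2 matrix used here is a hypothetical stand-in for one of the boxes, since the actual entries are built from the regularized polarization functions quoted above.

```python
# Sketch: pole masses from det G(m^2) = 0 via a sign-change scan plus brentq.
# ASSUMPTION: `G` is a toy 2x2 box mimicking 1/(2 g_M) - J_MM'(m^2) entries;
# it is NOT the model matrix, so the roots below are purely illustrative.
import numpy as np
from scipy.optimize import brentq

def G(m2):
    return np.array([[0.06 - 0.04 * m2, 0.005 * m2],
                     [0.005 * m2,       0.09 - 0.03 * m2]])

det = lambda m2: np.linalg.det(G(m2))

grid = np.linspace(0.0, 5.0, 501)              # scan in m^2 (GeV^2)
roots = [brentq(det, a, b) for a, b in zip(grid[:-1], grid[1:])
         if det(a) * det(b) < 0]
print([round(np.sqrt(r), 4) for r in roots])   # pole masses of the coupled states
```

For the ℓ=0 charged sector the same scan would be applied to the 4×4 matrix above, assigning the lowest root to the π^+.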
The energy of the lowest charged pion then readsE_π^+ = √(m_π^+^2+(2ℓ+1)B_e+(q^3)^2)|_ℓ=0,q^3=0 = √(m_π^+^2+B_e) .In the same way, higher solutions of Eq. (<ref>) are to be identified with vector meson pole masses; a similar analysis can be done for the sector corresponding to the 4×4 matrix 𝐆^(+)(0,m^2) (which involves the σ meson). In addition, one can obtain pole masses of other higher charged meson states by considering Landau levels ℓ≥1 (as stated, the mass matrix separates in those cases into two boxes of dimension 5).Together with the determination of meson pole masses, we can also obtain the spin-isospin composition of the physical meson states as in the case of neutral mesons. For ℓ=-1 there are just two states, ρ_1 and a_1, which do not get mixed due to the above described reflection symmetry. On the other hand, for ℓ≥0, one gets in general a decomposition similar to that obtained in the case of neutral states. Thus, in the particular case of the lowest lying charged pion, the physical state π^+ can be written as a combination of ℓ = 0 states|π^+⟩ = c_π^π^+ |π⟩+i c_ρ_2^π^+ |ρ_2⟩+ c_a_1^π^+ |a_1⟩+i c_a_L^π^+ |a_L⟩ . § NUMERICAL RESULTS §.§ Model parametrization and magnetic catalysis To obtain numerical results for particle properties it is necessary to fix the model parameters. In addition to the usual requirements for the description of low energy phenomenology, we find it adequate to choose a parameter set that also takes into account LQCD results for the behavior of quark-antiquark condensates under an external magnetic field. As stated, in our framework divergent quantities are regularized using the MFIR scheme, with a proper time cutoff. Within this scenario, we take the parameter set m_c = 7.01 MeV, Λ = 842 MeV, g=5.94/Λ^2 and α=0.114. For vanishing external field, this parametrization leads to effective quark masses M_f = 400 MeV and quark-antiquark condensates ϕ_u,d^0 = (-227 MeV)^3. Moreover, it properly reproduces the empirical values of the pion mass, the eta mass and the pion decay constant in vacuum, namely m_π = 140 MeV, m_η=548 MeV and f_π=92.2 MeV, respectively. Regarding the vector couplings, we take g_v = 3.947/Λ^2, which for B=0 leads to the empirical value m_ρ=775 MeV and to a phenomenologically acceptable value of about 1020 MeV for the a_1 mass. Notice that, as usual in this type of model, the a_1 mass is found to lie above the quark-antiquark production threshold and can be determined only after some extrapolation. For the sake of simplicity, the remaining coupling constants of the vector and axial vector sector are taken to be g_0=g_5=g_v, which leads to m_ω=m_ρ and m_f_1=m_ a_1.As mentioned in the Introduction, while most NJL-like models are able to reproduce the effect of magnetic catalysis at vanishing temperature, they fail to describe the inverse magnetic catalysis effect observed in lattice QCD at finite temperature (an interesting exception is the case of models which include nonlocal interactions <cit.>). One of the simplest approaches to partially cure this behavior consists in allowing the model couplings to depend on the magnetic field, so as to incorporate the sea effect produced by the backreaction of gluons to magnetized quark loops. Thus, we consider here both the situation in which the couplings are constant and the one in which they vary with the magnetic field. For definiteness, we adopt for g(B) the form proposed in Ref. <cit.>, namelyg(B)= g ℱ(B),whereℱ(B)= κ_1 + (1-κ_1) e^-κ_2(eB)^2 ,with κ_1= 0.321 and κ_2= 1.31 GeV^-2. 
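The B-dependence of the coupling is straightforward to evaluate from the parametrization just quoted; for instance:

```python
# Sketch: the form factor F(B) of the B-dependent coupling g(B) = g * F(B).
import numpy as np

kappa1, kappa2 = 0.321, 1.31        # kappa2 in GeV^-2

def F(eB):                          # eB in GeV^2
    return kappa1 + (1.0 - kappa1) * np.exp(-kappa2 * eB**2)

for eB in (0.0, 0.5, 1.0):
    print(eB, round(F(eB), 3))      # -> 1.0, ~0.81, ~0.50
```

Thus the effective coupling is roughly halved at eB = 1 GeV^2, which drives the differences between the constant- and B-dependent-coupling curves discussed below.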
Concerning the vector couplings, given the common gluonic origin of g and g_v, we assume that they get affected in a similar way by the magnetic field; hence, we take g_v(B)=g_v ℱ(B).The effect of magnetic catalysis can be observed from Fig. <ref>, where we show the behavior of the normalized averaged light quark condensate as a function of the magnetic field, for eB up to 1 GeV^2. Following Ref. <cit.>, we use the definitionsΔΣ̅(B)= [ΔΣ_u (B)+ΔΣ_d (B)]/2 , ΔΣ_f(B) =- 2 m_c [ϕ_f(B) - ϕ_f^0]/D^4 ,where D=(135× 86)^1/2 MeV is a phenomenological normalization constant. Solid and dashed lines correspond to constant and B-dependent couplings, respectively. Although the curves do not show an accurate fit to lattice data (gray band, taken from Ref. <cit.>), it is seen that the model is able to reproduce qualitatively the effect of magnetic catalysis. We have seen that a better agreement could be achieved using a parameter set that leads to lower values of the quark masses; however, this would hinder the analysis of the rho meson mass, since the latter would then lie above the quark-antiquark production threshold even for B=0. Additionally, we have checked that the choice of a 3D-cutoff (within the MFIR scheme) leads in general to even lower values of ΔΣ̅, increasing the difference with LQCD results.§.§ Neutral mesons Let us analyze our results for the effect of the magnetic field on meson masses. We start with the neutral sector. As is well known, for vanishing external field pseudoscalar mesons mix with “longitudinal” axial vector mesons. Now, as discussed in Sec. <ref>, for nonzero B the mixing also involves neutral vector mesons with spin projection S_z=0 (corresponding to the polarization state c=2). The four lowest mass states of this sector are to be identified with the physical states π̃^0, η̃, ρ̃^ 0 and ω̃, where the particle names are chosen according to the spin-isospin composition of the states in the limit of vanishing external field, see Eq. (<ref>).The masses of these particles can be determined from Eq. (<ref>). In Fig. <ref> we show their behavior with the magnetic field, for constant and B-dependent couplings (solid and dashed lines, respectively). In the case of ρ̃^ 0 and ω̃ mesons, for B=0 one has m_ρ = m_ω = 775 MeV, close to the quark-antiquark production threshold —which arises from the lack of confinement of the model— given by 2M_d(B=0)=800 MeV. As can be seen from the figure, since m_ρ̃^0 and m_ω̃ increase with the magnetic field, they overcome the threshold (shown by the dotted line) at relatively low values of eB. Beyond this limit, although one could obtain some results through analytic continuation <cit.>, pole masses would include an unphysical absorptive part, becoming relatively less reliable. For clarity, we display in Fig. <ref> just the curves for m_ρ̃^0 and m_ω̃ that correspond to the case of a constant value of the coupling g; in the case of the B-dependent coupling g(B), the situation is entirely similar. It is also worth mentioning that the results for the η̃ and ω̃ masses should be taken only as indicative, since a more realistic calculation would require a three-flavor version of the model in which flavor-mixing effects could be fully taken into account.Regarding the neutral pion mass, in Fig. <ref> we compare our results with those obtained in previous works <cit.> and those corresponding to LQCD calculations, in which quenched Wilson fermions <cit.>, dynamical staggered quarks <cit.> and improved staggered quarks <cit.> are considered. 
Although LQCD studies do not take into account flavor mixing (they deal with individual flavor states), according to the analysis in Ref. <cit.> the lightest meson mass is expected to be approximately independent of the value of the mixing parameter α. It is also worth noticing that LQCD results have been obtained using different methods and values of the pion mass at B=0. In the figure we show the results obtained for NJL-like models in which different meson sectors have been taken into account. Left and right panels correspond to g= constant and g=g(B) [given by Eqs. (<ref>-<ref>)], respectively. If one considers just the pseudoscalar sector (red dotted lines), when g is kept constant the behavior of m_π̃^0 with the magnetic field is found to be nonmonotonic, deviating just slightly from its value at B=0. In contrast, as seen from the right panel of Fig. <ref>, if one lets g depend on the magnetic field the mass shows a monotonic decrease, reaching a reduction of about 30% at eB=1 GeV^2. This suppression is shown to be in good agreement with LQCD results. When the mixing with the vector sector is considered, the results for both constant and B-dependent couplings (red dash-dotted lines in left and right panels) are similar to each other and monotonically decreasing, lying, however, quite below LQCD predictions. Finally, if the mixing with axial vector mesons is also included (solid lines) we obtain, for both constant and B-dependent couplings, a monotonic decrease which is in good qualitative agreement with LQCD calculations for the studied range of eB. One may infer that the incorporation of axial vector mesons, being the chiral partners of vector mesons, leads to cancellations that help to alleviate the magnitude of the neutral pion mass suppression. Their inclusion into the full picture leads to relatively more robust results, in good agreement with LQCD calculations, and is in fact one of the main takeaways of this work.Let us discuss the composition of the π̃^0 state. The values of the coefficients associated with the spin-isospin decomposition given in Eq. (<ref>) are quoted in the upper part of Table <ref> for eB=0, 0.5 GeV^2 and 1 GeV^2. Those associated with the spin-flavor decomposition, defined in the same way as in Eq. (<ref>), are given in the lower part of the table. We quote the values corresponding to the model in which the coupling constants do not depend on the magnetic field; the results are qualitatively similar for the case of B-dependent couplings. One finds that while the mass eigenvalues do not depend on whether B is positive or negative, the corresponding eigenvectors do; the relative signs in Table <ref> correspond to the choice B>0. We consider first the results for vanishing magnetic field. It is seen that, due to the well-known π-a_1 mixing, the π̃^0 state has already some axial vector component. We also note that even though α is relatively small (in our parametrization we have taken α=0.114, to be compared with its maximum possible value 1/2), the effect of flavor mixing is already very strong; the spin-isospin composition is clearly dominated by the π_3 component, which is given by an antisymmetric equal-weight combination of u and d quark flavors. This can be understood by noticing that, as soon as α is different from zero, the U(1)_A symmetry gets broken. The state π_3 is then the only one that remains a pseudo-Goldstone boson, which forces the lowest-mass state π̃^0 to be dominated by the π_3 component. 
In the presence of the magnetic field, the mixing is expected to be modified, since the external field distinguishes between flavor components π_u and π_d instead of isospin states. From the upper part of Table <ref> it is seen that, even for the relatively small value of α considered here, the mass state π̃^0 is dominated by the π_3 component (|c^π̃^0_π_3|^2 ≳ 0.97) for the full range of values of eB up to 1 GeV^2. This means that the dominance of the flavor composition over the isospin composition will occur only for extremely large values of eB. In any case, from the values in Table <ref> one can still observe some effect of the magnetic field on the composition of the π̃^0 state: when eB increases, it is found that there is a slight decrease of the π_3 component in favor of the others. In addition, a larger weight is gained by the u-flavor components, as one can see by looking at the entries corresponding to the spin-flavor states (lower part of Table <ref>): one has |c^π̃^0_π_u|^2+|c^π̃^0_ρ_u,2|^2+ |c^π̃^0_a_u,L|^2 = 0.50 (0.64) for eB=0 (1.0) GeV^2. This can be understood by noticing that the magnetic field is known to reduce the mass of the lowest neutral meson state <cit.>; for large eB one expects the lowest mass state (π̃^0) to have a larger component of the quark flavor that couples more strongly to the magnetic field (i.e., the u quark). Concerning the vector meson components of the π̃^0 state, it is seen that they are completely negligible at low values of eB, reaching a contribution similar to the one of the axial vector meson (≃ 0.5%) at eB= 1 GeV^2.In addition, as discussed in Sec. <ref>, the neutral sector includes states with spin projections S_z = ± 1, i.e., spin parallel to the direction of the magnetic field. We consider here the effect of the magnetic field on vector meson states, whose masses can be obtained from the submatrices 𝐆̂^(± 1,+) in Eq. (<ref>). Since in this sector vector meson and axial vector meson states do not mix, the analysis is entirely equivalent to the one carried out in Ref. <cit.>, where the axial vector sector was not taken into account. As stated in Sec. <ref>, it is easy to see that the mass matrices involving the states ρ_0,c and ρ_3,c, with c=1,3, are diagonalized by rotating from the isospin basis to a flavor basis (ρ_u,c,ρ_d,c) given by Eq. (<ref>); moreover, the masses of these mesons turn out to be equal for the polarization states c=1 (S_z = +1) and c=3 (S_z = -1).The numerical results for ρ_u and ρ_d meson masses as functions of the magnetic field are shown in Fig. <ref>. It is seen that both masses increase with B, the enhancement being larger in the case of the ρ_u mass; this can be understood from the larger (absolute) value of the u-quark charge, which measures the coupling with the magnetic field. The results are similar for the case of constant and B-dependent couplings, corresponding to solid and dashed lines in the figure, respectively. The dotted lines indicate the mass thresholds for qq̅ pair production, given by m_ρ_f^( th) = M_f+√(M_f^2+2B_f). As discussed in Ref. <cit.>, this threshold corresponds to a situation in which the spins of both the quark and antiquark components of the ρ_f meson are aligned (or anti-aligned) with the magnetic field; thus, one of the fermions lies in its lowest Landau level, while the other one lies in its first excited Landau level. In comparison with the S_z=0 threshold 2M_d, for S_z=± 1 the threshold m_ρ_f^( th) grows faster with B. 
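This faster growth can be made explicit with a short numerical comparison; for simplicity the sketch below assumes B-independent constituent masses M_u=M_d=400 MeV, which is of course a simplification with respect to the actual solutions of the gap equations.

```python
# Sketch: S_z = +/-1 thresholds m_th = M_f + sqrt(M_f^2 + 2 B_f) vs. 2 M_f,
# ASSUMING a constant M_u = M_d = 0.4 GeV (illustrative simplification only).
import numpy as np

M = 0.4                                      # GeV
for eB in (0.2, 0.5, 1.0):                   # GeV^2
    Bu, Bd = 2.0 / 3.0 * eB, 1.0 / 3.0 * eB  # B_f = |Q_f B|
    th_u = M + np.sqrt(M**2 + 2.0 * Bu)      # rho_u channel
    th_d = M + np.sqrt(M**2 + 2.0 * Bd)      # rho_d channel
    print(eB, round(th_u, 3), round(th_d, 3), "vs S_z=0 threshold", 2 * M)
```

The output illustrates how the S_z=±1 thresholds move up quickly with eB, while for fixed masses the S_z=0 threshold stays put.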
For a constant coupling g, this allows the values of m_ρ_u and m_ρ_d to remain below the threshold for the studied range of magnetic fields. On the other hand, in the case of a B-dependent coupling g(B) the ρ_u meson is found to become unstable for eB somewhat larger than 0.6 GeV^2. Our results for the ρ_u mass are found to be in agreement, within errors, with values obtained through LQCD calculations, also shown in Fig. <ref> <cit.>. §.§ Charged mesonsAs discussed in Sec. <ref>, to study the lowest lying charged meson states in the presence of the magnetic field one has to consider the Landau modes ℓ=-1 and ℓ = 0. For ℓ = -1, the lowest mass state is the one that we have denoted as ρ_1, which does not get mixed with any other state. The corresponding pole mass m_ρ^+ can be obtained from Eq. (<ref>), while the lowest energy for this state is given by E_ρ^+ = √(m_ρ^+^2-B_e), see Eq. (<ref>).In Fig. <ref> we show our numerical results for E_ρ^+ as a function of eB, normalized by the value of the ρ mass at B=0. Black solid and dashed lines correspond to the cases of constant and B-dependent couplings, respectively, where g(B) is given by Eqs. (<ref>-<ref>). It can be seen that for g= constant the results differ considerably from those obtained in a similar model <cit.> which instead does not take into account the presence of axial vector mesons (red dotted line in the figure). On the contrary, for g=g(B) (red dash-dotted line) they remain basically unchanged. In fact, here the differences between models that include or not axial vector mesons do not arise from direct mixing effects (the ρ_1 state does not mix with axial vectors) but from the fact that axial vector states mix with pions already for B=0; this leads to some change in the model parameters so as to get consistency with the phenomenological inputs. In any case, it is found that —as in the case of neutral mesons— the results from the full model (black solid and dashed lines) appear to be rather robust: they show a similar behavior either for constant or B-dependent couplings, and this behavior is shown to be in good agreement with LQCD calculations <cit.>, also shown in the figure. Notice that our results, as those from LQCD, are not consistent with ρ^+ condensation for the considered range of values of eB. The curve corresponding to the lowest energy state of a pointlike ρ^+ meson as a function of eB is shown for comparison.It is worth mentioning that our results are qualitatively different from those obtained in other works in the framework of two-flavor NJL-like models <cit.>, which do find ρ^+ meson condensation for eB ∼ 0.2 to 0.6 GeV^2. As discussed in Refs. <cit.>, in those works Schwinger phases are neglected and it is assumed that charged π and ρ mesons lie in zero three-momentum states (i.e., meson wavefunctions are approximated by plane waves). Here we use, instead, an expansion of meson fields in terms of the solutions of the corresponding equations of motion for nonzero B, taking properly into account the presence of Schwinger phases in quark propagators. In fact, as shown in Ref. <cit.>, the plane wave approximation may have a dramatic impact on these numerical results, implying a substantial change in the behavior of the ρ^+ mass for the ℓ = -1 Landau mode.In the case of the mode ℓ=0, as discussed in Sec. <ref>, the lowest mass state π^+ is given in general by a mixing between the states that we have denoted as π, ρ_2, a_L and a_1. The corresponding pole mass m_π^+ can be obtained from Eq. 
(<ref>), while the lowest energy for this state is given by E_π^+ = √(m_π^+^2+B_e), see Eq. (<ref>). Our numerical results are presented in Fig. <ref>, where, for the sake of comparison with LQCD values, we plot the values of the difference E_π^+(B)^2-E_π^+(0)^2 as a function of eB. Once again, black solid and dashed lines correspond to the cases of constant and B-dependent couplings, respectively. We also include for comparison the results obtained from similar NJL-like models that just include the pseudoscalar meson sector (red dotted line), or just include the mixing between the pseudoscalar and vector meson sectors (red dash-dotted line), neglecting the effect of the presence of axial vector mesons. It can be seen that the inclusion of the axial vector meson sector leads to an improvement of the agreement with LQCD data quoted in Refs. <cit.>, also shown in the figure.It is interesting to point out that, for large external magnetic fields, the values from LQCD shown in Fig. <ref> lie well below the curve that corresponds to a pointlike pion. From Eq. (<ref>), it is easy to see that to reproduce these results one should get a negative value of the pole mass squared, m_π^+^2<0. In fact, this is what we obtain from our NJL-like model if we assume that the coupling constants do not depend on B (solid line in the figure). The appearance of an imaginary pole mass does not signal the existence of a meson condensation, since meson energies are still positive quantities; indeed, the presence of the magnetic field generates a zero-point motion in the plane perpendicular to B⃗ that induces an “effective magnetic mass” √(m_π^+^2+B_e). Notice that in this case some analytical expressions have to be revised. The corresponding changes, basically related with the normalization of polarization vectors, are indicated in App. <ref>. In contrast, for B-dependent couplings one does not observe a large variation of the π^+ pole mass for the studied range of eB; the energy is essentially dominated by the magnetic field. Thus, the curve shown in Fig. <ref> (black dashed line) turns out to be approximately coincident with the one corresponding to a pointlike charged pion.We remark that our numerical results indicate a monotonic enhancement of the charged pion energy with the magnetic field, in contrast with the nonmonotonic behavior found in some recent LQCD simulations (green circles in the figure) <cit.>. It would be interesting to get more insight on this open issue from other effective models and further LQCD calculations.To conclude this section, let us discuss the state composition of the charged pion mass state. In Table <ref> we quote our results for the coefficients of the linear combination in Eq. (<ref>) for some values of eB, considering both the cases g= constant and g=g(B) (upper and lower parts of the table, respectively). We also include the values of the normalized squared π^+ pole masses. For B=0, as is well known, in this type of model the pion mass eigenstate is obtained from a mixing between the pseudoscalar state π and the longitudinal part of the axial-vector state (a_L, in our notation). Then, for nonzero B, the mixing between the states ρ_2 and a_1 is also turned on. As stated, for g= constant the value of m_π^+^2 becomes negative if the magnetic field is increased; this occurs at eB≃ 0.5 GeV^2. As shown in the upper part of Table <ref>, when approaching this point the mass eigenstate turns out to be strongly dominated by the axial vector states a_1 and a_L, which have similar weights. 
For larger values of eB the absolute value of m_π^+^2 gets increased, and once again the π^+ state becomes dominated by the pseudoscalar π contribution. Notice, however, that for eB=1 GeV^2 the contributions of other states are nonnegligible; moreover, it is seen that the coefficients c^π^+_a_1 and c^π^+_a_L become imaginary. On the contrary, as shown in the lower part of the table, for g=g(B) these effects are not observed in the studied range of values of the external field. As mentioned above, in this case the π^+ pole mass does not show qualitative changes with eB; the main effect of the magnetic field is the enhancement of axial vector components, each of them reaching about 1/4 of the state composition at eB=1 GeV^2, while the remaining 1/2 fraction is almost saturated by the π component. As stated, recent LQCD data support a negative value of m_π^+^2 for large magnetic fields. It would also be interesting to get information from lattice calculations on the state composition, in particular, in the region eB∼ 1 GeV^2. § SUMMARY & CONCLUSIONSIn this work we have studied the mass spectrum of light pseudoscalar and vector mesons in the presence of an external uniform and static magnetic field B⃗, introducing the effects of the mixing with the axial vector meson sector. The study has been performed in the framework of a two-flavor NJL-like model that includes isoscalar and isovector couplings in the scalar-pseudoscalar and vector-axial vector sector, as well as a flavor mixing term in the scalar-pseudoscalar sector. For simplicity, the coupling constants of the vector and axial vector sector have been taken to be equal. The ultraviolet divergences associated with the nonrenormalizability of the model have been regularized using the magnetic field independent regularization method, which has been shown to be free from unphysical oscillations and to reduce the dependence of the results on the model parameters <cit.>. Additionally, we have explored the possibility of using magnetic field dependent coupling constants g(B) to account for the effect of the magnetic field on sea quarks.As is well known, for vanishing external field pseudoscalar mesons mix with “longitudinal” axial vector mesons. Now, the presence of an external uniform magnetic field breaks isospin (due to the different quark electric charges) and full rotational symmetry, allowing for a more complex meson mixing pattern than in vacuum. The mixing structure is constrained by the remaining unbroken symmetries, in such a way that the mass matrices —written in a basis of polarization states— can be separated into several “boxes”.In the case of neutral mesons, Schwinger phases cancel and the polarization functions become diagonal in the usual momentum basis. Since mesons can be taken at rest, rotational invariance around B̂ implies that S_z (the spin in the field direction) is a good quantum number to characterize these states. The aforementioned symmetries restrict the allowed mixing in the original 20 × 20 mass matrix, which can be decomposed as a direct sum of subspaces of states with s_z=-1, 0, and 1. For s_z=± 1 (spin parallel to B⃗), it is seen that vector mesons do not mix with other sectors, and the mass eigenstates are those of the flavor basis (ρ_u,ρ_d). We have shown that the corresponding masses increase with B in qualitative agreement with LQCD, within uncertainties. For s_z=0 (spin perpendicular to B⃗), scalar mesons turn out to get decoupled from other states and have therefore been disregarded in our analysis. 
Meanwhile, pseudoscalar mesons mix with vector and axial vector mesons whose polarization states are parallel to B⃗. The four lowest mass states of this sector are to be identified with the physical states π̃^0, η̃, ρ̃^ 0 and ω̃. Regarding m_ρ̃^0 and m_ω̃, we have found that they get increased with the magnetic field, in such a way that they overcome a qq̅ decay threshold —which arises from the lack of confinement of the model— at relatively low values of eB. Concerning m_η̃, a slight decrease with B is observed.The impact of the inclusion of the axial vector meson sector on the mass of the lightest state π̃^0, identified with the neutral pion, is actually one of the main focuses of our work. We have found that when axial vector mesons are taken into account, m_π̃^0 displays a monotonically decreasing behavior with B in the studied range eB<1 GeV^2, which is in good qualitative agreement with LQCD calculations for both g= constant and g=g(B). Thus, our current results represent an improvement over previous analyses that take into account just the mixing with the vector meson sector, or no mixing at all. When no mixing is considered, the behavior of m_π̃^0 with B is nonmonotonic when g is kept constant, deviating just slightly from its value at B=0. Only when g is allowed to depend on the magnetic field does one obtain a decreasing behavior that resembles LQCD results. Even though the inclusion of the vector sector leads to a reduction in m_π̃^0 together with a consistent decreasing trend, the values lie quite below LQCD predictions, for both g and g(B). We therefore conclude that the inclusion of axial mesons is important, since it leads to more robust results for the neutral pion mass, even independently of the assumption of a magnetic field dependent coupling constant. Regarding the composition of the π̃^0 state, we have found that it is largely dominated by the isovector component π_3 (|c^π̃^0_π_3|^2 ≳ 0.97) for the studied range of values of eB. In terms of flavor composition, a larger weight is gained by u-flavor components for large values of B, which can be understood from the fact that the u quark couples more strongly to the magnetic field.In the case of charged mesons, the corresponding polarization functions are diagonalized by expanding the meson fields in appropriate Ritus-like bases, so as to account for the effect of nonvanishing Schwinger phases. Once again, the symmetries of the system constrain the allowed mixing matrices, which also depend on the value of the meson Landau level ℓ. For ℓ=-1 one has only one vector and one axial vector polarization state. Moreover, they do not mix with any other particle state. Thus, for ℓ=-1 the effect of the inclusion of axial vector mesons on the ρ^+ mass comes solely from the model parametrization, which is affected by the presence of π-a_1 mixing at B=0. Our results show that when the axial vector sector is included, the energy E_ρ^+=√(m_ρ^+^2-B_e) of this state undergoes a considerable reduction, leading to a decreasing behavior which is in qualitative agreement with LQCD predictions, independently of the assumption of a B-dependent coupling constant. 
However —in accordance with LQCD calculations and with our previous results within NJL-like models that do not include axial vectors <cit.>— we find that E_ρ^+ does not vanish for any considered value of the magnetic field, a fact that can be relevant in connection with the occurrence of ρ^+ meson condensation for strong magnetic fields.For ℓ=0 only three polarization vectors are linearly independent, and the pion mixing subspace involves the π^+ together with the ρ^+ and a_1^+ states of definite polarizations. The lowest mass state in this sector can be identified with the π^+, whose lowest energy is given by E_π^+=√(m_π^+^2+B_e). Our results show that, even though vector mixing already induces a softening in the enhancement of the pion energy with B, the inclusion of the axial vector meson sector reinforces this softening, leading to an improved agreement with LQCD predictions. Remarkably, for a constant coupling g and magnetic fields stronger than eB=0.4 GeV^2, we obtain values of the pion energy which lie well below the ones corresponding to a pointlike pion, in concordance with LQCD results in Refs. <cit.>. On the other hand, in the case of a B-dependent coupling we find that the pole mass becomes approximately constant; as a result, the energy is basically coincident with the one corresponding to a pointlike charged pion. As for the π^+ state composition, we have seen that in general the magnetic field induces a mixing between all states by increasing the contribution from vector and axial vector components.In view of the above results, one can conclude that the inclusion of axial vector mesons leads to more robust results and improves the agreement between NJL-like models and LQCD calculations. Still, issues concerning meson masses and mass eigenstate compositions at large magnetic fields remain open, and further results from LQCD and effective models of strong interactions would be welcome. NNS would like to thank the Department of Theoretical Physics of the University of Valencia, where part of this work has been carried out, for their hospitality within the Visiting Professor program of the University of Valencia. This work has been partially funded by CONICET (Argentina) under Grant No. PIP 2022-2024 GI-11220210100150CO, by ANPCyT (Argentina) under Grant No. PICT20-01847, by the National University of La Plata (Argentina), Project No. X824, by Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (Spain), and European Regional Development Fund Grant No. PID2019-105439GB-C21, by EU Horizon 2020 Grant No. 824093 (STRONG-2020), and by Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana, GVA PROMETEO/2021/083.§ APPENDICES§ CONVENTIONS AND NOTATIONThroughout this section we use the Minkowski metric η^μν=(1,-1,-1,-1), while for a space-time coordinate four-vector x^μ we adopt the notation x^μ=(t,x⃗), with x⃗=(x^1,x^2,x^3).We study interactions between charged particles and an external electromagnetic field A^μ(x). The electromagnetic field strength F^μν and its dual F̃^μν are given byF^μν=∂^μ𝒜^ν-∂^ν𝒜^μ , F̃^μν=1/2 ϵ^μναβF_αβ ,where the convention ϵ^0123=+1 is used. We consider in particular the situation in which one has a static and uniform magnetic field B⃗; without losing generality, we choose the axis 3 to be parallel to B⃗, i.e., we take B⃗ = (0,0,B) (note that B can be either positive or negative). 
Moreover, definingF̂^μν=1/B F^μν ,F̂̃̂^μν=1/B F̃^μν ,for i,j=1,2,3 one hasF̂^0ν=0 , F̂^ij=-ϵ_ij3 ,F̂̃̂^ij=0 , F̂̃̂^0k=-1/2 ϵ^0kijϵ_ij3 ,i.e., the relevant components of the tensors are F̂^12=-F̂^21=-1, F̂̃̂^03=-F̂̃̂^30=-1.Since isotropy is broken by the particular direction of the external field B⃗, it is convenient to separate the metric tensor into “parallel” and “perpendicular” pieces,η_∥^μν=(1,0,0,-1) ,η_⊥^μν=(0,-1,-1,0) .In addition, given a four-vector v^μ, it is useful to define “parallel” and “perpendicular” vectorsv_∥^μ = (v^0,0,0,v^3) , v_⊥^μ = (0,v^1,v^2,0) . § NEUTRAL MESON POLARIZATION FUNCTIONSAccording to Eq. (<ref>), the polarization functions for neutral mesons can be written as a sum of flavor-dependent functions Σ_MM'^f(q). The latter, in turn, can be written in terms of a set of Lorentz covariant tensors asΣ_MM'^f(q)=∑_i=1,n_mm'c_mm',i^f(q_⊥^2,q_∥^2)𝕆_MM'^(i)(q) .Here, M=π_b,ρ_b^μ,a_b^μ correspond to m=π,ρ,a, and the same is understood for M' and m'. The coefficients c_mm',i^f are scalar functions, while the tensors 𝕆_MM'^(i) carry the corresponding Lorentz structures. Notice that the number of terms in the sum, n_mm', depends on the combination mm' considered. The scalar coefficients can be expressed asc_mm',i^f(q_⊥^2,q_∥^2)=N_c/8π^2∫_0^∞dz ∫_-1^1dξe^-z ϕ_0^f(q_⊥,q_∥,ξ,z) γ_mm',i^f(q_⊥^2,q_∥^2,ξ,z) ,whereϕ_0^f(q_⊥,q_∥,ξ,z)=M_f^2-1-ξ^2/4 q_∥^2 +cosh(z B_f)- cosh(ξ z B_f)/2 z B_f sinh(z B_f) q⃗_⊥^ 2 ,with B_f = |BQ_f|. In the following we list the sums associated with each polarization function, together with the explicit expressions of the functions γ_mm',i^f(q_⊥^2,q_∥^2,ξ,z) corresponding to the coefficients c_mm',i^f(q_⊥^2,q_∥^2). For brevity, the arguments of c_mm',i^f and γ_mm',i^f are not explicitly written.The ππ polarization function is a scalar, therefore there is only one coefficient c_ππ,1^f, and 𝕆_π_b π_b'^(1)(q)=1. One hasΣ_π_bπ_b'^f(q) =c_ππ,1^f ,while the associated function γ_ππ,1^f is given byγ_ππ,1^f=(M_f^2+1/z+1-ξ^2/4 q_∥^2)B_f/tanh(z B_f)+ B_f^2/sinh^2(z B_f)[1-cosh(z B_f)-cosh(ξ z B_f)/2B_f sinh(z B_f) q⃗_⊥^ 2] .Analogously, for the σσ polarization function we haveΣ_σ_bσ_b'^f(q)=c_σσ,1^f ,whileγ_σσ,1^f=(-M_f^2+1/z+1-ξ^2/4 q_∥^2) B_f/tanh(z B_f)+ B_f^2/sinh^2(z B_f)[1-cosh(z B_f)-cosh(ξ z B_f)/2B_f sinh(z B_f) q⃗_⊥^ 2] . For the ρρ polarization the sum in Eq. (<ref>) includes five terms. We findΣ_ρ_b^μρ_b'^ν^f μν(q)= c_ρρ,1^f η_∥^μν+c_ρρ,2^f η_⊥^μν+ c_ρρ,3^f q_∥^μ q_∥^ν+ c_ρρ,4^f q_⊥^μ q_⊥^ν+ c_ρρ,5^f (q_⊥^μ q_∥^ν+q_∥^μ q_⊥^ν) ,while the functions γ_ρρ,i^f readγ_ρρ,1^f= -(M_f^2+1-ξ^2/4 q_∥^2)B_f/tanh(z B_f)-B_f^2/sinh^2(z B_f)+B_f[cosh(z B_f)-cosh(ξ z B_f)]/2 sinh^3(z B_f) q⃗_⊥^ 2 , γ_ρρ,2^f= -(M_f^2+1/z+1-ξ^2/4 q_∥^2)B_fcosh(ξ z B_f)/sinh(z B_f)+B_f[cosh(z B_f)-cosh(ξ z B_f)]/2 sinh^3(z B_f) q⃗_⊥^ 2 , γ_ρρ,3^f= (1-ξ^2) B_f/2 tanh(z B_f) , γ_ρρ,4^f= B_f [cosh(z B_f)-cosh(ξ z B_f)]/sinh^3(z B_f) , γ_ρρ,5^f= B_f [cosh(ξ z B_f)-ξ z B_f sinh(ξ z B_f)]/2 sinh(z B_f) . For the aa polarization function we getΣ_a_b^μa_b'^ν^f μν(q) = c_aa,1^f η_∥^μν+c_aa,2^f η_⊥^μν+ c_aa,3^f q_∥^μ q_∥^ν+c_aa,4^f q_⊥^μ q_⊥^ν+ c_aa,5^f (q_⊥^μ q_∥^ν+q_∥^μ q_⊥^ν) ,while the functions γ_aa,i^f are given byγ_aa,1^f= -(- M_f^2+1-ξ^2/4 q_∥^2)B_f/tanh(z B_f)-B_f^2/sinh^2(z B_f)+B_f[cosh(z B_f)-cosh(ξ z B_f)]/2 sinh^3(z B_f) q⃗_⊥^ 2 , γ_aa,2^f= -(- M_f^2+1/z+1-ξ^2/4 q_∥^2)B_fcosh(ξ z B_f)/sinh(z B_f)+B_f[cosh(z B_f)-cosh(ξ z B_f)]/2 sinh^3(z B_f) q⃗_⊥^ 2 , γ_aa,i^f=γ_ρρ,i^f for i=3,4,5 . 
For the πρ and ρπ polarization functions we getΣ_π_bρ_b'^μ^f μ(q)= Σ_ρ_b^μπ_b'^f μ(q)^ ∗ = c_πρ,1^f F̂̃̂^μα q_∥αandγ_πρ,1^f =-is_f B_fM_f ,with s_f =sign(B Q_f).For the π a and aπ polarization functions we getΣ_π_ba_b'^μ^f μ(q)= Σ_a_b^μπ_b'^f μ(q)^ ∗ = c_π a,1^fq_∥^μ+c_π a,2^fq_⊥^μ ,andγ_π a,1^f= -iM_f B_f/tanh(z B_f) , γ_π a,2^f= -iM_f B_f cosh(ξ z B_f)/sinh(z B_f) . For the σρ and ρσ polarization functions we getΣ_σ_bρ_b'^μ^f μ(q)= Σ_ρ_b^μσ_b'^f μ(q)^ ∗ = c_σρ,1^f F̂^μα q_⊥α ,andγ_σρ,1^f = is_f B_fM_f cosh(z B_f) cosh(v z B_f)-1/sinh^2(z B_f) . Finally, for the aρ and ρ a polarization functions we haveΣ_ρ_b^μ a_b'^ν^f μν(q) =Σ_a_b^νρ_b'^μ^f νμ(q) = c_aρ,1^f(F̂̃̂^μα q_α∥ q_∥^ν - q_∥^μ q_α∥ F̂̃̂^αν)+c_aρ,2^f(F̂̃̂^μα q_α∥ q_⊥^ν-q_⊥^μ q_α∥ F̂̃̂^αν)+c_aρ,3^f F̂̃̂^μν ,andγ_aρ,1^f=s_f/4 B_f (1-ξ^2) , γ_aρ,2^f= - s_f/2 B_f[ξ sinh(ξ z B_f)/sinh(z B_f)+ 1-cosh(z B_f)cosh(ξ z B_f)/sinh^2(z B_f)] , γ_aρ,3^f= - s_f B_fM_f^2 . § THE “B=0” POLARIZATION FUNCTIONSTo perform the MFIR we need to obtain the meson “B=0” polarization functions J_MM'^0(q) in both their unregularized (unreg) and regularized (reg) forms. As stated in Sec. <ref>, although these polarization functions are calculated from the propagators in the B→ 0 limit, they still depend implicitly on B through the values of the magnetized dressed quark masses M_f. Hence, they should not be confused with the polarization functions that one would obtain in the case of vanishing external field. Moreover, they will be in general different for neutral and charged mesons. In the case of neutral mesons (i.e., M,M'=σ_b,π_b,ρ_b^μ,a_b^μ, with b=0,3) one can writeJ_MM'^0,λ(q)= F_MM'^uu,λ(q) + ε_M ε_M' F_MM'^dd,λ(q) ,where λ stands for “reg” or “unreg”, and ε_M is equal to either 1 or -1 [see text below Eq. (<ref>)]. On the other hand, for charged mesons (M,M'=σ,π,ρ^μ,a^μ) one hasJ_MM'^0,λ(q) = 2 F_MM'^ud,λ(q) .The functions F_MM'^ff',λ(q) can be written in terms of scalar functions b_mm',i^ff',λ(q^2), with m,m'=π,ρ,a, as follows:F_ππ^ff',λ(q) = b_ππ,1^ff',λ(q^2) ,F_ρ^μρ^ν^ff',λ μν(q) = b_ρρ,1^ff',λ(q^2) (η^μν-q^μq^ν/q^2)+b_ρρ,2^ff',λ(q^2) q^μq^ν/q^2 ,F_a^μa^ν^ff',λ μν(q) = b_aa,1^ff',λ(q^2) (η^μν-q^μq^ν/q^2)+b_aa,2^ff',λ(q^2) q^μq^ν/q^2 ,F_π a^μ^ff',λ μ(q) = F_a^μπ^ff',λ μ(q)^ ∗= b_π a,1^ff',λ(q^2) q^μ .For the unregularized functions we findb_mm',i^ff',unreg(q^2)= N_c/8π^2∫_-1^1dv∫_0^∞dz/z e^-z ϕ^ff'(v,q^2) ω_mm',i^ff'(q^2,v,z) ,whereϕ^ff'(v,q^2)=1/2 (M_f^2+M_f'^2) - v/2 (M_f^2-M_f'^2) - (1-v^2)/4 q^2andω_ππ,1^ff'= M_fM_f'+ 2/z+1-v^2/4 q^2 ,ω_ρρ,1^ff'= - M_fM_f'- 1/z-1-v^2/4 q^2 ,ω_ρρ,2^ff'= - M_fM_f'- 1/z+1-v^2/4 q^2 , ω_aa,1^ff'= M_fM_f'- 1/z-1-v^2/4 q^2 ,ω_aa,2^ff'= M_fM_f'- 1/z+1-v^2/4 q^2 , ω_π a,1^ff'= - i/2[(1-v)M_f+(1+v)M_f'] . To express the regularized functions b_mm',i^ff',reg(q^2) it is convenient to introduce the ultraviolet divergent integralsI_1f= 4i∫d^4p/(2π)^4 1/p^2-M_f^2 + iϵ ,I_2ff'(q^2) = 2i∫d^4p/(2π)^4 1/(p_+^2-M_f^2+iϵ)(p_-^2-M_f'^2+iϵ) ,where p_±=p± q/2. Now we can consider some regularization scheme to obtain regularized integrals I_1f^ reg and I_2ff'^ reg(q^2). 
Using the definitionsM̅=M_f+M_f'/2 ,Δ=M_f-M_f' ,and introducing the shorthand notationI̅_1=I_1f^ reg+I_1f'^ reg/2 , I_2=I_2ff'^ reg(q^2) ,I_2^0=I_2ff'^ reg(0) ,I_2^'0=dI_2ff'^ reg(q^2)/dq^2|_q^2=0 ,we obtainb_ππ,1^ff',reg= N_c[I̅_1-(q^2-Δ^2)I_2] ,[2mm]b_ρρ,1^ff',reg=N_c/3[(4M̅^2+Δ^2-4M̅^2Δ^2/q^2)(I_2-I_2^0)- (3Δ^2-2q^2)I_2+16M̅^2Δ^2I_2^'0] ,[2mm]b_ρρ,2^ff',reg= -N_c Δ^2[I_2-4M̅^2/q^2(I_2-I_2^0)] ,[2mm]b_aa,1^ff',reg=N_c/3[(4M̅^2+Δ^2-4M̅^2Δ^2/q^2)(I_2-I_2^0) -(12M̅^2-2q^2)I_2+16M̅^2Δ^2I_2^'0] ,[2mm]b_aa,2^ff',reg= -4N_c M̅^2[I_2-Δ^2/q^2(I_2-I_2^0)] ,[2mm]b_π a,1^ff',reg= 2i N_c M̅[I_2-Δ^2/q^2(I_2-I_2^0)] . To regularize the vacuum loop integrals I_1f and I_2ff'(q^2) we use the proper time scheme. We get in this wayI_1f^ reg=Λ^2/4π^2 E_2(M_f^2/Λ^2) , I_2ff'^ reg(q^2) = -1/16π^2∫_-1^1 dvE_1(ϕ^ff'(v,q^2)/Λ^2) ,where E_n(x) = ∫_1^∞ dt t^-n exp(-tx) is the exponential integral function. The regularization requires the introduction of a dimensionful parameter Λ, which plays the role of an ultraviolet cutoff.§ USEFUL RELATIONSWe quote here a few relations that are found to be useful in order to obtain the neutral meson polarization functions, see Sec. <ref>. These are <cit.>∫_-1^1 dξ (1-ξ^2) e^z(1-ξ^2)/4=4/z+(1-2/z)∫_-1^1 dξ e^z(1-ξ^2)/4 , ∫_0^∞ dz/z e^-β z[cosh[ξ z]/sinh[z]-1/z]=β(1-lnβ/2)-ln2π+∑_s=±1lnΓ(β+s ξ+1/2) , ∫_0^∞ dz e^-β z[cosh[ξ z]/sinh[z]-1/z] =lnβ/2-1/2∑_s=±1ψ(β+s ξ+1/2) ,with Reβ> 0. For ξ=1, the last relation leads to∫_0^∞ dz e^-β z[ z-1/z]= lnβ/2-1/β-ψ(β/2) . § POLARIZATION VECTORS §.§ Neutral mesons For arbitrary three-momentum q⃗, a convenient choice for the polarization vectors of neutral mesons isϵ^μ(q⃗,1) =1/√(2) m^(0)_⊥ m^(0)_2⊥[ q_+(E,0,0,q^3)+m^(0)_⊥^2(0,1,i,0)] ϵ^μ(q⃗,2) =1/m^(0)_⊥(q^3,0,0,E) ϵ^μ(q⃗,3) =1/√(2) m m^(0)_2⊥[ q_-(E,0,0,q^3)+q_+^∗ q_-/2(0,1,i,0)+ m^(0)_2⊥^2(0,1,-i,0)] ,where q_±=q^1± i q^2, and we have used the definitionsm^(0)_⊥ = √(m^2+q⃗_⊥^ 2) , m^(0)_2⊥ = √(m^2+q⃗_⊥^ 2/2) ,with q⃗_⊥^ 2 = (q^1)^2+(q^2)^2. One has in this caseE^2=(q^3)^2 + q⃗_⊥^ 2 + m^2 .In addition, as stated in the main text, one can introduce a fourth “longitudinal” polarization vectorϵ^μ(q⃗,L) =1/m (E,q^1,q^2,q^3) .These four polarization vectors satisfyϵ^μ(q⃗,c)^∗ ϵ_μ(q⃗,c') ={[ ζ_cfor c=c'; 0 for c≠ c' ]. ,where ζ_c = -1 for c=1,2,3 and ζ_c = 1 for c=L. Note that for q⃗=0 they reduce to those given by Eqs. (<ref>) and (<ref>). §.§ Charged mesons In the case of charged mesons, for ℓ≥ 1 one finds three linearly independent polarization vectors. A convenient choice isϵ^μ(ℓ,q^3,1) =1/√(2) m_⊥ m_2⊥ [Π_+ (E, 0, 0,q^3)+m_⊥^2 (0, 1, is, 0)] , ϵ^μ(ℓ,q^3,2) =1/m_⊥ (q^3, 0, 0, E) , ϵ^μ(ℓ,q^3,3) =1/√(2) m m_2⊥[Π_-(E,0,0,q^3)+ Π_+^*Π_-/2(0,1,is,0)+m_2⊥^2(0,1,-is,0)] ,where we have used the definitionsm_⊥=√(m^2+(2ℓ+1)B_e) ,m_2⊥=√(m^2+ℓ B_e) , Π_+= -Π^1(ℓ,q_∥)+is Π^2(ℓ,q_∥)=-i√(2(ℓ+1)B_e) , Π_-= -Π^1(ℓ,q_∥)-is Π^2(ℓ,q_∥)=i√(2ℓ B_e) ,with B_Q=|QB|. One hasE^2=(q^3)^2 + (2ℓ +1) B_Q + m^2 ,while the four-vector Π^μ is given in Eq. (<ref>).For ℓ=0, two independent nontrivial transverse polarization vectors can be constructed. A suitable choice isϵ^μ(0,q^3,1) =1/√(2) m_⊥ m_2⊥ [Π_+ (E, 0, 0,q^3)+m_⊥^2 (0, 1, is, 0)] , ϵ^μ(0,q^3,2) =1/m_⊥ (q^3, 0, 0, E) ,where m_⊥, m_2⊥, Π_+ and E are understood to be evaluated at ℓ=0.For ℓ=-1, there is only one nontrivial polarization vector, which can be conveniently written asϵ^μ(-1,q^3,1)=1/√(2)(0, 1, is, 0) . 
Finally, for ℓ≥0 one can also define a “longitudinal” polarization vector that we denote as ϵ^μ(ℓ,q^3,L); it is given byϵ^μ(ℓ,q^3,L) =1/m Π^μ(ℓ,q_∥)|_q^0=E .For ℓ=-1 no longitudinal vector is introduced (notice that Π^μ has been defined only for ℓ≥ 0).In a similar way as in the neutral case, the above polarization vectors satisfyϵ^μ(ℓ,q^3,c)^∗ ϵ_μ(ℓ,q^3,c') ={[ ζ_cfor c=c'; 0 for c≠ c' ]. ,where the indices c and c' run only over the allowed polarizations for the corresponding value of ℓ, while ζ_c is defined below Eq. (<ref>).As stated in Sec. <ref>, for some range of values of the magnetic field one can get m^2<0. In that case, in Eq. (<ref>) one should replace ζ_c→ζ̃_c, where ζ̃_c depends on the value of ℓ. For ℓ=0, ζ̃_c = -1 (+1) for c=L,2 (1).§ REFLECTION SYMMETRY AND BOX STRUCTURE OF 𝐆 MATRICES §.§ Reflection at the plane perpendicular to the magnetic field As well known, the electromagnetic interaction is invariant under a parity transformationx^μ𝒫⟶x_μ .However, it is not easy to deal with this transformation in the presence of an external uniform magnetic field. This is due to the fact that the description of spatial reflections at a plane parallel to the magnetic field requires to choose a gauge. Instead, we can focus on the spatial reflection at the plane perpendicular to the magnetic field, say 𝒫_B̂. Since, as customary (and without losing generality), we choose the axis 3 to be in the direction of the magnetic field, in what follows we denote this transformation by 𝒫_3̂.The transformation 𝒫_3̂ distinguishes between the parallel and perpendicular components of x^μ. Namely,x_∥^μ 𝒫_3̂⟶ x_∥μ ,x_⊥^μ 𝒫_3̂⟶x_⊥^μ .For a plane wave associated to a neutral particle we havee^± i q(𝒫_3̂x) =e^± i (𝒫_3̂q)x ,withq_∥^μ 𝒫_3̂⟶ q_∥μ , q_⊥^μ 𝒫_3̂⟶ q_⊥^μ .The wavefunctions of charged particles can be written in terms of the functions ℱ_Q(x,q̅) discussed in App. <ref>. In this case we haveℱ_Q(𝒫_3̂x,q̅)= ℱ_Q(x,𝒫_3̂q̅),with 𝒫_3̂q̅=(q^0, ℓ, χ,-q^3), independently of the chosen gauge.It is easy to see that the transformation 𝒫_3̂ is equivalent to a parity transformation (denoted by 𝒫) followed by a rotation of angle π around the axis 3, i.e., 𝒫_3̂=R_3̂(π) 𝒫. Therefore, the action of the transformation 𝒫_3̂ on meson fields can be obtain as a combination of these two operations. For sigma and pion mesons we have𝒫_3̂ (x) 𝒫_3̂^-1=(𝒫_3̂x) , b=0,1,2,3 ,𝒫_3̂ (x) 𝒫_3̂^-1= -(𝒫_3̂x) , b=0,1,2,3 ,while for vector and axial vector fields we get𝒫_3̂ ρ_b∥^μ(x) 𝒫_3̂^-1=ρ_b∥μ(𝒫_3̂x) ,𝒫_3̂ ρ_b⊥^μ(x) 𝒫_3̂^-1= ρ_b⊥^μ(𝒫_3̂x) , b=0,1,2,3 ,𝒫_3̂ a_b∥^μ(x) 𝒫_3̂^-1= -a_b∥μ(𝒫_3̂x) ,𝒫_3̂ a_b⊥^μ(x) 𝒫_3̂^-1= -a_b⊥^μ(𝒫_3̂x) , b=0,1,2,3 .We emphasize that Eqs. (<ref>-<ref>) are valid for both neutral and charged mesons.In the case of fermionic fields there is an ambiguity, since one can take a rotation of angle π or -π. One has𝒫_3̂ ψ_f(x) 𝒫_3̂^-1= ±i η_f 𝒫 ψ_f(𝒫_3̂x) ,where 𝒫=Σ^3 γ^0, with Σ^3=iγ^1γ^2. Anyway, since in our calculations quark fields always appear in bilinear operators, we can choose all fermionic phases in such a way that ±i η_f=1. 
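Since 𝒫=Σ^3 γ^0 drives all the transformation rules used in the following, its conjugation properties on the Dirac structures (collected explicitly in a later subsection) are easy to check. A small numerical verification in the standard Dirac representation, included as a sketch:

import numpy as np

I2 = np.eye(2)
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])
s2 = np.array([[0.0, -1.0j], [1.0j, 0.0]])
s3 = np.array([[1.0, 0.0], [0.0, -1.0]])
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g1, g2, g3 = (np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (s1, s2, s3))
g5 = np.block([[0 * I2, I2], [I2, 0 * I2]])

P = (1j * g1 @ g2) @ g0          # P = Sigma^3 gamma^0, Sigma^3 = i g1 g2

assert np.allclose(P.conj().T @ g0 @ P, g0)       # gamma^0 unchanged
assert np.allclose(P.conj().T @ g1 @ P, g1)       # gamma^1 unchanged
assert np.allclose(P.conj().T @ g2 @ P, g2)       # gamma^2 unchanged
assert np.allclose(P.conj().T @ g3 @ P, -g3)      # gamma^3 flips sign
assert np.allclose(P.conj().T @ (1j * g5) @ P, -1j * g5)
print("conjugation rules of P = Sigma^3 gamma^0 verified")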
It is important to notice that the fermion propagator S_f(x,x') satisfiesS_f(𝒫_3̂x,𝒫_3̂x')= 𝒫 S_f(x,x') 𝒫^† .§.§ Particle states under reflection at the plane perpendicular to the magnetic field In terms of creation and annihilation operators, the fields describing neutral scalar and vector mesons can be written ass_b(x) =∫d^3q/(2π)^32E_s [ a_s_b(q⃗) e^-iq x+a_s_b^†(q⃗) e^iq x ] , v_b^μ(x) =∫d^3q/(2π)^32E_v ∑_c=1^3 [ a_v_b(q⃗,c) e^-iq x ϵ^μ(q⃗,c)+ a_v_b^†(q⃗,c) e^iq x ϵ^μ(q⃗,c)^∗ ] ,where q^0=E=√(q⃗^ 2+m^2), s=σ,π and v=ρ,a, while b=0 (b=3) for isoscalar (isovector) states. The polarization vectors ϵ^μ(q⃗,c) are given in Eqs. (<ref>); as stated, we can also define a “longitudinal” polarization ϵ^μ(q⃗,L) given by Eq. (<ref>), which can be obtained from a derivative of the scalar field.In the case of the scalar and pseudoscalar fields, the action of 𝒫_3̂ yieldss_b(𝒫_3̂x) =∫d^3q/(2π)^32E_s[ a_s_b(q⃗) e^-iq 𝒫_3̂x+ a_s_b^†(q⃗) e^iq 𝒫_3̂x] =∫d^3q/(2π)^32E_s [ a_s_b(𝒫_3̂q⃗) e^-iq x+ a_s_b^†(𝒫_3̂q⃗) e^iq x] ,where we have used Eq. (<ref>) followed by a change q^3→-q^3 in the integral. Then, from Eqs. (<ref>) and (<ref>) we conclude𝒫_3̂a_σ_b^†(q⃗) 𝒫_3̂^-1= a_σ_b^†(𝒫_3̂q⃗) , 𝒫_3̂a_π_b^†(q⃗) 𝒫_3̂^-1= -a_π_b^†(𝒫_3̂q⃗) . In the case of vector and axial vector fields, we have to consider the behavior of the polarization vectors under the 𝒫_3̂ transformation. From Eqs. (<ref>) and (<ref>) we haveϵ^μ(𝒫_3̂q⃗,c)={[ ϵ_∥μ(q⃗,c)+ϵ_⊥^μ(q⃗,c)forc=1,3,L ,;-ϵ_∥μ(q⃗,c)forc=2 . ].Using these relations together with Eq. (<ref>) one hasv_b∥^μ(𝒫_3̂x) =∫d^3q/(2π)^32E_v∑_c=1,3,L[ a_v_b(𝒫_3̂q⃗,c) e^-iqx ϵ_∥μ(q⃗,c)+ a_v_b^†(𝒫_3̂q⃗,c) e^iqx ϵ_∥μ(q⃗,c)^∗]+ ∫d^3q/(2π)^32E_v [ -a_v_b(𝒫_3̂q⃗,2) e^-iqx ϵ_∥μ(q⃗,2) -a_v_b^†(𝒫_3̂q⃗,2)e^iqx ϵ_∥μ(q⃗,2)^∗] ,v_b⊥^μ(𝒫_3̂x) =∫d^3q/(2π)^32E_v∑_c=1,3,L[ a_v_b(𝒫_3̂q⃗,c) e^-iqx ϵ_⊥^μ(q⃗,c)+ a_v_b^†(𝒫_3̂q⃗,c)e^iqx ϵ_⊥^μ(q⃗,c)^∗] .This leads a to a different behavior of creation operators depending on the polarization state, namely𝒫_3̂ a_ρ_b^†(q⃗,c) 𝒫_3̂^-1 = a_ρ_b^†(𝒫_3̂q⃗,c) ,𝒫_3̂ a_a_b^†(q⃗,c) 𝒫_3̂^-1 = -a_a_b^†(𝒫_3̂q⃗,c) for c=1,3,L ; 𝒫_3̂ a_ρ_b^†(q⃗,c) 𝒫_3̂^-1 = -a_ρ_b^†(𝒫_3̂q⃗,c) ,𝒫_3̂ a_a_b^†(q⃗,c) 𝒫_3̂^-1 = a_a_b^†(𝒫_3̂q⃗,c) forc=2 . This analysis can be extended to charged scalar and vector mesons. A detailed description of charged meson fields can be found in Ref. <cit.>. Briefly, for s=σ,π and v=ρ,a one has (as in the main text, we consider positively charged mesons)s(x) =_{q̅_E} 1/2E_s [a_s^+(q̆) ℱ_e(x,q̅) +a_s^-(q̆)^† ℱ_-e(x,q̅)^∗] , v^μ(x) =_{q̅_E} 1/2E_v ∑_c=1^3 [a_v^+(q̆,c) W_e^μ(x,q̅,c)+ a_v^-(q̆,c)^† W_-e^μ(x,q̅,c)^∗] ,where q̆=(ℓ,χ,q^3) and W_e^μ(x,q̅,c)=ℝ^μν(x,q̅)ϵ_ν(ℓ,q^3,c), with ℝ^μν(x,q̅) and ϵ^μ(ℓ,q^3,c) given by Eqs. (<ref>) and (<ref>), respectively. We have also used the notation_{q̅_E} ≡ _q̅2 π δ(q^0-E) ,where E=√(m^2+B_e(2ℓ+1)+(q^3)^2). As stated, for ℓ≥ 0 one can also define a “longitudinal” polarization vector, given by Eq. (<ref>). Taking into account Eq. (<ref>) and the explicit forms of the polarization vectors, one can show the relationsW^μ(𝒫_3̂x,q̅,c)= {[ W_∥μ(x,𝒫_3̂q̅,c)+W_⊥^μ(x,𝒫_3̂q̅,c)forc=1,3,L ,;-W_∥μ(x,𝒫_3̂q̅,c)forc=2 . ].Taking into account these equations together with Eq. (<ref>) for the case of scalar and pseudoscalar particles, we obtain𝒫_3̂ a_σ^Q † (q̆)𝒫_3̂^-1 = a_σ^Q †(𝒫_3̂q̆) ,𝒫_3̂a_π^Q †(q̆)𝒫_3̂^-1 = -a_π^Q †(𝒫_3̂q̆) ; 𝒫_3̂a_ρ^Q †(q̆,c)𝒫_3̂^-1 = a_ρ^Q †(𝒫_3̂q̆,c) , 𝒫_3̂a_a^Q †(q̆,c)𝒫_3̂^-1 = -a_a^Q †(𝒫_3̂q̆,c)forc=1,3,L ; 𝒫_3̂a_ρ^Q †(q̆,c)𝒫_3̂^-1 = -a_ρ^Q †(𝒫_3̂q̆,c) , 𝒫_3̂a_a^Q †(q̆,c)𝒫_3̂^-1 = a_a^Q †(𝒫_3̂q̆,c)forc=2 . 
These transformation laws indicate how meson states transform under 𝒫_3̂, namely𝒫_3̂ |M(q)⟩= 𝒫_3̂ a_M^†(q⃗)|0⟩= η_𝒫_3^M |M(𝒫_3̂q)⟩for neutral mesons , 𝒫_3̂|M(q̅)⟩= 𝒫_3̂ a_M^†(q̆)|0⟩= η_𝒫_3^M |M(𝒫_3̂q̅)⟩for charged mesons ,withη_𝒫_3^M={[1for M=, ρ_b,1, ρ_b,3, ρ_b,L, a_b,2 ,; -1for M=, a_b,1, a_b,3, a_b,L, ρ_b,2 . ].Here the index b runs from 0 to 3, covering both charged and neutral mesons.The fact that our system is invariant under the reflection in the plane perpendicular to the magnetic field implies that particles with different parity phase η_𝒫_3^M cannot mix. §.§ Box structure of meson mass matrices We outline here how the previous assertion is realized in our model. The masses of charged and neutral mesons are obtained by equations of the form G=0, where G_MM'= (2g_M)^-1 δ_MM'-J_MM'. From Eqs. (<ref>), (<ref>) and (<ref>-<ref>), it is seen that the matrices J can be written in terms of the functionsΣ_MM'^ff'(q) = -i N_c ∫d^4p/(2π)^4 _D[i S̃^f(p_∥^+,p_⊥^+) Γ^M'i S̃^f'(p_∥^-,p_⊥^-) Γ^M] ,[notice that for charged particles 𝒥_MM'(q) = 2 Σ_MM'^ud(q), see Eq. (<ref>)]. Now, if the system is invariant under a reflection at the plane perpendicular to the axis 3, the solutions of G=0 should be invariant under the change q→𝒫_3̂q and q̅→𝒫_3̂q̅ for neutral and charged mesons, respectively. Performing such a transformation on the functions Σ_MM'^ff'(q_∥,q_⊥) one hasΣ_MM'^ff'(𝒫_3̂q) =-i N_c ∫d^4p/(2π)^4_D[i S̃^f(𝒫_3̂p_∥^+,p_⊥^+) Γ^M'i S̃^f'(𝒫_3̂p_∥^-,p_⊥^-) Γ^M] ,where a change p^3→ -p^3 has been performed in the integral. Taking into account the result in Eq. (<ref>) we getΣ_MM'^ff'(𝒫_3̂q) =-i N_c ∫d^4p/(2π)^4_D[i S̃^f(p_∥^+,p_⊥^+) Γ̅^M'i S̃^f'(p_∥^-,p_⊥^-) Γ̅^M] ,where we have definedΓ̅^M = 𝒫^† Γ^M 𝒫 .For the cases of our interest we have𝒫^† 1 𝒫 = 1 , 𝒫^† γ^μ 𝒫 = γ_∥μ+γ_⊥^μ , 𝒫^† iγ^5 𝒫 = -iγ^5 , 𝒫^† γ^μγ^5 𝒫 = -(γ_∥μ+γ_⊥^μ)γ^5 .For neutral and charged mesons, the above changes are complemented by the transformations of the polarization vectors and the functions W_Q^μ(x,q̅,c), respectively [see Eqs. (<ref>) and (<ref>)]. In this way it is easy to see that for η_𝒫_3^M≠η_𝒫_3^M' one has Σ_MM'^ff'(𝒫_3̂q) = -Σ_MM'^ff'(q), and consequently Σ_MM'^ff'(q)=0 and J_MM'=0.§ FUNCTIONS F_Q(X,Q̅) IN STANDARD GAUGESIn this appendix we quote the expressions for the functions (x,q̅) in the standard gauges SG, LG1 and LG2. As in the main text, we choose the axis 3 in the direction of the magnetic field, and use the notation B_Q=|Q B|, s=(QB).It is worth pointing out that the functions (x,q̅) can be determined up to a global phase, which in general can depend on ℓ. In the following expressions for SG, LG1 and LG2 the corresponding phases have been fixed by requiring (x,q̅) to satisfy Eqs. (<ref>) and (<ref>), with f_Q,ℓℓ '(t_⊥) given by Eq. (<ref>). §.§ Symmetric gauge In the SG we take χ=n, where n is a nonnegative integer. Thus, the set of quantum numbers used to characterize a given particle state is q̅=(q^0,ℓ,n,q^3). In addition, we introduce polar coordinates r,φ to denote the vector x⃗_⊥=(x^1,x^2) that lies in the plane perpendicular to the magnetic field. The functions F_Q(x,q̅) in this gauge are given by(x,q̅)^ (SG) = √(2π) e^-i(q^0 x^0-q^3x^3)e^-is(ℓ-n)φ R_ℓ,n(r) ,whereR_ℓ,n(r) = N_ℓ,n ξ^(ℓ-n)/2 e^-ξ/2 L_n^ℓ-n(ξ) ,with ξ=B_Q r^2/2. Here we have used the definition N_ℓ,n=(B_Q n!/ℓ!)^1/2, while L_j^m(x) are generalized Laguerre polynomials. §.§ Landau gauges LG1 and LG2 For the gauges LG1 and LG2 we take χ=q^j with j=1 and j=2, respectively. Thus, we have q̅=(q^0,ℓ,q^j,q^3). 
The corresponding functions (x,q̅) are given by(x,q̅)^ (LG1)= (-is)^ℓ N_ℓ e^-i(q^0 x^0-q^1x^1-q^3x^3) D_ℓ(ρ_s^(1)) , (x,q̅)^ (LG2)= N_ℓ e^-i(q^0 x^0-q^2x^2-q^3x^3) D_ℓ(ρ_s^(2)) ,where ρ_s^(1)=√(2B_Q) (x^2+s q^1/B_Q), ρ_s^(2)=√(2B_Q) (x^1-s q^2/B_Q) and N_ℓ=(4π B_Q)^1/4/√(ℓ!). The cylindrical parabolic functions D_ℓ(x) in the above equations are defined asD_ℓ(x)=2^-ℓ/2 e^-x^2/4 H_ℓ(x/√(2)) ,where H_ℓ(x) are Hermite polynomials, with the standard convention H_-1(x)=0.§ CHARGED MESON POLARIZATION FUNCTIONSWe quote here our results for the polarization functions of charged mesons. Starting from Eq. (<ref>) and using Eqs. (<ref>) we getJ_ss'(q̅,q̅') =∫d^4t/(2π)^4J_ss'(t)h_e(q̅,q̅',t) , J_sv^μ^μ(q̅,q̅') =∫d^4t/(2π)^4 ∑_λ J_sv^α^α(t)(Υ_λ)_α^ μ h_e(q̅,q̅'_λ,t) ,J_v^μs^μ(q̅,q̅') =∫d^4t/(2π)^4 ∑_λ(Υ_λ)^μ_ αJ_v^αs^α(t) h_e(q̅_λ,q̅',t) ,J_v^μv'^ν^μν(q̅,q̅') =∫d^4t/(2π)^4 ∑_λ,λ' (Υ_λ)^μ_ αJ_v^αv'^β^αβ(t)(Υ_λ')_β^νh_e(q̅_λ,q̅'_λ',t) ,where s,s'=σ,π and v,v'=ρ,a. Here we have definedh_Q(q̅,q̅',t) =∫ d^4x d^4x'F_Q(x,q̅)^ ∗F_Q(x',q̅') e^iΦ_Q(x,x') e^-it(x-x') .As shown in Ref. <cit.>, explicit calculations in any of the standard gauges lead toh_Q(q̅,q̅',t) =δ_χχ'(2π)^4 δ^(2)(q_∥-q_∥^')(2π)^2 δ^(2)(q_∥-t_∥) f_Q,ℓℓ '(t_⊥) ,where δ_χχ' stands for δ_nn', δ(q^1-q^'1) and δ(q^2-q^'2) for SG, LG1 and LG2, respectively, whilef_Q,ℓℓ '(t_⊥)= 4π(-i)^ℓ+ℓ'/B_Q √(ℓ!/ℓ^'!) (2 t⃗_⊥^ 2/B_Q)^ℓ'-ℓ/2 L_ℓ^ℓ^'-ℓ(2 t⃗_⊥^ 2/B_Q)e^-t⃗_⊥^ 2/B_Q e^is(ℓ-ℓ')φ_⊥ .We recall that here B_Q=|QB| and s=(QB). Since we are considering positively charged mesons, we have Q=e, e being the proton charge. This implies that B_Q=e|B| and s=(B) for all considered mesons.As mentioned in the main text, using Eqs. (<ref>) and (<ref>), and after a somewhat long but straightforward calculation, one can show thatJ_MM'(q̅,q̅')= (2π)^4δ(q^0-q^' 0) δ_ℓℓ' δ_χχ' δ(q^3-q^' 3)J_MM'(ℓ,q_∥) ,where the function J_MM'(ℓ,q_∥) can be written in general asJ_MM'(ℓ,q_∥)=∑_i=1^n_mm'd_mm',i(ℓ,q_∥^2)ℙ_MM'^(i)(Π) .Here, m(m')=π,ρ,a correspond to M(M')=π,ρ^μ,a^μ. The Lorentz structure is carried out by the set of functions ℙ_MM'^(i)(Π), where the four-vector Π^μ is given byΠ^μ =(q^0,i√(B_M/2)(√(ℓ+1)-√(ℓ)), -s√(B_M/2)(√(ℓ+1)+√(ℓ)),q^3) .In turn, the coefficients d_mm',i(ℓ,q_∥^2) can be expressed asd_mm',i(ℓ,q_∥^2)=N_c/4π^2∫_0^∞dz ∫_-1^1dξ e^-zϕ^ud(ξ,q_∥^2)/α_+(α_-/α_+)^ℓβ_mm',i(ℓ,q_∥^2,ξ,z) ,where ϕ^ff'(ξ,q^2) is given by Eq. (<ref>), and we have introduced the definitionst_u=tanh[(1-ξ)zB_u/2] , t_d=tanh[(1+ξ)zB_d/2] ,together withα_± =t_u/B_u+t_d/B_d± B_et_u/B_ut_d/B_d . The terms of the sum in Eq. (<ref>) for each MM', as well as the explicit form of the corresponding functions β_mm',i(ℓ,q_∥^2,ξ,z) are listed in what follows. Notice that the number of terms, n_mm', depends on the mm' combination. In addition, for ℓ=0 and ℓ=-1 some of the coefficients d_mm',i(ℓ,q_∥^2) are zero; therefore, for each function β_mm',i(ℓ,q_∥^2,ξ,z) we explicitly indicate the range of values of ℓ to be taken into account. For brevity, the arguments of d_mm',i and β_mm',i are omitted.For the ππ polarization function one has only a scalar contribution, i.e., n_ππ=1. Thus,J_ππ(ℓ,q_∥) = d_ππ,1 ;the corresponding function β_mm',1 is given byβ_ππ,1= (1-t_ut_d)[M_uM_d+1/z+(1-ξ^2)q_∥^2/4]+(1-t_u^2)(1-t_d^2) α_-+ℓ(α_--α_+)/α_+α_- , [ ℓ≥ 0 ] . 
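Before listing the remaining polarization functions, we note that the overlap kernel f_Q,ℓℓ'(t_⊥) quoted above is straightforward to implement. A minimal Python sketch, written under the assumption ℓ'≥ℓ (the opposite ordering follows from the analogous Laguerre identity), with illustrative inputs:

import numpy as np
from scipy.special import eval_genlaguerre, factorial

def overlap_kernel(l, lp, t1, t2, BQ, s=1):
    # f_{Q,l l'}(t_perp), with t_perp = (t1, t2), B_Q = |Q B|, s = sgn(Q B)
    assert lp >= l, "this sketch assumes l' >= l"
    t2perp = t1**2 + t2**2
    phi = np.arctan2(t2, t1)                   # azimuthal angle phi_perp
    x = 2.0 * t2perp / BQ
    pref = (4.0 * np.pi * (-1j)**(l + lp) / BQ
            * np.sqrt(factorial(l) / factorial(lp)))
    return (pref * x**((lp - l) / 2.0) * eval_genlaguerre(l, lp - l, x)
            * np.exp(-t2perp / BQ) * np.exp(1j * s * (l - lp) * phi))

# check: f_{Q,00}(t_perp) reduces to (4 pi / B_Q) exp(-t_perp^2 / B_Q)
print(overlap_kernel(0, 0, 0.3, 0.4, BQ=1.0))   # approx 4*pi*exp(-0.25)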
For the ρ^μρ^ν polarization function we find 7 terms, namelyJ_ρ^μρ^ν^μν(ℓ,q_∥) = d_ρρ,1 η_∥^μν+d_ρρ,2 η_⊥^μν+d_ρρ,3 Π_∥^μ Π_∥^ν *+ d_ρρ,4 Π_⊥^μ Π_⊥^ν *+d_ρρ,5(Π_∥^μ Π_⊥^ν *+Π_⊥^μ Π_∥^ν *)-d_ρρ,6 is F̂^μν+d_ρρ,7 is(F̂^μ_ α Π_⊥^α Π_∥^ν *+Π_∥^μ Π_⊥^α *F̂_α^ ν) ;the corresponding functions β_ρρ,i areβ_ρρ,1 = ψ_1^+ ,β_ρρ,2 = ψ_2^++ψ_3^++(2ℓ+1) ψ_4 , β_ρρ,3 = ψ_5 ,β_ρρ,4 = 2 ψ_4/B_e , β_ρρ,5 = ψ_6^++ψ_7^+ ,β_ρρ,6 = ψ_2^+-ψ_3^++ ψ_4 ,β_ρρ,7 = - ψ_6^++ψ_7^+ ,whereψ_1^±= -(1-t_ut_d)[± M_uM_d+(1-ξ^2)q_∥^2/4]- α_-+ℓ(α_--α_+)/α_+α_- (1-t_u^2)(1-t_d^2) ,[ ℓ≥ 0 ] , ψ_2^±= -1/2 α_-/α_+(1+t_u) (1+t_d) [± M_uM_d+1/z+(1-ξ^2)q_∥^2/4] , [ ℓ≥-1 ] , ψ_3^±= -1/2 α_+/α_- (1-t_u) (1-t_d) [± M_uM_d+1/z+(1-ξ^2)q_∥^2/4] , [ ℓ≥ 1 ] , ψ_4=α_+-α_-/2α_+α_- (1-t_u^2)(1-t_d^2) , [ ℓ≥ 1 ] , ψ_5=1-ξ^2/2 (1-t_ut_d) , [ ℓ≥ 0 ] , ψ_6^±=1/2α_+[1+ξ/2 t_u (1+t_u) (1-t_d^2)/B_u±1-ξ/2 t_d(1+t_d) (1-t_u^2)/B_d] , [ ℓ≥ 0] , ψ_7^±=1/2α_-[1+ξ/2 t_u (1-t_u) (1-t_d^2)/B_u±1-ξ/2 t_d (1-t_d) (1-t_u^2)/B_d] ,[ ℓ≥ 1 ] . For the a^μa^ν polarization function we haveJ_a^μa^ν^μν(ℓ,q_∥) = d_aa,1 η_∥^μν+ d_aa,2 η_⊥^μν+d_aa,3 Π_∥^μ Π_∥^ν *+ d_aa,4 Π_⊥^μ Π_⊥^ν *+d_aa,5(Π_∥^μ Π_⊥^ν *+Π_⊥^μ Π_∥^ν*)- d_aa,6 is F̂^μν+d_aa,7 is(F̂^μ_ α Π^α_⊥ Π_∥^ν *+Π_∥^μ Π_⊥^α * F̂_α^ ν) ;the functions β_aa,i are in this case given byβ_aa,1=ψ_1^- ,β_aa,2=ψ_2^-+ψ_3^-+(2ℓ+1) ψ_4 , β_aa,3=ψ_5,β_aa,4=2 ψ_4/B_e , β_aa,5=ψ_6^++ψ_7^+ ,β_aa,6=ψ_2^--ψ_3^-+ ψ_4 ,β_aa,7=- ψ_6^++ψ_7^+ . For the πρ^μ and ρ^μπ polarization functions we obtainJ_πρ^μ^μ(ℓ,q_∥) = J_ρ^μπ^μ(ℓ,q_∥)^ ∗=d_πρ,1s F̂̃̂^μ_ α Π_∥^α * ;the function β_πρ,1 readsβ_πρ,1 =-i/2(t_u-t_d)[(M_u+M_d)-ξ(M_u-M_d)] , [ ℓ≥ 0 ] . For the π a^μ and a^μπ polarization functions we haveJ_π a^μ^μ(ℓ,q_∥) = J_a^μπ^μ(ℓ,q_∥)^ ∗=d_π a,1 Π_∥^μ ∗+d_π a,2 Π_⊥^μ ∗- d_π a,3is F̂^μ_ α Π_⊥^α ∗ ;the functions β_π a,i are given byβ_π a,1 = - i/2 (1-t_ut_d)[(M_u+M_d)-ξ(M_u-M_d)] , [ ℓ≥ 0 ] ,β_π a,2 = ψ_8+ψ_9 , β_π a,3 = - ψ_8+ψ_9 ,whereψ_8= -i/2α_+ [M_u t_u (1+t_u)(1-t_d^2)/B_u+M_d t_d (1+t_d) (1-t_u^2)/B_d] , [ ℓ≥ 0 ] , ψ_9= -i/2α_- [M_u t_u (1-t_u)(1-t_d^2)/B_u+M_d t_d (1-t_d) (1-t_u^2)/B_d] , [ ℓ≥ 1 ] . Finally, for the a^μρ^ν and ρ^μa^ν polarization functions we getJ_a^μρ^ν^μν(ℓ,q_∥) =J_ρ^νa^μ^νμ(ℓ,q_∥)^ * = d_aρ,1 s F̂̃̂^μν+ d_aρ,2 s(F̂̃̂^μ_ α Π_∥^α Π_∥^ν *- Π_∥^μ Π_∥^ α *F̂̃̂_α^ ν)+ d_aρ,3 s(F̂̃̂^μ_ α Π_∥^α Π_⊥^ν *-Π_⊥^μ Π_∥^α *F̂̃̂_α^ ν)+ d_aρ,4 i(F̂̃̂^μ_ α Π_∥^α Π_⊥^β * F̂_β^ ν-F̂^μ_ α Π_⊥^α Π_∥^β * F̂̃̂_β^ ν) ;the corresponding coefficients β_aρ,i areβ_aρ,1 = -M_uM_d(t_u-t_d) , [ ℓ≥ 0 ] , β_aρ,2 = 1-ξ^2/4(t_u-t_d) , [ ℓ≥ 0 ] , β_aρ,3 = ψ_6^--ψ_7^- ,β_aρ,4 = - ψ_6^--ψ_7^- . § MATRIX ELEMENTS OF 𝐉^ MAG(ℓ,M^2) FOR ℓ=0 In this appendix we list the elements of the matrix 𝐉^ mag(ℓ,m^2) for ℓ=0, i.e., the case considered in Eq. (<ref>). The expressions are given in terms of the coefficients b_mm',i^ud,unreg(q^2) given in App. <ref> and the coefficients d_mm',i(ℓ,q_∥^2) quoted in App. <ref>. 
In the expressions below, it is understood that they are evaluated at q^2=m^2 and (ℓ,q_∥^2)=(0,m^2+B_e), respectively.We obtain (for m^2>0)𝐉_ππ^ mag= d_ππ,1-2 b_ππ,1^ud,unreg , 𝐉_πρ_2^ mag=𝐉_ρ_2π^ mag^ *=-s m_⊥d_πρ,1 , 𝐉_π a_L^ mag=𝐉_a_Lπ^ mag^ *= 1/m(m_⊥^2d_π a,1-2B_ed_π a,2-2 m^2 b_π a,1^ud,unreg), 𝐉_π a_1^ mag=𝐉_a_1π^ mag^ *=-i √(B_e) m_⊥/m(d_π a,1-2 d_π a,2) ,𝐉_ρ_2ρ_2^ mag= - d_ρρ,1+2 b_ρρ,1^ud,unreg , 𝐉_a_Lρ_2^ mag=𝐉_ρ_2a_L^ mag^*= -s m_⊥/m(- d_aρ,1+m_⊥^2d_aρ,2-2B_ed_aρ,3) , 𝐉_a_1ρ_2^ mag=𝐉_ρ_2a_1^ mag^*= -i s √(B_e)/m(- d_aρ,1+m_⊥^2d_aρ,2-2m_⊥^2d_aρ,3) ,𝐉_a_La_L^ mag=1/m^2(m_⊥^2 d_aa,1-2 B_e d_aa,2+m_⊥^4 d_aa,3- 4 m_⊥^2 B_e d_aa,5-2 m^2 b_aa,2^ud,unreg ) , 𝐉_a_La_1^ mag=𝐉_a_1a_L^ mag^*= -i√(B_e) m_⊥/m^2(d_aa,1-2d_aa,2+m_⊥^2d_aa,3-2 (m_⊥^2+B_e)d_aa,5) ,𝐉_a_1a_1^ mag=B_e/m^2(d_aa,1- 2 m_⊥^2/B_ed_aa,2+ m_⊥^2d_aa,3-4m_⊥^2d_aa,5+ 2 m^2/B_e b_aa,1^ud,unreg) . 1 Kharzeev:2012ph D. E. Kharzeev, K. Landsteiner, A. Schmitt and H. U. Yee,Lect. Notes Phys. 871, 1-11 (2013) [https://arxiv.org/abs/1211.6245arXiv:1211.6245 [hep-ph]].Andersen:2014xxa J. O. Andersen, W. R. Naylor and A. Tranberg,Rev. Mod. Phys. 88, 025001 (2016) [https://arxiv.org/abs/1411.7176arXiv:1411.7176 [hep-ph]].Miransky:2015ava V. A. Miransky and I. A. Shovkovy,Phys. Rept. 576, 1-209 (2015) [https://arxiv.org/abs/1503.00732arXiv:1503.00732 [hep-ph]].Vachaspati:1991nm T. Vachaspati,Phys. Lett. B 265, 258-261 (1991) Grasso:2000wj D. Grasso and H. R. Rubinstein,Phys. Rept. 348, 163-266 (2001) [https://arxiv.org/abs/astro-ph/0009061astro-ph/0009061 [astro-ph]].Skokov:2009qp V. Skokov, A. Y. Illarionov and V. Toneev,Int. J. Mod. Phys. A 24, 5925-5932 (2009) [https://arxiv.org/abs/0907.1396arXiv:0907.1396 [nucl-th]].Voronyuk:2011jd V. Voronyuk, V. D. Toneev, W. Cassing, E. L. Bratkovskaya, V. P. Konchakovski and S. A. Voloshin,Phys. Rev. C 83, 054911 (2011) [https://arxiv.org/abs/1103.4239arXiv:1103.4239 [nucl-th]].Duncan:1992hi R. C. Duncan and C. Thompson,Astrophys. J. Lett. 392, L9 (1992). Kouveliotou:1998ze C. Kouveliotouet al.,Nature 393, 235-237 (1998). Kharzeev:2007jp D. E. Kharzeev, L. D. McLerran and H. J. Warringa,Nucl. Phys. A 803, 227-253 (2008) [https://arxiv.org/abs/0711.0950arXiv:0711.0950 [hep-ph]].Fukushima:2008xe K. Fukushima, D. E. Kharzeev and H. J. Warringa,Phys. Rev. D 78, 074033 (2008) [https://arxiv.org/abs/0808.3382arXiv:0808.3382 [hep-ph]].Kharzeev:2015znc D. E. Kharzeev, J. Liao, S. A. Voloshin and G. Wang,Prog. Part. Nucl. Phys. 88, 1-28 (2016) [https://arxiv.org/abs/1511.04050arXiv:1511.04050 [hep-ph]].Gusynin:1995nb V. P. Gusynin, V. A. Miransky and I. A. Shovkovy,Nucl. Phys. B 462, 249-290 (1996) [https://arxiv.org/abs/hep-ph/9509320arXiv:hep-ph/9509320].Bali:2011qj G. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz, S. Krieg, A. Schafer and K. K. Szabo,JHEP 02, 044 (2012) [https://arxiv.org/abs/1111.4956arXiv:1111.4956 [hep-lat]].Andersen:2021lnk J. O. Andersen,Eur. Phys. J. A 57, 189 (2021)[https://arxiv.org/abs/2102.13165arXiv:2102.13165 [hep-ph]].Fayazbakhsh:2012vr S. Fayazbakhsh, S. Sadeghian and N. Sadooghi,Phys. Rev. D 86, 085042 (2012) [https://arxiv.org/abs/1206.6051arXiv:1206.6051 [hep-ph]].Fayazbakhsh:2013cha S. Fayazbakhsh and N. Sadooghi,Phys. Rev. D 88, 065030 (2013)[https://arxiv.org/abs/1306.2098arXiv:1306.2098 [hep-ph]].Liu:2014uwa H. Liu, L. Yu and M. Huang,Phys. Rev. D 91, 014017 (2015)[https://arxiv.org/abs/1408.1318arXiv:1408.1318 [hep-ph]].Avancini:2015ady S. S. Avancini, W. R. Tavares and M. B. Pinto,Phys. Rev. 
D 93, 014010 (2016)[https://arxiv.org/abs/1511.06261arXiv:1511.06261 [hep-ph]].Zhang:2016qrl R. Zhang, W. j. Fu and Y. x. Liu,Eur. Phys. J. C 76, 307 (2016)[https://arxiv.org/abs/1604.08888arXiv:1604.08888 [hep-ph]].Avancini:2016fgq S. S. Avancini, R. L. S. Farias, M. Benghi Pinto, W. R. Tavares and V. S. Timóteo,Phys. Lett. B 767, 247-252 (2017) [https://arxiv.org/abs/1606.05754arXiv:1606.05754 [hep-ph]].Mao:2017wmq S. Mao and Y. Wang,Phys. Rev. D 96, 034004 (2017)[https://arxiv.org/abs/1702.04868arXiv:1702.04868 [hep-ph]].GomezDumm:2017jij D. Gómez Dumm, M. F. Izzo Villafañe and N. N. Scoccola,Phys. Rev. D 97, 034025 (2018)[https://arxiv.org/abs/1710.08950arXiv:1710.08950 [hep-ph]].Wang:2017vtn Z. Wang and P. Zhuang,Phys. Rev. D 97, 034026 (2018)[https://arxiv.org/abs/1712.00554arXiv:1712.00554 [hep-ph]].Liu:2018zag H. Liu, X. Wang, L. Yu and M. Huang,Phys. Rev. D 97, 076008 (2018) [https://arxiv.org/abs/1801.02174arXiv:1801.02174 [hep-ph]].Coppola:2018vkw M. Coppola, D. Gomez Dumm and N. N. Scoccola,Phys. Lett. B 782, 155-161 (2018) [https://arxiv.org/abs/1802.08041arXiv:1802.08041 [hep-ph]].Mao:2018dqe S. Mao,Phys. Rev. D 99, 056005 (2019)[https://arxiv.org/abs/1808.10242arXiv:1808.10242 [nucl-th]].Avancini:2018svs S. S. Avancini, R. L. S. Farias and W. R. Tavares,Phys. Rev. D 99, 056009 (2019)[https://arxiv.org/abs/1812.00945arXiv:1812.00945 [hep-ph]].Coppola:2019uyr M. Coppola, D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 100, 054014 (2019)[https://arxiv.org/abs/1907.05840arXiv:1907.05840 [hep-ph]].Cao:2019res G. Cao,Phys. Rev. D 100, 074024 (2019)[https://arxiv.org/abs/1906.01398arXiv:1906.01398 [nucl-th]].Cao:2021rwx G. Cao,Eur. Phys. J. A 57, 264 (2021)[https://arxiv.org/abs/2103.00456arXiv:2103.00456 [hep-ph]].Sheng:2021evj B. k. Sheng, X. Wang and L. Yu,Phys. Rev. D 105, 034003 (2022)[https://arxiv.org/abs/2110.12811arXiv:2110.12811 [hep-ph]].Avancini:2021pmi S. S. Avancini, M. Coppola, N. N. Scoccola and J. C. Sodré,Phys. Rev. D 104, 094040 (2021)[https://arxiv.org/abs/2109.01911arXiv:2109.01911 [hep-ph]].Xu:2020yag K. Xu, J. Chao and M. Huang,Phys. Rev. D 103, 076015 (2021)[https://arxiv.org/abs/2007.13122arXiv:2007.13122 [hep-ph]].Lin:2022ied F. Lin, K. Xu and M. Huang,Phys. Rev. D 106, 016005 (2022)[https://arxiv.org/abs/2202.03226arXiv:2202.03226 [hep-ph]].Kamikado:2013pya K. Kamikado and T. Kanazawa,JHEP 03, 009 (2014) [https://arxiv.org/abs/1312.3124arXiv:1312.3124 [hep-ph]].Ayala:2018zat A. Ayala, R. L. S. Farias, S. Hernández-Ortiz, L. A. Hernández, D. M. Paret and R. Zamora,Phys. Rev. D 98, 114008 (2018)[https://arxiv.org/abs/1809.08312arXiv:1809.08312 [hep-ph]].Ayala:2020dxs A. Ayala, J. L. Hernández, L. A. Hernández, R. L. S. Farias and R. Zamora,Phys. Rev. D 103, 054038 (2021)[https://arxiv.org/abs/2011.03673arXiv:2011.03673 [hep-ph]].Das:2019ehv A. Das and N. Haque,Phys. Rev. D 101,074033 (2020)[https://arxiv.org/abs/1908.10323arXiv:1908.10323 [hep-ph]].Wen:2023qcz R. Wen, S. Yin, W. j. Fu and M. Huang,Phys. Rev. D 108, 076020 (2023)[https://arxiv.org/abs/2306.04045arXiv:2306.04045 [hep-ph]].Agasian:2001ym N. O. Agasian and I. A. Shushpanov,JHEP 10, 006 (2001) [https://arxiv.org/abs/hep-ph/0107128arXiv:hep-ph/0107128 [hep-ph]].Andersen:2012zc J. O. Andersen,JHEP 10, 005 (2012) [https://arxiv.org/abs/1205.6978arXiv:1205.6978 [hep-ph]].Colucci:2013zoa G. Colucci, E. S. Fraga and A. Sedrakian,Phys. Lett. B 728, 19-24 (2014) [https://arxiv.org/abs/1310.3742arXiv:1310.3742 [nucl-th]].Orlovsky:2013gha V. D. Orlovsky and Y. A. 
Simonov,JHEP 09, 136 (2013) [https://arxiv.org/abs/1306.2232arXiv:1306.2232 [hep-ph]].Andreichikov:2016ayj M. A. Andreichikov, B. O. Kerbikov, E. V. Luschevskaya, Y. A. Simonov and O. E. Solovjeva,JHEP 05, 007 (2017) [https://arxiv.org/abs/1610.06887arXiv:1610.06887 [hep-ph]].Simonov:2015xta Y. A. Simonov,Phys. Atom. Nucl. 79, 455-460 (2016)[https://arxiv.org/abs/1503.06616arXiv:1503.06616 [hep-ph]].Andreichikov:2018wrc M. A. Andreichikov and Y. A. Simonov,Eur. Phys. J. C 78, 902 (2018) [https://arxiv.org/abs/1805.11896arXiv:1805.11896 [hep-ph]].Dominguez:2018njv C. A. Dominguez, M. Loewe and C. Villavicencio,Phys. Rev. D 98, 034015 (2018)[https://arxiv.org/abs/1806.10088arXiv:1806.10088 [hep-ph]].Luschevskaya:2014lga E. V. Luschevskaya, O. E. Solovjeva, O. A. Kochetkov and O. V. Teryaev,Nucl. Phys. B 898, 627-643 (2015) [https://arxiv.org/abs/1411.4284arXiv:1411.4284 [hep-lat]].Luschevskaya:2015bea E. V. Luschevskaya, O. A. Kochetkov, O. V. Teryaev and O. E. Solovjeva,JETP Lett. 101, 674-678 (2015).Bali:2017ian G. S. Bali, B. B. Brandt, G. Endrődi and B. Gläßle,Phys. Rev. D 97, 034505 (2018)[https://arxiv.org/abs/1707.05600arXiv:1707.05600 [hep-lat]].Ding:2020hxw H. T. Ding, S. T. Li, A. Tomiya, X. D. Wang and Y. Zhang,Phys. Rev. D 104, 014505 (2021)[https://arxiv.org/abs/2008.00493arXiv:2008.00493 [hep-lat]].Ding:2022tqn H. T. Ding, S. T. Li, J. H. Liu and X. D. Wang,Phys. Rev. D 105, 034514 (2022)[https://arxiv.org/abs/2201.02349arXiv:2201.02349 [hep-lat]]. Kawaguchi:2015gpt M. Kawaguchi and S. Matsuzaki,Phys. Rev. D 93, 125027 (2016)[https://arxiv.org/abs/1511.06990arXiv:1511.06990 [hep-ph]].Chernodub:2011mc M. N. Chernodub,Phys. Rev. Lett. 106, 142003 (2011) [https://arxiv.org/abs/1101.0117arXiv:1101.0117 [hep-ph]].Ghosh:2020qvg S. Ghosh, A. Mukherjee, N. Chaudhuri, P. Roy and S. Sarkar,Phys. Rev. D 101, 056023 (2020)[https://arxiv.org/abs/2003.02024arXiv:2003.02024 [hep-ph]].Ghosh:2016evc S. Ghosh, A. Mukherjee, M. Mandal, S. Sarkar and P. Roy,Phys. Rev. D 94, 094043 (2016)[https://arxiv.org/abs/1612.02966arXiv:1612.02966 [nucl-th]].Avancini:2022qcp S. S. Avancini, R. L. S. Farias, W. R. Tavares and V. S. Timóteo,Nucl. Phys. B 981, 115862 (2022) [https://arxiv.org/abs/2202.03328arXiv:2202.03328 [hep-ph]].Hidaka:2012mz Y. Hidaka and A. Yamamoto,Phys. Rev. D 87, 094502 (2013)[https://arxiv.org/abs/1209.0007arXiv:1209.0007 [hep-ph]].Luschevskaya:2012xd E. V. Luschevskaya and O. V. Larina,Nucl. Phys. B 884, 1-16 (2014) [https://arxiv.org/abs/1203.5699arXiv:1203.5699 [hep-lat]].Luschevskaya:2016epp E. V. Luschevskaya, O. E. Solovjeva and O. V. Teryaev,JHEP 09, 142 (2017) [https://arxiv.org/abs/1608.03472arXiv:1608.03472 [hep-lat]].Tiburzi:2008ma B. C. Tiburzi,Nucl. Phys. A 814, 74 (2008) [https://arxiv.org/abs/0808.3965arXiv:0808.3965 [hep-ph]].Deshmukh:2017ciw A. Deshmukh and B. C. Tiburzi,Phys. Rev. D 97,014006 (2018) [https://arxiv.org/abs/1709.04997arXiv:1709.04997 [hep-ph]].Taya:2014nha H. Taya,Phys. Rev. D 92, 014038 (2015) [https://arxiv.org/abs/1412.6877arXiv:1412.6877 [hep-ph]].Haber:2014ula A. Haber, F. Preis and A. Schmitt,Phys. Rev. D 90, 125036 (2014) [https://arxiv.org/abs/1409.0425arXiv:1409.0425 [nucl-th]].Mukherjee:2018ebw A. Mukherjee, S. Ghosh, M. Mandal, S. Sarkar and P. Roy,Phys. Rev. D 98, 056024 (2018) [https://arxiv.org/abs/1809.07028arXiv:1809.07028 [hep-ph]].He:2016oqk B. R. He,Phys. Lett. B 765, 109 (2017) [https://arxiv.org/abs/1609.09055arXiv:1609.09055 [hep-ph]].Dominguez:2020sdf C. A. Dominguez, L. A. Hernandez, M. Loewe, C. Villavicencio and R. 
Zamora, [https://arxiv.org/abs/2008.10742arXiv:2008.10742 [hep-ph]].Endrodi:2019whh G. Endrodi and G. Markó,JHEP 08, 036 (2019) [https://arxiv.org/abs/1905.02103arXiv:1905.02103 [hep-lat]]. Coppola:2020mon M. Coppola, D. Gomez Dumm and N. N. Scoccola,Phys. Rev. D 102, 094020 (2020)[https://arxiv.org/abs/2009.14105arXiv:2009.14105 [hep-ph]].Carlomagno:2022inu J. P. Carlomagno, D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 106, 074002 (2022)[https://arxiv.org/abs/2205.15928arXiv:2205.15928 [hep-ph]].Carlomagno:2022arc J. P. Carlomagno, D. Gomez Dumm, M. F. I. Villafañe, S. Noguera and N. N. Scoccola,Phys. Rev. D 106, 094035 (2022)[https://arxiv.org/abs/2209.10679arXiv:2209.10679 [hep-ph]].Vogl:1991qt U. Vogl and W. Weise,Prog. Part. Nucl. Phys. 27, 195-272 (1991). Klevansky:1992qe S. P. Klevansky,Rev. Mod. Phys. 64, 649-708 (1992). Hatsuda:1994pi T. Hatsuda and T. Kunihiro,Phys. Rept. 247, 221-367 (1994) [https://arxiv.org/abs/hep-ph/9401310arXiv:hep-ph/9401310 [hep-ph]]. Klimt:1989pm S. Klimt, M. F. M. Lutz, U. Vogl and W. Weise,Nucl. Phys. A 516, 429-468 (1990). Bijnens:1995ww J. Bijnens,Phys. Rept. 265, 369-446 (1996) [https://arxiv.org/abs/hep-ph/9502335arXiv:hep-ph/9502335 [hep-ph]].Coppola:2018ygv M. Coppola, D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 99, 054031 (2019)[https://arxiv.org/abs/1810.08110arXiv:1810.08110 [hep-ph]].Coppola:2019wvh M. Coppola, D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 101, 034003 (2020)[https://arxiv.org/abs/1910.10814arXiv:1910.10814 [hep-ph]]. GomezDumm:2023owj D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 108, 016012 (2023)[https://arxiv.org/abs/2306.04128arXiv:2306.04128 [hep-ph]].Allen:2015paa P. G. Allen, A. G. Grunfeld and N. N. Scoccola,Phys. Rev. D 92, 074041 (2015)[https://arxiv.org/abs/1508.04724arXiv:1508.04724 [hep-ph]].Avancini:2019wed S. S. Avancini, R. L. S. Farias, N. N. Scoccola and W. R. Tavares,Phys. Rev. D 99, 116002 (2019)[https://arxiv.org/abs/1904.02730arXiv:1904.02730 [hep-ph]].Menezes:2008qt D. P. Menezes, M. Benghi Pinto, S. S. Avancini, A. Perez Martinez and C. Providencia,Phys. Rev. C 79, 035807 (2009) [https://arxiv.org/abs/0811.3361arXiv:0811.3361 [nucl-th]].Miransky:2002rp V. A. Miransky and I. A. Shovkovy,Phys. Rev. D 66, 045006 (2002) [https://arxiv.org/abs/hep-ph/0205348arXiv:hep-ph/0205348 [hep-ph]].Ayala:2014iba A. Ayala, M. Loewe, A. J. Mizher and R. Zamora,Phys. Rev. D 90, 036001 (2014)[https://arxiv.org/abs/1406.3885arXiv:1406.3885 [hep-ph]].Farias:2014eca R. L. S. Farias, K. P. Gomes, G. I. Krein and M. B. Pinto,Phys. Rev. C 90, 025203 (2014)[https://arxiv.org/abs/1404.3931arXiv:1404.3931 [hep-ph]].Ferreira:2014kpa M. Ferreira, P. Costa, O. Lourenço, T. Frederico and C. Providência,Phys. Rev. D 89, 116011 (2014)[https://arxiv.org/abs/1404.5577arXiv:1404.5577 [hep-ph]].Wakamatsu:2022pqo M. Wakamatsu and A. Hayashi,Eur. Phys. J. A 58, 121 (2022)[https://arxiv.org/abs/2202.03592arXiv:2202.03592 [quant-ph]].Pagura:2016pwr V. P. Pagura, D. Gomez Dumm, S. Noguera and N. N. Scoccola,Phys. Rev. D 95, 034013 (2017) [https://arxiv.org/abs/1609.02025arXiv:1609.02025 [hep-ph]].GomezDumm:2017iex D. Gomez Dumm, M. F. Izzo Villafañe, S. Noguera, V. P. Pagura and N. N. Scoccola,Phys. Rev. D 96, 114012 (2017)[https://arxiv.org/abs/1709.04742arXiv:1709.04742 [hep-ph]].Bali:2012zg G. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz and A. Schafer,Phys. Rev. D 86, 071502 (2012) [https://arxiv.org/abs/1206.4205arXiv:1206.4205 [hep-lat]].Borsanyi:2010cj S. Borsanyi, G. 
Endrodi, Z. Fodor, A. Jakovac, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo,JHEP 11, 077 (2010) [https://arxiv.org/abs/1007.2580arXiv:1007.2580 [hep-lat]]. gradshteynI. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products (Academic, London, 1996).
Spectral approximation of ψ-fractional differential equation based on mapped Jacobi functions

Tinggang Zhao^1, Zhenyu Zhao^1, Changpin Li^2, Dongxia Li^2

^1School of Mathematics and Statistics, Shandong University of Technology, Zibo, 255000, China
^2Department of Mathematics, Shanghai University, Shanghai, 200444, China

e-mail: [email protected] (T.Z); [email protected] (Z.Z); [email protected] (C.L); [email protected] (D.L)

Fractional calculus with respect to a function ψ, also known as ψ-fractional calculus, generalizes the Hadamard and the Riemann-Liouville fractional calculi, which poses challenges for numerical treatment. In this paper we study spectral-type methods using mapped Jacobi functions (MJFs) as basis functions and obtain efficient algorithms to solve ψ-fractional differential equations. In particular, we set up the Petrov-Galerkin spectral method and the spectral collocation method for initial and boundary value problems involving ψ-fractional derivatives. We develop basic approximation theory for the MJFs and conduct the error estimates of the derived methods. We also establish a recurrence relation to evaluate the collocation differentiation matrix for implementing the spectral collocation algorithm. Numerical examples confirm the theoretical results and demonstrate the effectiveness of the spectral and collocation methods.

Keywords: Fractional calculus; Spectral method; Mapped Jacobi function; ψ-fractional differential equation.

AMS Subject Classification: 65F60; 65D32; 65M12; 35K55

§ INTRODUCTION

In recent decades, fractional calculus has gained more and more attention due to its extensive applications in various fields of applied science, e.g., mechanics, microchemistry, engineering, biology, and computer science <cit.>. To date, there exist several different definitions of fractional calculus, such as the Grünwald-Letnikov, Riemann-Liouville, Caputo, Riesz, and Hadamard derivatives/integrals <cit.>. Most of the existing work is devoted to issues related to the Riemann-Liouville, Caputo, and Riesz derivatives. Another interesting fractional operator, sometimes collectively called ψ-fractional calculus, is given by fractional integration and differentiation of a function with respect to another function. This class was first proposed and motivated by Osler in <cit.> as a generalization of fractional calculus, and it includes the Hadamard and the classical Riemann-Liouville fractional calculi as special cases. The Hadamard fractional calculus (the case ψ=log(x) of the ψ-fractional calculus), introduced in <cit.>, has been found useful in practical problems related to mechanics and engineering, e.g., the Lomnitz logarithmic creep law of special substances <cit.>. Recently, the Caputo-Hadamard derivative has been applied to studying the dynamic evolution of COVID-19 caused by the Omicron variant via the Caputo-Hadamard fractional SEIR model <cit.>.

The existing numerical methods for the ψ-fractional derivative are far less developed than those for fractional differential equations (FDEs) with the Riemann-Liouville, Caputo, and Riesz derivatives (see <cit.> and the exhaustive list of references therein). Almeida et al.
<cit.> studied the existence and uniqueness of the solution to the initial value problem of ψ-Caputo-type nonlinear FDEs and developed the Picard iteration method for solving the problem numerically. Li et al. <cit.> considered the Caputo-Hadamard fractional partial differential equation with the finite difference method and the local discontinuous Galerkin method. Fan et al. <cit.> proposed numerical formulas for approximating the Caputo-Hadamard fractional derivatives. More recently, Zaky et al. <cit.> applied the logarithmic Jacobi collocation method to Caputo-Hadamard fractional nonlinear ordinary differential equations. Zhao et al. <cit.> presented an efficient spectral collocation method for Caputo-Hadamard FDEs. Fan et al. <cit.> studied the ψ-Caputo derivative and presented several numerical schemes based on the finite difference method to discretize the fractional derivative. On the other hand, Li and Li <cit.> studied the stability of the solution to the Hadamard FDE, where the decay law of the solution is also established. Li and Li <cit.> obtained the finite-time blow-up and global existence of the solution to the Cauchy problem of the semilinear time-space fractional diffusion equation with time ψ-Caputo derivative. Li and Li <cit.> studied the blow-up of the solution to a semilinear time-space fractional diffusion equation, where the time derivative is the Caputo-type derivative with the exponential kernel, called the exponential Caputo derivative.

A series of works on Hadamard-type derivatives <cit.> shows that approximating this kind of fractional derivative naturally relies on logarithmic functions. In fact, applying such an idea to Caputo-type fractional differential equations and Volterra integral equations with weak singularities can yield accurate results <cit.>. In this paper, motivated by the above cited works, we aim to develop high-accuracy spectral and spectral collocation methods that use the mapped Jacobi functions (MJFs) as basis functions for solving fractional differential equations with ψ-fractional derivatives. The main contributions of this work are as follows:

∙ We prove the well-posedness of four kinds of initial and boundary value problems of the ψ-fractional ordinary differential equation.
∙ We define the MJFs and derive approximation results for the orthogonal projection and interpolation based on the MJFs.
∙ We conduct the error estimates of the presented Petrov-Galerkin numerical schemes based on the approximation results.
∙ We derive the differentiation matrix of the ψ-fractional derivative and give a fast and stable evaluation method based on a recurrence relation.
∙ We confirm the theoretical results and demonstrate the effectiveness of the proposed methods by solving initial and boundary value problems.

The paper is organized as follows. In Section <ref>, we recall the definitions and properties of ψ-fractional calculus and present the well-posedness of the initial and boundary value problems involving ψ-fractional derivatives. The definition of the MJFs is given in Section <ref>, where we also establish their properties and approximation results. We analyze the Petrov-Galerkin spectral method and the spectral collocation method in Section <ref>, treating the initial value problems and the boundary value problems in parallel. Numerical experiments, which confirm the theoretical results and demonstrate the effectiveness of the presented approaches, are reported in Section <ref>.
Some conclusions are drawn in Section <ref>.

Throughout this paper, μ is always a positive real number, and a, b are two real numbers satisfying a<b. ψ is an increasing real C^m (m=⌈μ⌉) function on [a,b] such that ψ'>0 for all x∈[a,b]. Denote ψ_a=ψ(a), ψ_b=ψ(b), κ=2/(ψ_b-ψ_a). We use the mapping s(x):=κ(ψ(x)-ψ_a)-1=-κ(ψ_b-ψ(x))+1, x∈[a,b], to map [a,b] to I:=[-1,1]. It is easy to see that ds=κψ'(x) dx. The symbol δ_ψ^n (n a non-negative integer) is used for the generalized differential operator defined by δ^0_ψ f(x)=f(x), δ^1_ψ=(1/ψ'(x)) d/dx, δ^n_ψ=δ^1_ψ(δ^n-1_ψ).

§ PRELIMINARIES

§.§ The ψ-fractional calculus

Let p≥1 and let L^p_ψ(a,b) be the space of Lebesgue measurable functions on the finite interval (a,b) that are p-integrable with respect to the measure dψ, i.e., L^p_ψ(a,b):={u: ∫_a^b |u(x)|^p dψ<∞}. L^p_ψ(a,b) is a Banach space with norm ‖u‖_L^p_ψ=(∫_a^b|u(x)|^p dψ)^1/p. When p=2, the space L^2_ψ(a,b) becomes a Hilbert space with inner product (u,v)_ψ=∫_a^b u(x)v(x) dψ. We also use L^2_ω(a,b) for the weighted Hilbert space with inner product (u,v)_ω=∫_a^b u(x)v(x)ω(x) dx and norm ‖u‖_ω as usual. Denote by C[a,b] all the continuous functions on [a,b] and by AC[a,b] all the absolutely continuous functions on [a,b]. For n∈ℕ^+, we define AC^n_ψ as AC_ψ^n[a,b]:={u:[a,b]→ℝ, δ_ψ^n u(x)∈ AC[a,b]}.

<cit.> The left-sided and right-sided ψ-fractional integrals of a given function f(x) with order μ>0 are defined, respectively, as _ψ D_a,x^-μ f(x)= 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ) dτ, x∈(a,b), _ψ D_x,b^-μ f(x)= 1/Γ(μ)∫_x^b ψ'(τ)(ψ(τ)-ψ(x))^μ-1 f(τ) dτ, x∈(a,b).

In the above definition, the condition f(x)∈ L^1_ψ(a,b) is often presumed; it is a sufficient, though not necessary, condition.

We remark on the choices of ψ in the above definition as follows. The fractional operators _ψ D_a,x^-μ and _ψ D_x,b^-μ given by Definition <ref> generalize several existing fractional integral operators:
∙ ψ(x)=x gives the classical Riemann-Liouville-type fractional integral.
∙ ψ(x)=log(x) gives the Hadamard fractional integral <cit.>.
∙ ψ(x)=exp(x) gives the fractional integral with exponential kernel <cit.>.

<cit.> The following semigroup property is valid: for μ,ν>0, _ψ D^-μ_a,x(_ψ D^-ν_a,x f(x))=_ψ D^-(μ+ν)_a,x f(x), _ψ D^-μ_x,b(_ψ D^-ν_x,b f(x))=_ψ D^-(μ+ν)_x,b f(x).

<cit.> The following results are valid: for μ>0, γ>-1, _ψ D^-μ_a,x(ψ(x)-ψ_a)^γ = (Γ(γ+1)/Γ(γ+μ+1)) (ψ(x)-ψ_a)^γ+μ, _ψ D^-μ_x,b(ψ_b-ψ(x))^γ = (Γ(γ+1)/Γ(γ+μ+1)) (ψ_b-ψ(x))^γ+μ.

<cit.> The left-sided and right-sided ψ-Riemann-Liouville fractional derivatives (simply, ψ-derivatives) of a given function f(x) with order μ>0 are defined for x∈[a,b], respectively, as _ψ D^μ_a,x f(x)= δ^n_ψ[_ψ D^-(n-μ)_a,x f(x)] = (1/Γ(n-μ)) δ^n_ψ∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^n-μ-1 f(τ) dτ, _ψ D^μ_x,b f(x)=(-1)^n δ^n_ψ[_ψ D^-(n-μ)_x,b f(x)] =((-1)^n/Γ(n-μ)) δ^n_ψ∫_x^b ψ'(τ)(ψ(τ)-ψ(x))^n-μ-1 f(τ) dτ, where μ∈(n-1,n), n∈ℕ^+.

<cit.> The left-sided and right-sided ψ-Caputo fractional derivatives of a given function f(x) with order μ∈(n-1,n), n∈ℕ^+, are defined for x∈[a,b], respectively, as _Cψ D^μ_a,x f(x)= _ψ D^-(n-μ)_a,x[δ^n_ψ f(x)] = (1/Γ(n-μ))∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^n-μ-1 δ^n_ψ f(τ) dτ, _Cψ D^μ_x,b f(x)= (-1)^n _ψ D^-(n-μ)_x,b[δ^n_ψ f(x)] =((-1)^n/Γ(n-μ))∫_x^b ψ'(τ)(ψ(τ)-ψ(x))^n-μ-1 δ^n_ψ f(τ) dτ.

In Definitions <ref> and <ref>, the sufficient condition f(x)∈ AC^n_ψ[a,b] is often assumed.

<cit.> The following link between _Cψ D^μ_a,x and _ψ D^μ_a,x holds: _Cψ D^μ_a,x f(x)= _ψ D^μ_a,x[f(x)-∑_k=0^n-1 (δ^k_ψ f(a)/k!)(ψ(x)-ψ_a)^k], x∈[a,b], where μ∈(n-1,n); δ^k_ψ f(a) exists for k=0,1,⋯,n-1 due to f(x)∈ AC^n_ψ[a,b].
Similarly, there holds for x∈[a,b], _Cψ D^μ_x,b f(x)= _ψ D^μ_x,b[f(x)-∑_k=0^n-1 ((-1)^k δ^k_ψ f(b)/k!)(ψ_b-ψ(x))^k].

<cit.> For f∈ C[a,b] and μ>0, there hold _ψ D^μ_a,x(_ψ D^-μ_a,x f(x))=f(x), _ψ D^μ_x,b(_ψ D^-μ_x,b f(x))=f(x), and for f∈ AC_ψ^n[a,b], _ψ D^-μ_a,x(_ψ D^μ_a,x f(x)) =f(x) -∑_k=0^n-1 ([_ψ D^μ-k-1_a,x f](a)/Γ(μ-k))(ψ(x)-ψ_a)^μ-k-1, _ψ D^-μ_x,b(_ψ D^μ_x,b f(x)) =f(x) -∑_k=0^n-1 ([_ψ D^μ-k-1_x,b f](b)/Γ(μ-k))(ψ_b-ψ(x))^μ-k-1.

<cit.> For f∈ C[a,b] and μ>0, there hold _Cψ D^μ_a,x(_ψ D^-μ_a,x f(x))=f(x), _Cψ D^μ_x,b(_ψ D^-μ_x,b f(x))=f(x), and for f∈ AC_ψ^n[a,b], _ψ D^-μ_a,x(_Cψ D^μ_a,x f(x)) =f(x) -∑_k=0^n-1 ([δ^k_ψ f](a)/k!)(ψ(x)-ψ_a)^k, _ψ D^-μ_x,b(_Cψ D^μ_x,b f(x)) =f(x) -∑_k=0^n-1 ([(-1)^k δ^k_ψ f](b)/k!)(ψ_b-ψ(x))^k.

For some special cases of ψ, we obtain different fractional derivatives. For example, if ψ(x)=x, then Definitions <ref> and <ref> reduce to the Riemann-Liouville and Caputo fractional derivatives of order μ, respectively (see <cit.>). In the case ψ(x)=log(x), Definitions <ref> and <ref> reduce to the Hadamard and Caputo-Hadamard fractional derivatives, respectively (see <cit.>).

§.§ Problems and well-posedness

The initial value problem (IVP) and boundary value problem (BVP) for ordinary differential equations play a fundamental role in theory as well as in applications. In this work, we are concerned with the following four kinds of problems:

(P1) The initial value problem with ψ-derivative:
_ψ D^μ_a,x u(x)=f(x,u), x∈(a,b], n-1<μ<n,
[_ψ D^k-(n-μ)_a,x u](a)=g_k, k=0,1,⋯,n-1.

(P2) The initial value problem with ψ-Caputo derivative:
_Cψ D^μ_a,x u(x)=f(x,u), x∈(a,b], n-1<μ<n,
[δ_ψ^k u](a)=c_k, k=0,1,⋯,n-1.

(P3) The boundary value problem with ψ-derivative:
_ψ D_a,x^μ u(x)=f(x,u), x∈(a,b), 1<μ<2,
[_ψ D_a,x^-(2-μ) u](a)= [_ψ D_a,x^-(2-μ) u](b)=0.

(P4) The boundary value problem with ψ-Caputo derivative:
_Cψ D_a,x^μ u(x)=f(x,u), x∈(a,b), 1<μ<2,
u(a)=u(b)=0.

Although the problems considered here only involve left-sided differential operators, most of our results are also valid for the right-sided ones.

The existence and uniqueness of solutions to the above-mentioned problems are derived in two steps: (1) the problems are converted into four equivalent integral equations; (2) each equivalent integral equation is proved to possess a unique solution by using the fixed point theorem. Some hypotheses on f are indispensable, corresponding to the various kinds of problems.

(H1) Let h^*>a, K>0, and G:={(x,y): a<x≤ h^*, |(ψ(x)-ψ_a)^n-μ y-∑_k=0^n-1 (ψ(x)-ψ_a)^k g_k/Γ(μ-n+k+1)|< K, or x=a, y∈ℝ}. Assume that f:G→ℝ is continuous and bounded in G with M:=sup_(x,z)∈ G|f(x,z)|. Moreover, there exists a constant C_L>0 such that |f(x,y_1)-f(x,y_2)|< C_L|y_1-y_2|, ∀(x,y_1),(x,y_2)∈ G. Let h̄ be an arbitrary positive number satisfying the constraint a<h̄<min{ψ^-1((Γ(2μ-n+1)/(Γ(μ-n+1)C_L))^1/μ+ψ_a), ψ^-1((Γ(μ+1)K/M)^1/n+ψ_a)}, and set h:=min{h^*,h̄}.

(H2) Let h^*>a, K>0, and G:={(x,y): a≤ x≤ h^*, |y-∑_k=0^n-1 (ψ(x)-ψ_a)^k c_k/k!|< K}. Assume that f:G→ℝ is continuous and bounded in G with M:=sup_(x,z)∈ G|f(x,z)|. Moreover, there exists a constant C_L>0 such that |f(x,y_1)-f(x,y_2)|< C_L|y_1-y_2|, ∀(x,y_1),(x,y_2)∈ G. Let h̄ be an arbitrary positive number satisfying the constraint a<h̄<min{ψ^-1((Γ(μ+1)/C_L)^1/μ+ψ_a), ψ^-1((Γ(μ+1)K/M)^1/μ+ψ_a)}, and set h:=min{h^*,h̄}.

(H3) Assume that f:[a,b]×ℝ→ℝ is continuous and satisfies a Lipschitz condition with Lipschitz constant C_L with respect to the second variable, |f(x,y_1)-f(x,y_2)|≤ C_L|y_1-y_2|, ∀(x,y_1),(x,y_2)∈[a,b]×ℝ, where C_L<κ^μ Γ(μ+1)/(2^μ-1(2+μ)).
(H4) Assume that f:[a,b]×ℝ→ℝ is continuous and satisfies a Lipschitz condition with Lipschitz constant C_L with respect to the second variable, |f(x,y_1)-f(x,y_2)|≤ C_L|y_1-y_2|, ∀(x,y_1),(x,y_2)∈[a,b]×ℝ, where C_L<κ^μ Γ(μ+1)/2^μ+1.

The following statements are true:

(i) Assume the hypothesis (H1) holds. The function u(x)∈ C(a,h] solves the IVP (P1) in (<ref>) if and only if u(x) is the solution of the Volterra integral equation of the second kind u(x)=∑_k=0^n-1 g_k(ψ(x)-ψ_a)^μ-n+k/Γ(μ-n+k+1)+ 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ.

(ii) Assume the hypothesis (H2) holds. The function u(x)∈ C[a,h] solves the IVP (P2) in (<ref>) if and only if u(x) is the solution of the integral equation u(x)=∑_k=0^n-1 c_k(ψ(x)-ψ_a)^k/k!+ 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ.

(iii) Assume the hypothesis (H3) holds. The function u(x)∈ C[a,b] solves the BVP (P3) in (<ref>) if and only if u(x) is the solution of the Fredholm integral equation u(x) = -(κ(ψ(x)-ψ_a)^μ-1/(2Γ(μ)))∫_a^b ψ'(τ)(ψ_b-ψ(τ)) f(τ,u(τ)) dτ + 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ.

(iv) Assume the hypothesis (H4) holds. Then u(x)∈ C[a,b] is the solution of the BVP (P4) in (<ref>) if and only if u(x) is the solution of the Fredholm integral equation u(x) = -(κ(ψ(x)-ψ_a)/(2Γ(μ)))∫_a^b ψ'(τ)(ψ_b-ψ(τ))^μ-1 f(τ,u(τ)) dτ + 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ.

(i) If u is a continuous solution of the IVP (<ref>), then we define z(x):=f(x,u(x)). By assumption, z is a continuous function and z(x)=_ψ D_a,x^μ u(x)=δ_ψ^n[_ψ D_a,x^-(n-μ) u(x)]. Thus, _ψ D_a,x^-(n-μ) u(x)∈ C^n(a,h]. Performing the integral operator _ψ D_a,x^-μ on both sides of the first equation of (<ref>) and making use of the initial values and Lemma <ref>, we obtain the integral equation (<ref>). Conversely, note that for k<n, _ψ D_a,x^μ(ψ(x)-ψ_a)^μ-n+k =δ_ψ^n[_ψ D_a,x^-(n-μ)(ψ(x)-ψ_a)^μ-n+k] =(Γ(μ-n+k+1)/Γ(k+1)) δ_ψ^n(ψ(x)-ψ_a)^k=0. Then, performing the derivative operator _ψ D_a,x^μ on both sides of the equation (<ref>) gives _ψ D_a,x^μ u(x)=_ψ D_a,x^μ[_ψ D_a,x^-μ f(x,u(x))]=f(x,u). We next need to verify the initial value conditions. For m=0,1,⋯,n-1, performing _ψ D_a,x^μ-n+m on both sides of the equation (<ref>) and letting x approach a, one has [_ψ D_a,x^μ-n+m u](a)=g_m. Then statement (i) is proved.

(ii) If u is a continuous solution of the IVP (<ref>), then we define z(x):=f(x,u(x)), and z is a continuous function. Performing the operator _ψ D_a,x^-μ on both sides of the first equation of (<ref>) and making use of the initial values and Lemma <ref>, we obtain the integral equation (<ref>). Conversely, an application of the differential operator _ψ D_a,x^μ to both sides of (<ref>) yields the first equation of (<ref>), and the initial value conditions are also fulfilled.

(iii) Performing the integral operator _ψ D_a,x^-(2-μ) on both sides of (<ref>) yields _ψ D_a,x^-(2-μ) u(x) = -(κ(ψ(x)-ψ_a)/2)∫_a^b ψ'(τ)(ψ_b-ψ(τ)) f(τ,u(τ)) dτ + ∫_a^x ψ'(τ)(ψ(x)-ψ(τ)) f(τ,u(τ)) dτ. Then we have [_ψ D_a,x^-(2-μ) u](a)= [_ψ D_a,x^-(2-μ) u](b)=0. Moreover, an application of the differential operator _ψ D_a,x^μ to both sides of (<ref>) yields (<ref>). Thus, u in (<ref>) solves the boundary value problem (<ref>). On the other hand, if u solves the differential equation (<ref>) with the condition [_ψ D_a,x^-(2-μ) u](a)=0, then we know from (i) in Theorem <ref> that it also satisfies u(x)=c(ψ(x)-ψ_a)^μ-1+1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ with some undetermined constant c.
We take x=b and utilize [_ψ D_a,x^-(2-μ) u](b)=0 to work out c=-(κ/(2Γ(μ)))∫_a^b ψ'(τ)(ψ(b)-ψ(τ)) f(τ,u(τ)) dτ. Then we obtain the Fredholm integral equation (<ref>).

(iv) If u is continuous and solves the Fredholm integral equation (<ref>), it is clear that u(a)=u(b)=0 because of κ=2/(ψ_b-ψ_a). An application of the differential operator _ψ D_a,x^μ to both sides of (<ref>) yields the first equation of (<ref>). Similarly, if u solves the differential equation (<ref>) with the condition u(a)=0, then we know from (ii) in Theorem <ref> that it also satisfies u(x)=d(ψ(x)-ψ_a)+1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ with some undetermined constant d. We take x=b and utilize u(b)=0 to work out d=-(κ/(2Γ(μ)))∫_a^b ψ'(τ)(ψ(b)-ψ(τ))^μ-1 f(τ,u(τ)) dτ. Then we obtain the Fredholm integral equation (<ref>).

The following statements are true:

(i) Under the hypothesis (H1), the equation (<ref>) possesses a unique solution u∈ C(a,h].
(ii) Under the hypothesis (H2), the equation (<ref>) possesses a unique solution u∈ C[a,h].
(iii) Under the hypothesis (H3), the equation (<ref>) possesses a unique solution u∈ C[a,b].
(iv) Under the hypothesis (H4), the equation (<ref>) possesses a unique solution u∈ C[a,b].

(i) We define the set B:={v∈ C(a,h]: sup_a<x≤ h|(ψ(x)-ψ_a)^n-μ v(x)-∑_k=0^n-1 g_k(ψ(x)-ψ_a)^k/Γ(μ-n+k+1)|<K} and on this set we define the operator A by Au:=∑_k=0^n-1 g_k(ψ(x)-ψ_a)^μ-n+k/Γ(μ-n+k+1) + 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ. Then for any u∈ B, Au∈ C(a,h]. Moreover, for x∈(a,h], we have |(ψ(x)-ψ_a)^n-μ Au(x)-∑_k=0^n-1 g_k(ψ(x)-ψ_a)^k/Γ(μ-n+k+1)| = |(ψ(x)-ψ_a)^n-μ/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ| ≤ ((ψ(x)-ψ_a)^n-μ/Γ(μ)) M ∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 dτ = (ψ(x)-ψ_a)^n M/Γ(μ+1) < K. This shows that Au∈ B if u∈ B, i.e., the operator A maps the set B into itself.

We introduce a larger set B̃:={v∈ C(a,h]: sup_a<x≤ h|(ψ(x)-ψ_a)^n-μ v(x)|<∞}, and on this set we define a norm ‖·‖_B̃ by ‖v‖_B̃:= sup_a<x≤ h|(ψ(x)-ψ_a)^n-μ v(x)|. Then B̃ is a normed linear space and B is a complete subset of this space. We use A to rewrite the integral equation (<ref>) as Au=u. Hence, in order to prove the desired result, it is sufficient to show that the operator A has a unique fixed point.

Let v_1,v_2∈ B. For j≥1 we have ‖A^j v_1-A^j v_2‖_B̃ = sup_a<x≤ h|(ψ(x)-ψ_a)^n-μ(A A^j-1 v_1-A A^j-1 v_2)| = sup_a<x≤ h|(ψ(x)-ψ_a)^n-μ/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1[f(τ,A^j-1 v_1(τ))-f(τ,A^j-1 v_2(τ))] dτ| ≤ sup_a<x≤ h ((ψ(x)-ψ_a)^n-μ/Γ(μ))∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1|f(τ,A^j-1 v_1(τ))-f(τ,A^j-1 v_2(τ))| dτ ≤ sup_a<x≤ h (C_L(ψ(x)-ψ_a)^n-μ/Γ(μ))∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1|A^j-1 v_1(τ)-A^j-1 v_2(τ)| dτ ≤ (C_L‖A^j-1 v_1-A^j-1 v_2‖_B̃/Γ(μ)) sup_a<x≤ h (ψ(x)-ψ_a)^n-μ∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1(ψ(τ)-ψ_a)^μ-n dτ = (C_L(ψ(h)-ψ_a)^μ Γ(μ-n+1)/Γ(2μ-n+1))‖A^j-1 v_1-A^j-1 v_2‖_B̃. By induction, we conclude that ‖A^j v_1-A^j v_2‖_B̃ ≤ (C_L(ψ(h)-ψ_a)^μ Γ(μ-n+1)/Γ(2μ-n+1))^j ‖v_1-v_2‖_B̃. From (H1), we check that h<ψ^-1((Γ(2μ-n+1)/(Γ(μ-n+1)C_L))^1/μ+ψ_a) ⇒ ψ(h)-ψ_a <(Γ(2μ-n+1)/(Γ(μ-n+1)C_L))^1/μ. This ensures that C_L(ψ(h)-ψ_a)^μ Γ(μ-n+1)/Γ(2μ-n+1)<1. Then we know that A is a contraction. Thus an application of the fixed point theorem yields the existence and the uniqueness of the solution of the integral equation (<ref>).

(ii) For this case, we define the set B:={v∈ C[a,h]: sup_a≤ x≤ h|v(x)-∑_k=0^n-1 c_k(ψ(x)-ψ_a)^k/k!|<K} and on this set we define the operator A by Au:=∑_k=0^n-1 c_k(ψ(x)-ψ_a)^k/k! + 1/Γ(μ)∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 f(τ,u(τ)) dτ. Then for any u∈ B, Au∈ C[a,h].
Moreover, for x∈[a,h], we have| A u(x)-∑_k=0^n-1c_k(ψ(x)-ψ_a)^k/k!| =|1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1f(τ,u(τ)) dτ| ≤ M/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1 dτ =(ψ(x)-ψ_a)^μM/Γ(μ+1)< K.This shows that the operator A maps the set B into itself. Let v_1,v_2∈ B, for j≥1 we haveA^jv_1- A^jv_2=sup_a≤ x≤ h| ( A A^j-1v_1- A A^j-1v_2)|= sup_a≤ x≤ h|1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1[f(τ, A^j-1v_1(τ)) -f(τ, A^j-1v_2(τ))] dτ| ≤ sup_a≤ x≤ h1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1|f(τ, A^j-1v_1(τ)) -f(τ, A^j-1v_2(τ))| dτ ≤ sup_a≤ x≤ hC_L/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1| A^j-1v_1(τ) - A^j-1v_2(τ)| dτ ≤ C_L A^j-1v_1- A^j-1v_2/Γ(μ)sup_a≤ x≤ h∫_a^x ψ'(τ)(ψ(x)-ψ(τ))^μ-1 dτ= C_L(ψ(h)-ψ_a)^μ/Γ(μ+1) A^j-1v_1- A^j-1v_2.Now we conclude thatA^jv_1- A^jv_2≤(C_L(ψ(h)-ψ_a)^μ/Γ(μ+1))^jv_1-v_2.The hypothesis (H2) ensures that C_L(ψ(h)-ψ_a)^μ/Γ(μ+1)<1. Then, we know A is a contraction. Thus an application of the fixed point theorem yields the existence and the uniqueness of the solution of the integral equation (<ref>).(iii) For this case, we define the operator A byAu: = -κ(ψ(x)-ψ_a)^μ-1/2Γ(μ)∫_a^bψ'(τ)(ψ_b-ψ(τ))f(τ,u(τ)) dτ + 1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1f(τ,u(τ)) dτ.Then for any u∈ C[a,b],Au∈ C[a,b]. Let v_1,v_2∈ C[a,b], we haveκ(ψ(x)-ψ_a)^μ-1/2Γ(μ)∫_a^bψ'(τ)(ψ_b-ψ(τ))|f(τ,v_1(τ))- f(τ,v_2(τ))| dτ ≤   κ(ψ(x)-ψ_a)^μ-1C_L/2Γ(μ)v_1-v_2∫_a^bψ'(τ)(ψ_b-ψ(τ)) dτ≤(ψ_b-ψ_a)^μC_L/2Γ(μ)v_1-v_2,and1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1|f(τ,v_1(τ))- f(τ,v_2(τ))| dτ ≤   C_L/Γ(μ)v_1-v_2∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1 dτ≤(ψ_b-ψ_a)^μC_L/Γ(μ+1)v_1-v_2.So, we haveAv_1- Av_2=max_a≤ x≤ b| Av_1- Av_2| ≤C_L(ψ_b-ψ_a)^μ(μ+2)/2Γ(μ+1)v_1-v_2.The hypothesis (H3) ensures that C_L(ψ_b-ψ_a)^μ(μ+2)/2Γ(μ+1)<1. Then, we know A is a contraction. Thus an application of the fixed point theorem yields the existence and the uniqueness of the solution of the integral equation (<ref>).(iv) We define the operator A byAu: = -κ(ψ(x)-ψ_a)/2Γ(μ)∫_a^bψ'(τ)(ψ_b-ψ(τ))^μ-1f(τ,u(τ)) dτ + 1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1f(τ,u(τ)) dτ.Then for any u∈ C[a,b],Au∈ C[a,b]. Let v_1,v_2∈ C[a,b], similar to the above process, we haveAv_1- Av_2=max_a≤ x≤ b| Av_1- Av_2|≤  2C_L(ψ_b-ψ_a)^μ/Γ(μ+1)v_1-v_2.The hypothesis (H4) ensures that 2C_L(ψ_b-ψ_a)^μ/Γ(μ+1)<1. Then, we know A is a contraction. Thus an application of the fixed point theorem yields the existence and the uniqueness of the solution of the integral equation (<ref>). Under the assumptions (H1), let u(x) and v(x) be the solutions of the Cauchy type problem (<ref>) with different initial datum {g_k}_k=0^n-1 and {g_k}_k=0^n-1. Then|u(x)-v(x)|≤∑_k=0^n-1|g_k-g_k| (ψ(x)-ψ_a)^μ-n+k E_μ,μ-n+k+1(C_L(ψ(x)-ψ_a)^μ).And under the assumptions (H2), if u(x) and v(x) be the solutions of the Cauchy type problem (<ref>) with different initial datum {c_k}_k=0^n-1 and {c_k}_k=0^n-1, then|u(x)-v(x)|≤∑_k=0^n-1|c_k-c_k| (ψ(x)-ψ_a)^kE_μ,k+1(C_L(ψ(x)-ψ_a)^μ),where E_μ,ν(s)=∑_m=0^∞s^m/Γ(μ m+ν) is the two-parameter Mittag-Leffler function. 
From Theorem <ref> we know |u(x)-v(x)| ≤ ∑_k=0^n-1|g_k-g_k| (ψ(x)-ψ_a)^μ-n+k/Γ(μ-n+k+1) +C_L/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ)) ^μ-1|u(τ)-v(τ)| dτ. Write η(x)=|u(x)-v(x)| and ξ(x)=∑_k=0^n-1|g_k-g_k| (ψ(x)-ψ_a)^μ-n+k/Γ(μ-n+k+1). Then one has η(x)≤ξ(x)+ C_L/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ)) ^μ-1η(τ) dτ. From Theorem A.1 in Appendix A with C=C_L/Γ(μ), one has η(x)≤ξ(x)+ ∫_a^x∑_m=1^∞(C_L)^m/Γ(mμ)ψ'(τ)(ψ(x)-ψ(τ))^mμ-1ξ(τ) dτ, x∈ [a,b]. Then the desired result (<ref>) comes from Lemma <ref>. In a similar way, replacing ξ(x) in the above proof by ξ(x)=∑_k=0^n-1|c_k-c_k|(ψ(x)-ψ_a)^k/Γ(k+1), the desired estimate (<ref>) is obtained. In the case f(x,u)= f(x), let H^1_0,ψ(a,b):={v: δ^1_ψv∈ L^2_ψ(a,b), v(a)=v(b)=0}, and let H^-1_ψ(a,b) be its dual space. A weak form of (<ref>) is to find v:=_ψ D_a,x^-(2-μ)u∈ H^1_0,ψ(a,b) such that (δ^1_ψv, δ^1_ψw)_ψ=-(f,w)_ψ, ∀ w∈ H^1_0,ψ(a,b). It is well known that for any f∈ H^-1_ψ(a,b), (<ref>) admits a unique solution v∈ H^1_0,ψ(a,b). Then we can recover u uniquely from u=_ψ D_a,x^2-μv, thanks to Lemma <ref>. Furthermore, we have the dependence estimate u_ψ≤ Cv_ψ≤ C|v|_1,ψ≤ Cf_ψ. § MAPPED JACOBI FUNCTIONS AND THEIR APPROXIMATION §.§ Mapped Jacobi functions (MJFs) In this subsection, we introduce the MJFs and derive some properties for later use. Let ℙ_N(I) denote the space of all algebraic polynomials of degree at most N defined on I. Recall the Pochhammer symbol, defined for c∈ℝ and j∈ℕ by (c)_0=1, (c)_j:=c(c+1)⋯(c+j-1)=Γ(c+j)/Γ(c), j>0. The Jacobi polynomials P^α,β_n(s), s∈ I, with parameters α,β∈ℝ (<cit.>) are defined as P_n^α,β(s)= (α+1)_n/n!+∑_j=1^n-1(n+α+β+1)_j(α+j+1)_n-j/ j!(n-j)!((s-1)/2)^j+(n+α+β+1)_n/n!((s-1)/2)^n, n≥1, and P_0^α,β(s)=1. It is worth noting the degenerate cases of the Jacobi polynomials, which occur when -(α+β+1)∈ℕ^+. For the Jacobi polynomials with one or both parameters being negative integers, the following transformation formulas are valid <cit.>: * For β∈ℝ, l∈ℕ^+, n≥ l≥1, P_n^-l,β(s)=c_n^l,β((s-1)/2)^lP_n-l^l,β(s), c_n^l,β=(n-l)!(β+n-l+1)_l/n!. * For α∈ℝ, m∈ℕ^+, n≥ m≥1, P_n^α,-m(s)=c_n^m,α((s+1)/2)^mP_n-m^α,m(s). * For l, m∈ℕ^+, n≥ l+m≥2, P_n^-l,-m(s)=((s-1)/2)^l((s+1)/2)^mP_n-l-m^l,m(s). The well-known three-term recurrence relationship of the Jacobi polynomials P^α,β_n(s) with parameters α,β∈ℝ is fulfilled for -(α+β+1)∉ℕ^+: P_n+1^α,β(s) =(A_n^α,β s-B_n^α,β) P_n^α,β(s)-C_n^α,βP_n-1^α,β(s), n≥1, P_0^α,β(s) =1, P_1^α,β(s)=((α+β+2)/2)s+(α-β)/2, where {[ A_n^α,β=(2n+α+β+1)(2n+α+β+2)/2(n+1)(n+α+β+1),; B_n^α,β =(β^2-α^2)(2n+α+β+1)/2(n+1)(n+α+β+1)(2n+α+β),; C_n^α,β =(n+α)(n+β)(2n+α+β+2)/(n+1)(n+α+β+1)(2n+α+β). ]. For α,β>-1, the classical Jacobi polynomials are orthogonal with respect to the Jacobi weight function ω^α,β(s)=(1-s)^α(1+s)^β, namely, ∫_-1^1P^α,β_n(s)P^α,β_m(s)ω^α,β(s) ds =γ_n^α,βδ_mn, where γ_n^α,β=2^α+β+1Γ(n+α+1)Γ(n+β+1)/(2n+α+β+1)n!Γ(n+α+β+1), and δ_mn is the Kronecker symbol, i.e., δ_mn={[ 1, if m=n,; 0, otherwise. ]. Some additional properties of Jacobi polynomials can be found in <cit.>. Define the mapped Jacobi functions (MJFs) by J^α,β_n,ψ(x):=P^α,β_n(s(x)) = P^α,β_n(κ(ψ(x)-ψ_a)-1), n=0,1,⋯, for x∈[a,b]. Some useful properties are given in the following. The MJFs are generated by the three-term recurrence relation as J^α,β_n+1,ψ(x) = (A_n^α,βκ(ψ(x)-ψ_a)-(A_n^α,β+B_n^α,β))J^α,β_n,ψ(x)-C_n^α,βJ^α,β_n-1,ψ(x) = ( -A_n^α,βκ(ψ_b-ψ(x))+(A_n^α,β-B_n^α,β))J^α,β_n,ψ(x)-C_n^α,βJ^α,β_n-1,ψ(x), J^α,β_1,ψ(x) =A_0^α,βκ(ψ(x) -ψ_a)-(A_0^α,β+B_0^α,β)= -A_0^α,βκ(ψ_b-ψ(x))+(A_0^α,β-B_0^α,β), J^α,β_0,ψ(x) = 1.
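Before giving the (short) proof, we note that the recurrence can be checked numerically against a library implementation of the Jacobi polynomials. The following Python sketch is illustrative only, with the choice ψ(x)=x^2.3 on [0,1] (one of the mappings used in the numerical section) and scipy assumed available:

```python
import numpy as np
from scipy.special import eval_jacobi

def mjf(n, alpha, beta, x, psi, a, b):
    """Evaluate J^{alpha,beta}_{n,psi}(x) by the three-term recurrence,
    with kappa = 2/(psi(b)-psi(a)) and t = kappa*(psi(x)-psi(a)) = s+1."""
    kappa = 2.0 / (psi(b) - psi(a))
    t = kappa * (psi(x) - psi(a))
    J_prev = np.ones_like(t)
    if n == 0:
        return J_prev
    A0 = (alpha + beta + 2) / 2.0
    J_curr = A0 * t - (beta + 1.0)   # = A0*t - (A0 + B0), since B0 = (beta-alpha)/2
    for k in range(1, n):
        c = 2 * k + alpha + beta
        Ak = (c + 1) * (c + 2) / (2 * (k + 1) * (k + alpha + beta + 1))
        Bk = (beta**2 - alpha**2) * (c + 1) / (2 * (k + 1) * (k + alpha + beta + 1) * c)
        Ck = (k + alpha) * (k + beta) * (c + 2) / ((k + 1) * (k + alpha + beta + 1) * c)
        J_prev, J_curr = J_curr, (Ak * t - (Ak + Bk)) * J_curr - Ck * J_prev
    return J_curr

# Check against P_n^{alpha,beta}(s) with s = kappa*(psi(x)-psi(a)) - 1.
psi = lambda x: x ** 2.3
x = np.linspace(0.05, 1.0, 5)
s = 2.0 * psi(x) / psi(1.0) - 1.0
print(mjf(5, 0.0, 0.0, x, psi, 0.0, 1.0) - eval_jacobi(5, 0.0, 0.0, s))  # ~ 0
```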
The three-term recurrence relation (<ref>) is a straightforward consequence of the corresponding three-term recurrence relation of the Jacobi polynomials under the variable transform (<ref>). The MJFs fulfill the derivative relations: δ^1_ψ J^α,β_n,ψ(x)=κ(n+α+β+1)/2J^α+1,β+1_n-1,ψ(x), n≥1. Furthermore, we have for n≥ k, δ^k_ψJ^α,β_n,ψ(x)=d_n,k^α,β J^α+k,β+k_n-k,ψ(x), d_n,k^α,β=κ^k Γ(n+k+α+β+1)/2^kΓ(n+α+β+1). And for n≥1, J^α,β_n,ψ(x)=δ^1_ψ(a_n^α,β J^α,β_n-1,ψ(x)+b_n^α,β J^α,β_n,ψ(x)+c_n^α,β J^α,β_n+1,ψ(x)), where {[ a_n^α,β =-2(n+α)(n+β)/κ(n+α+β)(2n+α+β)(2n+α+β+1),; b_n^α,β=2(α-β)/κ(2n+α+β)(2n+α+β+2),; c_n^α,β=2(n+α+β+1)/κ(2n+α+β+1)(2n+α+β+2). ]. The equality (<ref>) comes from d/ ds P_n^α,β(s)=n+α+β+1/2 P_n-1^α+1,β+1(s), n≥1, and (<ref>). We can use (<ref>) iteratively to generate (<ref>). The equality (<ref>) comes from, for n≥1, P_n^α,β(s)= d/ ds(A_nP_n-1^α,β(s)+B_nP_n^α,β(s) +C_nP_n+1^α,β(s)), where {[A_n =-2(n+α)(n+β)/(n+α+β)(2n+α+β)(2n+α+β+1),;B_n = 2(α-β)/(2n+α+β)(2n+α+β+2),;C_n = 2(n+α+β+1)/(2n+α+β+1)(2n+α+β+2). ]. The proof is completed. For α,β>-1, the MJFs are orthogonal: ∫_a^bJ^α,β_n,ψ(x)J^α,β_m,ψ(x) ϖ^α,β(x) dx=γ_n^α,βδ_nm, where ϖ^α,β(x)=κ^α+β+1(ψ_b-ψ(x))^α(ψ(x)-ψ_a)^βψ'(x). We derive from (<ref>) that ∫_a^bJ^α,β_n,ψ(x)J^α,β_m,ψ(x) ϖ^α,β(x) dx=∫_-1^1P^α,β_n(s)P^α,β_m(s) ω^α,β(s) ds. Then we have the orthogonality (<ref>) from (<ref>). Gauss-MJF-type quadratures hold for α,β>-1. Let {s_j^α,β,ω_j^α,β}_j=0^N be the Gauss-Jacobi-Lobatto nodes and weights. Denote {x_j^α,β=ψ^-1((s_j^α,β+1)/κ+ψ_a), ϖ_j^α,β=ω_j^α,β}_j=0^N. Then, ∫_a^b p(x)ϖ^α,β(x) dx=∑_j=0^Np(x_j^α,β)ϖ_j^α,β, ∀ p(x)∈ F_2N-1^ψ, where F_N^ψ={1,ψ(x), (ψ(x))^2 ,⋯, (ψ(x))^N}. Since p(ψ^-1((s+1)/κ+ψ_a))∈ℙ_2N-1(I) if p(x)∈ F^ψ_2N-1, we can obtain from the Gauss-Jacobi-Lobatto quadrature ∫_a^b p(x)ϖ^α,β(x) dx =∫_-1^1p(ψ^-1((s+1)/κ+ψ_a))ω^α,β(s) ds=∑_j=0^Np(ψ^-1((s_j^α,β+1)/κ+ψ_a))ω_j^α,β= ∑_j=0^Np(x_j^α,β)ϖ_j^α,β. This ends the proof. The following equality is useful: For α,β>-1, ∑_j=0^N(J_N,ψ^α,β(x_j^α,β))^2ϖ_j^α,β =(2+α+β+1/N)γ_N^α,β. §.§ Fractional calculus of the MJFs We will derive formulas to evaluate the ψ-fractional calculus of the MJFs in this subsection. In fact, we have for n=0,1,⋯, _ψ D_a,x^-μ J_n,ψ^α,β(x)= κ^-μ  D_-1,s^-μ P_n^α,β(s), _ψ D_a,x^μ J_n,ψ^α,β(x)= κ^μ  D_-1,s^μ P_n^α,β(s), where D_-1,s^-μ and D_-1,s^μ are the classical Riemann-Liouville integral and derivative defined as D_-1,s^-μf(s)=1/Γ(μ)∫_-1^s(s-t)^μ-1f(t) dt, D_-1,s^μf(s)= d^n/ ds^n( D_-1,s^-(n-μ)f(s)), n-1<μ<n. Let S_n^-=_ψ D_a,x^-μJ_n,ψ^α,β(x) and S_n^+=_ψ D_x,b^-μJ_n,ψ^α,β(x). Then there holds the three-term recurrence relation as follows: for n>0, S_n+1^-= A_n^α,β(ψ(x)-ψ_a)S_n^--B_n,-^α,β S_n^-+C_n^α,βS_n-1^-+A_n^α,βF_n,-^α,β, S_n+1^+=-A_n^α,β(ψ_b-ψ(x))S_n^+-B_n,+^α,β S_n^++C_n^α,βS_n-1^++A_n^α,βF_n,+^α,β, with the starting terms S_0^- = (ψ(x)-ψ_a)^μ/Γ(μ+1), S_0^+ =(ψ_b-ψ(x))^μ/Γ(μ+1), S_1^- = κ A_0^α,β/Γ(μ+2)(ψ(x)-ψ_a)^μ+1-A_0^α,β+B_0^α,β/Γ(μ+1) (ψ(x)-ψ_a)^μ, S_1^+ =-κ A_0^α,β/Γ(μ+2)(ψ_b-ψ(x))^μ+1+A_0^α,β-B_0^α,β/Γ(μ+1) (ψ_b-ψ(x))^μ, where E_n^α,β=1+μκ A_n^α,β c_n^α,β, A_n^α,β = κ A_n^α,β/E_n^α,β, B_n,-^α,β = -μκ A_n^α,β b_n^α,β+A_n^α,β+B_n^α,β/E_n^α,β, B_n,+^α,β = μκ A_n^α,β b_n^α,β-A_n^α,β+B_n^α,β/E_n^α,β, C_n^α,β = -μκ A_n^α,β a_n^α,β+C_n^α,β/E_n^α,β, F_n,-^α,β= (ψ(x)-ψ_a)^μ/Γ(μ)(a_n^α,βJ_n-1,ψ^α,β(a) +b_n^α,βJ_n,ψ^α,β(a)+ c_n^α,βJ_n+1,ψ^α,β(a)), F_n,+^α,β= (ψ_b-ψ(x))^μ/Γ(μ)(a_n^α,βJ_n-1,ψ^α,β(b) +b_n^α,βJ_n,ψ^α,β(b)+ c_n^α,βJ_n+1,ψ^α,β(b)), and A_n^α,β,B_n^α,β,C_n^α,β, a_n^α,β,b_n^α,β,c_n^α,β as in Eqs. (<ref>) and (<ref>).
Noting that from (<ref>) ψ'(x)J_n,ψ^α,β(x)=(a_n^α,βJ_n-1,ψ^α,β(x) +b_n^α,βJ_n,ψ^α,β(x)+ c_n^α,βJ_n+1,ψ^α,β(x))', we obtain ∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ J_n,ψ^α,β(τ) dτ = μ∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1(a_n^α,βJ_n-1,ψ^α,β(τ) +b_n^α,βJ_n,ψ^α,β(τ)+ c_n^α,βJ_n+1,ψ^α,β(τ)) dτ -(ψ(x)-ψ_a)^μ(a_n^α,βJ_n-1,ψ^α,β(a) +b_n^α,βJ_n,ψ^α,β(a)+ c_n^α,βJ_n+1,ψ^α,β(a)). By making use of the notation F_n,-^α,β, one has from the above equality _ψ D_a,x^-μ[(ψ(x)-ψ_a)J_n,ψ^α,β(x)]=(ψ(x)-ψ_a)S_n^--1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ J_n,ψ^α,β(τ) dτ=(ψ(x)-ψ_a)S_n^--μ(a_n^α,βS_n-1^-+b_n^α,βS_n^- +c_n^α,βS_n+1^-)+F_n,-^α,β. Then by the three-term recurrence relation, one has S_n+1^-=_ψ D_a,x^-μ[J_n+1,ψ^α,β(x)]= κ A_n^α,β _ψ D_a,x^-μ[(ψ(x)-ψ_a)J_n,ψ^α,β(x)]-(A_n^α,β+B_n^α,β) S_n^--C_n^α,β S_n-1^-= κ A_n^α,β(ψ(x)-ψ_a)S_n^--(μκ A_n^α,β b_n^α,β +A_n^α,β+B_n^α,β)S_n^--(μκ A_n^α,βa_n^α,β+C_n^α,β)S_n-1^- -μκ A_n^α,βc_n^α,β S_n+1^-+κ A_n^α,β F_n,-^α,β. This is the first equality. Similarly, one has 1/Γ(μ)∫_x^bψ'(τ)(ψ(τ)-ψ(x))^μ J_n,ψ^α,β(τ) dτ =-μ(a_n^α,βS_n-1^++b_n^α,βS_n^++c_n^α,βS_n+1^+)+F_n,+^α,β, and _ψ D_x,b^-μ[(ψ_b-ψ(x))J_n,ψ^α,β(x)]=(ψ_b-ψ(x))S_n^+-1/Γ(μ)∫_x^bψ'(τ) (ψ(τ)-ψ(x))^μJ_n,ψ^α,β(τ) dτ=(ψ_b-ψ(x))S_n^++μ(a_n^α,βS_n-1^++b_n^α,βS_n^+ +c_n^α,βS_n+1^+)-F_n,+^α,β. Then, one has S_n+1^+= _ψ D_x,b^-μ[J_n+1,ψ^α,β(x)]= -κ A_n^α,β _ψ D_x,b^-μ[(ψ_b-ψ(x))J_n,ψ^α,β(x)]+(A_n^α,β -B_n^α,β)S_n^+-C_n^α,β S_n-1^+=-κ A_n^α,β(ψ_b-ψ(x))S_n^+-(μκ A_n^α,β b_n^α,β -A_n^α,β+B_n^α,β)S_n^+-(μκ A_n^α,βa_n^α,β+C_n^α,β)S_n-1^+ -μκ A_n^α,βc_n^α,β S_n+1^++κ A_n^α,β F_n,+^α,β. This is the second equality. The starting terms can be obtained by direct calculations. For n≥0, if α∈ℝ, β>-1, then _ψ D_a,x^-μ[(ψ(x)-ψ_a)^β J_n,ψ^α,β(x)]=Γ(n+β+1)/Γ(n+β+μ+1) (ψ(x)-ψ_a)^β+μJ_n,ψ^α-μ,β+μ(x), and if α>-1, β∈ℝ, then _ψ D_x,b^-μ[(ψ_b-ψ(x))^α J_n,ψ^α,β(x)]=Γ(n+α+1)/Γ(n+α+μ+1) (ψ_b-ψ(x))^α+μJ_n,ψ^α+μ,β-μ(x). We derive from (<ref>) and (2.33) in <cit.> that _ψ D_a,x^-μ [(ψ(x)-ψ_a)^β J_n,ψ^α,β(x)]= 1/Γ(μ)∫_a^xψ'(τ)(ψ(x)-ψ(τ))^μ-1[(ψ(τ)-ψ_a)^β J_n,ψ^α,β(τ)] dτ= 1/κ^β+μΓ(μ)∫_-1^s(s-t)^μ-1(1+t)^β P_n^α,β(t) dt= Γ(n+β+1)/Γ(n+β+μ+1)(1+s/κ)^β+μ P_n^α-μ,β+μ(s)= Γ(n+β+1)/Γ(n+β+μ+1) (ψ(x)-ψ_a)^β+μJ_n,ψ^α-μ,β+μ(x). The second equality can be obtained similarly. For n≥0, if α∈ℝ, β-μ>-1, then _ψ D_a,x^μ[(ψ(x)-ψ_a)^β J_n,ψ^α,β(x)]=Γ(n+β+1)/Γ(n+β-μ+1) (ψ(x)-ψ_a)^β-μJ_n,ψ^α+μ,β-μ(x), and if α-μ>-1, β∈ℝ, then _ψ D_x,b^μ[(ψ_b-ψ(x))^α J_n,ψ^α,β(x)]=Γ(n+α+1)/Γ(n+α-μ+1) (ψ_b-ψ(x))^α-μJ_n,ψ^α-μ,β+μ(x). By acting with the operators _ψ D_a,x^μ and _ψ D_x,b^μ on the equalities in Theorem <ref>, the desired results are obtained from Lemma <ref>. The following orthogonality is also valid: For α+μ>-1, β-μ>-1, ∫_a^b_ψ D_a,x^μ[(ψ(x)-ψ_a)^β J_n,ψ^α,β(x)]_ψ D_a,x^μ[(ψ(x)-ψ_a)^β J_m,ψ^α,β(x)]ϖ^α+μ,-β+μ(x)dx= γ̃_n,μ^α,βδ_mn. And for α-μ>-1, β+μ>-1, ∫_a^b_ψ D_x,b^μ[(ψ_b-ψ(x))^α J_n,ψ^α,β(x)] _ψ D_x,b^μ[(ψ_b-ψ(x))^α J_m,ψ^α,β(x)]ϖ^-α+μ,β+μ(x)dx= γ̃_n,μ^β,αδ_mn, where γ̃_n,μ^α,β=κ^2(μ-β)2^α+β+1Γ^2(n+β+1)Γ(n+α+μ+1)/(2n+α+β+1)n!Γ(n+α+β+1) Γ(n+β-μ+1). The result is derived directly from (<ref>) and Theorem <ref>. Another attractive property of the MJFs is that they are eigenfunctions of fractional Sturm-Liouville-type equations. Let w^α,β(x)=(ψ_b-ψ(x))^α(ψ(x)-ψ_a)^β. To show this, we define the fractional Sturm-Liouville-type operators: ^+ℒ_α,β^2μ,ψ u(x) := w^0,-β(x) _ψ D_a,x^μ{w^-α+μ,β+μ(x)_ψ D_x,b^μ w^α,0(x) u(x)}, ^-ℒ_α,β^2μ,ψ u(x) := w^-α,0(x) _ψ D_x,b^μ{w^α+μ,-β+μ(x)_ψ D_a,x^μ w^0,β(x) u(x)}. Let n∈ℕ^+.
For α>μ-1, β>-1, ^+ℒ_α,β^2μ,ψ J_n,ψ^α,β(x)=λ_n,μ^α,βJ_n,ψ^α,β(x), and for α>-1, β>μ-1, ^-ℒ_α,β^2μ,ψ J_n,ψ^α,β(x)=λ_n,μ^β,αJ_n,ψ^α,β(x), where λ_n,μ^β,α=Γ(n+α+1)Γ(n+β+μ+1)/Γ(n+α-μ+1)Γ(n+β+1). The desired results can be derived directly from Theorem <ref>. §.§ Approximation by the MJFs §.§.§ Projection approximation We introduce the (N+1)-dimensional space of the MJFs as: F^ψ_N:={J_0,ψ^α,β,J_1,ψ^α,β,⋯, J_N,ψ^α,β}. It is clear that the above space of the MJFs coincides with the space {(ψ(x))^i, i=0,1,⋯,N} in Theorem <ref>, so we use the same notation for it. Let α,β>-1 and ϖ^α,β(x)=κ^α+β+1(ψ_b-ψ(x))^α(ψ(x)-ψ_a)^βψ'(x). For any function u∈ L^2_ϖ^α,β(a,b), let 𝐏_N^α,β: L^2_ϖ^α,β(a,b)→ F_N^ψ denote the orthogonal projection such that (u-𝐏_N^α,β u,v)_ϖ^α,β=0, ∀ v∈ F_N^ψ. We represent the orthogonal projection 𝐏_N^α,β as 𝐏_N^α,β u(x)=∑_k=0^N u_k J_k,ψ^α,β(x), u_k=∫_a^b u(x) J_k,ψ^α,β(x) ϖ^α,β(x) dx/J_k,ψ^α,β^2_ϖ^α,β, with the truncation error u(x)-𝐏_N^α,β u(x)=∑_k=N+1^∞u_k J_k,ψ^α,β(x). To better describe the projection error u(x)-𝐏_N^α,βu(x), we need the non-uniformly mapped Jacobi-weighted Sobolev space B^m_α,β(a,b):={v: δ^k_ψ v∈ L^2_ϖ^α+k,β+k(a,b), 0≤ k≤ m} equipped with the semi-norm and norm |u|_B^m_α,β=δ^m_ψ u_ϖ^α+m,β+m, u_B^m_α,β=∑_k=0^m(|u|^2_B^k_α,β)^1/2, where u_ϖ^α,β=(∫_a^b|u|^2ϖ^α,β(x) dx)^1/2. Let α,β>-1 and k,m,N∈ℕ. For any function u∈ B^m_α,β(a,b) and 0≤ m≤m=min{m,N+1}, we have the following estimate: δ^k_ψ(u-𝐏_N^α,β u)_ϖ^α+k,β+k≲ N^k-mδ_ψ^m u_ϖ^α+m,β+m. From (<ref>), for u∈ L^2_ϖ^α,β(a,b), u(x)=∑_j=0^∞u_j J_j,ψ^α,β(x), we have δ^k_ψ u(x)=∑_j=k^∞u_jd_j,k^α,β J_j-k,ψ^α+k,β+k(x). Then δ^k_ψ(u-𝐏_N^α,β u)^2_ϖ^α+k,β+k=∑_j=N+1^∞ |u_j|^2 (d_j,k^α,β)^2γ_j-k^α+k,β+k= ∑_j=N+1^∞ |u_j|^2 (d_j,k^α,β)^2γ_j-k^α+k,β+k/(d_j,m^α,β)^2γ_j-m^α+m,β+m(d_j,m^α,β)^2γ_j-m^α+m,β+m ≤ max{(d_j,k^α,β)^2γ_j-k^α+k,β+k/(d_j,m^α,β)^2 γ_j-m^α+m,β+m}∑_j=N+1^∞ |u_j|^2 (d_j,m^α,β)^2 γ_j-m^α+m,β+m ≤ κ^2(k-m)Γ(N-m+2)Γ(N+k+α+β+2)/Γ(N-k+2)Γ(N+m+α+β+2)δ^m_ψu^2_ϖ^α+m,β+m. Making use of Γ(n+ν)/Γ(n+ϑ)≤ c n^ν-ϑ (see <cit.>) gives the desired result. With the aid of Theorem <ref>, we can derive an error estimate involving the fractional derivative. Let B^ν_α,β(a,b):={v: _ψ D_a,x^μ v∈ L^2_ϖ^α+μ,-β+μ(a,b), 0≤μ≤ν}. Let u∈B^ν_α,0(a,b). For 0<μ≤ν<1, there holds _ψ D_a,x^μ(u-𝐏_N^α,0u)_ϖ^α+μ,μ≲ N^μ-ν_ψ D_a,x^νu_ϖ^α+ν,ν. From (<ref>) and Theorem <ref>, one has _ψ D_a,x^μ(u-𝐏_N^α,0u) =∑_k=N+1^∞u_kΓ(k+1)/Γ(k-μ+1) (ψ(x)-ψ_a)^-μ J_k,ψ^α+μ,-μ(x). Then _ψ D_a,x^μ(u-𝐏_N^α,0u)^2_ϖ^α+μ,μ=∑_k=N+1^∞ |u_k|^2 Γ^2(k+1)/Γ^2(k-μ+1)κ^2μγ_k^α+μ,-μ=κ^2(μ-ν)∑_k=N+1^∞κ^2ν|u_k|^2Γ^2(k+1)/Γ^2(k-ν+1)γ_k^α+ν,-νΓ(k-ν+1)Γ(k+α+μ+1)/Γ(k-μ+1)Γ(k+α+ν+1) ≤ κ^2(μ-ν)Γ(N-ν+2)Γ(N+α+μ+2)/Γ(N-μ+2)Γ(N+α+ν+2)∑_k=N+1^∞κ^2ν|u_k|^2Γ^2(k+1)/Γ^2(k-ν+1)γ_k^α+ν,-ν ≤Cκ^2(μ-ν)N^2(μ-ν)_ψ D_a,x^νu^2_ϖ^α+ν,ν. This completes the proof. Let -1<ρ<1 and E_N^ρ,ψ:=(ψ(x)-ψ_a)^-ρF_N^ψ={ (ψ(x)-ψ_a)^-ρϕ | ϕ∈ F_N^ψ}. For u∈ L^2_ϖ^ρ,ρ(a,b), define a projection operator 𝐏_N^ρ,ρ:L^2_ϖ^ρ,ρ(a,b)→ E_N^ρ,ψ as (u-𝐏_N^ρ,ρu, v)_ϖ^ρ,ρ=0, ∀ v∈ E_N^ρ,ψ. Then we have the following error estimate. Let l,m∈ℕ^+, l≤ m, and ρ>0. If u∈ L^2_ϖ^ρ,ρ(a,b) and _ψ D_a,x^-ρu∈ B_0,0^m(a,b), then for 1≤ m≤ N+1, _ψ D_a,x^l-ρ(u-𝐏_N^ρ,ρu)_ϖ^l,l≲ N^l-m_ψ D_a,x^m-ρu_ϖ^m,m.
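Before the proof, we remark that the coefficient decay quantified by these projection estimates is easy to observe numerically. The following minimal Python sketch (assuming scipy is available) uses the Legendre case α=β=0 with ψ(x)=x on [-1,1], so that κ=1 and the MJFs reduce to Jacobi polynomials; the two test functions, one smooth and one with an endpoint singularity of the type (ψ(x)-ψ_a)^0.6, are illustrative choices:

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

# Gauss-Jacobi nodes/weights for the weight (1-s)^0 (1+s)^0 on [-1,1].
Nq = 200
s, w = roots_jacobi(Nq, 0.0, 0.0)

def coeffs(f, K):
    """Expansion coefficients u_k = <f, P_k>_w / gamma_k, gamma_k^{0,0} = 2/(2k+1)."""
    gam = 2.0 / (2 * np.arange(K) + 1)
    return np.array([np.dot(w, f(s) * eval_jacobi(k, 0.0, 0.0, s))
                     for k in range(K)]) / gam

smooth   = coeffs(lambda s: np.cos(np.pi * s), 20)
singular = coeffs(lambda s: (1.0 + s) ** 0.6, 20)
print(np.abs(smooth[::4]))    # super-algebraic decay
print(np.abs(singular[::4]))  # only algebraic decay
```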
Since the set {(ψ(x)-ψ_a)^-ρJ_j,ψ^ρ,-ρ}_j=0^∞ is a complete orthogonal basis of the space L^2_ϖ^ρ,ρ(a,b), we have u-𝐏_N^ρ,ρu=∑_k=N+1^∞u_k (ψ(x)-ψ_a)^-ρJ_k,ψ^ρ,-ρ(x) with u_k=∫_a^bu(x)(ψ(x)-ψ_a)^-ρJ_k,ψ^ρ,-ρ(x) ϖ^ρ,ρ(x) dx/(ψ(x)-ψ_a)^-ρJ_k,ψ^ρ,-ρ(x)^2_ϖ^ρ,ρ. Hence, by Theorems <ref> and <ref>, we obtain _ψ D_a,x^m-ρ(u-𝐏_N^ρ,ρu)= δ^m_ψ[_ψ D_a,x^-ρ(u-𝐏_N^ρ,ρu)] =∑_k=N+1^∞u_k κ^m(k+m)!Γ(k+ρ+1)/2^m(k!)^2J_k-m,ψ^m,m(x). Set h_k^l,m=κ^2(l-m)(k+l)!(k-m)!/(k+m)!(k-l)!. Then, we have _ψ D_a,x^l-ρ(u-𝐏_N^ρ,ρu)^2 _ϖ^l,l =∑_k=N+1^∞|u_k|^2 κ^2m((k+m)!)^2Γ^2(k+ρ+1)/2^2m(k!)^4γ_k-m^m,mh_k^l,m≤max_k≥ N+1{h_k^l,m}_ψ D_a,x^m-ρu^2_ϖ^m,m≤ cκ^2(l-m)N^2(l-m)_ψ D_a,x^m-ρu^2_ϖ^m,m. The proof is completed. Consider the H^1 projection operator 𝐏_N^1,α,β: H^1_0,ψ→ F_N^ψ∩ H^1_0,ψ such that (δ^1_ψ(𝐏_N^1,α,βu-u), δ^1_ψv)_ϖ^α,β=0, ∀ v∈ F_N^ψ∩ H^1_0,ψ. Let -1<α,β<1. If u∈ H^1_0,ψ(a,b) and δ^1_ψu∈ B_α,β^m-1(a,b), then for 1≤ m≤ N+1, 𝐏_N^1,α,βu-u_1,ϖ^α,β≲ N^1-mδ^m_ψu_ϖ^α+m-1,β+m-1. Denote v(x)=δ^1_ψu(x). Setting ϕ(x)=∫_a^x 𝐏_N-1^α,βv dψ-ψ(x)-ψ_a/ψ_b-ψ_a∫_a^b𝐏_N-1^α,βv dψ, we have ϕ∈ F_N^ψ∩ H^1_0,ψ and δ^1_ψϕ=𝐏_N-1^α,βv-κ/2∫_a^b𝐏_N-1^α,βv dψ. Hence, by the triangle inequality, one has v-δ^1_ψϕ_ϖ^α,β≤v-𝐏_N-1^α,βv_ϖ^α,β+ κ√(γ_0^α,β)/2| ∫_a^b𝐏_N-1^α,βv dψ|. Due to u(a)=u(b)=0, we derive for -1<α,β<1, |∫_a^b𝐏_N-1^α,βv dψ| =|∫_a^b𝐏_N-1^α,βv-vdψ|≤√(γ_0^-α,-β)v-𝐏_N-1^α,βv_ϖ^α,β. Then, by Theorem <ref> we have δ^1_ψ(𝐏_N^1,α,βu-u)_ϖ^α,β≤v-δ^1_ψϕ_ϖ^α,β≤ cv-𝐏_N-1^α,βv_ϖ^α,β≲ N^1-mδ^m_ψu_ϖ^α+m-1,β+m-1. The desired estimate follows from the Poincaré inequality. §.§.§ Interpolation approximation Let {x_j^α,β}_j=0^N be the mapped Gauss-Lobatto nodes defined in (<ref>). For a function u(x)∈ C[a,b], we define the interpolation operator 𝐈_N^α,β:C[a,b]→ F_N^ψ at the given nodes {x^α,β_j}_j=0^N such that 𝐈_N^α,β u(x) = u(x), x=x^α,β_j, j=0,1,⋯,N. Let α,β>-1 and k,m,N∈ℕ. For u∈ B^m_α,β(a,b) (m≥1) and 0≤ m≤m=min{m,N+1}, we have δ^k_ψ(u-𝐈_N^α,β u)_ϖ^α+k,β+k≲ N^k-mδ^m_ψu_ϖ^α+m,β+m. Note that 𝐈_N^α,β[𝐏_N^α,βu]=𝐏_N^α,βu. Hence one has δ^k_ψ(u-𝐈_N^α,β u)_ϖ^α+k,β+k≤δ^k_ψ(u-𝐏_N^α,β u)_ϖ^α+k,β+k +δ^k_ψ[𝐈_N^α,β(u-𝐏_N^α,β u)]_ϖ^α+k,β+k. From Theorems A.2, A.3, and A.4 in Appendix A, we have δ^k_ψ[𝐈_N^α,β(u-𝐏_N^α,β u)]_ϖ^α+k,β+k≲ N^k-mδ^m_ψu_ϖ^α+m,β+m. Combining Theorem <ref> with the above estimates, we obtain the desired result. § SPECTRAL AND COLLOCATION METHOD BASED ON THE MJFS §.§ Fractional IVP §.§.§ Petrov-Galerkin spectral method For simplicity, we first consider the linear IVP with 0<μ<1: _ψ D_a,x^μu(x)=f(x), x∈(a,b], _ψ D_a,x^-(1-μ)u(a)=g_0. We assume that g_0=0 without loss of generality. (If g_0≠0, let u(x)=v(x)+g_0(ψ(x)-ψ_a)^μ-1/Γ(μ); then _ψ D_a,x^-(1-μ)v(a)=0.) It is clear that _ψ D_a,x^-(1-μ)ϕ(a)=0 for every ϕ∈ E_N^-μ,ψ. The Petrov-Galerkin spectral scheme of (<ref>) reads: find u_N∈ E_N^-μ,ψ such that (_ψ D_a,x^μu_N, v_N)_ϖ^0,0=(𝐈_N^0,0f,v_N)_ϖ^0,0, ∀ v_N∈ F_N^ψ. Let u and u_N be the solutions of (<ref>) and (<ref>), respectively. If f∈ C[a,b]∩ B^m-1_-1,-1(a,b) (m≥1), then we have, for m≤ N+1, _ψ D_a,x^μ(u-u_N)_ϖ^0,0+ (u-u_N)_ϖ^0,0≲ N^-mδ^m_ψf_ϖ^m,m. It is clear that the set {(ψ(x)-ψ_a)^μ J_n,ψ^-μ,μ(x)}_n=0^∞ forms a complete orthogonal basis of L^2_ϖ^-μ,-μ(a,b).
For u∈ L^2_ϖ^-μ,-μ(a,b), the projection error u-𝐏_N^-μ,-μu is u-𝐏_N^-μ,-μu=∑_n=N+1^∞u_n (ψ(x)-ψ_a)^μ J_n,ψ^-μ,μ(x) with u_n=∫_a^bu(x)(ψ(x)-ψ_a)^μ J_n,ψ^-μ,μ(x)ϖ^-μ,-μ(x) dx/(ψ(x)-ψ_a)^μ J_n,ψ^-μ,μ(x)^2_ϖ^-μ,-μ. By acting with the operator _ψ D_a,x^μ on the projection error and utilizing Theorem <ref>, we have _ψ D_a,x^μ(u-𝐏_N^-μ,-μu) =∑_n=N+1^∞u_nΓ(n+μ+1)/n!J_n,ψ^0,0(x). Then we have (_ψ D_a,x^μ(u-𝐏_N^-μ,-μu),v)_ϖ^0,0=0, ∀ v∈ F_N^ψ. Since _ψ D_a,x^μu=f, it follows that (f-_ψ D_a,x^μ𝐏_N^-μ,-μu,v)_ϖ^0,0=0, ∀ v∈ F_N^ψ, and we know _ψ D_a,x^μ𝐏_N^-μ,-μu= 𝐏_N^0,0f. We also have _ψ D_a,x^μu_N=𝐈_N^0,0f by (<ref>). Therefore, we get _ψ D_a,x^μ(𝐏_N^-μ,-μu-u_N)= 𝐏_N^0,0f-𝐈_N^0,0f. Making use of the triangle inequality leads to _ψ D_a,x^μ(u-u_N)_ϖ^0,0 ≤ _ψ D_a,x^μ(u-𝐏_N^-μ,-μu)_ϖ^0,0 + _ψ D_a,x^μ(𝐏_N^-μ,-μu-u_N)_ϖ^0,0 ≤ _ψ D_a,x^μ(u-𝐏_N^-μ,-μu)_ϖ^0,0 +f-𝐏_N^0,0f_ϖ^0,0+ f-𝐈_N^0,0f_ϖ^0,0. From the approximation results in Theorems <ref>, <ref> and <ref>, we estimate the three terms on the right-hand side of (<ref>) as follows: _ψ D_a,x^μ(u-𝐏_N^-μ,-μu)_ϖ^0,0 ≲ N^-m_ψ D_a,x^m+μu_ϖ^m,m= N^-mδ^m_ψf_ϖ^m,m, f-𝐏_N^0,0f_ϖ^0,0 ≲ N^-mδ^m_ψf_ϖ^m,m, f-𝐈_N^0,0f_ϖ^0,0 ≲ N^-mδ^m_ψf_ϖ^m,m. Then, we conclude _ψ D_a,x^μ(u-u_N)_ϖ^0,0≲ N^-mδ^m_ψf_ϖ^m,m. From Lemma <ref> and the boundedness of the operator _ψ D_a,x^-μ, we obtain u-u_N_ϖ^0,0= _ψ D_a,x^-μ(_ψ D_a,x^μ (u-u_N))_ϖ^0,0≤ c_ψ D_a,x^μ (u-u_N)_ϖ^0,0. The proof is completed by combining estimates (<ref>) and (<ref>). In order to implement the Petrov-Galerkin scheme (<ref>), we expand the approximate solution as u_N(x)=∑_k=0^Nu_k(ψ(x)-ψ_a)^μ J_k,ψ^-μ,μ(x). Inserting u_N in (<ref>) and taking v_N=J_n,ψ^0,0(x), n=0,1,⋯,N, one has u_k=(2k+1)k!/2Γ(k+μ+1) (𝐈_N^0,0f,J_k,ψ^0,0)_ϖ^0,0, k=0,1,⋯,N. We approximate the integral on the right-hand side of (<ref>) by the Gauss-MJF-type quadrature in Theorem <ref> and obtain the approximate numerical solution u_N by (<ref>). It is easy to implement the Galerkin spectral scheme of (<ref>): find u_N∈ F_N^ψ such that (_ψ D_a,x^μu_N, v_N)_ϖ^α+μ,0=(𝐈_N^α+μ,0f,v_N)_ϖ^α+μ,0, ∀ v_N∈ F_N^ψ. We can obtain an error estimate for the Galerkin spectral scheme (<ref>) in an analogous way. The scheme (<ref>) achieves a convergence rate that depends only on the regularity of f (see Theorem <ref>). However, the convergence rate of the scheme (<ref>) depends on the regularity of the exact solution u. Now we consider the IVP with the ψ-Caputo derivative for 0≤μ≤1: _Cψ D_a,x^μu(x)=f(x), x∈(a,b], u(a)=0. Noting the initial condition u(a)=0 and Lemma <ref>, we know _Cψ D_a,x^μu(x)=_ψ D_a,x^μu(x). Hence, the above analysis is also valid for this case upon replacing F_N^ψ by F_N^ψ∩{ϕ | ϕ(a)=0}. It is easy to generalize the previous analysis to the higher-order case μ∈(n-1,n), n>1. We omit the details. §.§.§ Spectral collocation method For nonlinear problems, the collocation method is more popular than the Galerkin method. Let {x_j^α,β}_j=0^N be the collocation points as defined in (<ref>). Now consider the nonlinear IVP (P1) with 0<μ<1: _ψ D_a,x^μu(x)=f(x,u), x∈(a,b], _ψ D_a,x^-(1-μ)u(a)=0. We present the spectral collocation scheme for (<ref>) as follows: find u_N∈ F_N^ψ such that _ψ D_a,x^μ u_N(x_j^α,β)=f(x_j^α,β,u_N(x_j^α,β)), j=1,2,⋯,N and _ψ D_a,x^-(1-μ)u_N(x_0^α,β)=0. Let us introduce the mapped Lagrange interpolation basis functions L_j(x)=∏_i≠ jψ(x)-ψ(x_i^α,β)/∏_i≠ jψ(x_j^α,β)-ψ(x_i^α,β), j=0,1,⋯,N, which satisfy L_j(x_i^α,β)=δ_ji.
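The mapped Lagrange basis just defined is simple to evaluate directly from its product formula. The following minimal Python sketch verifies the cardinality property L_j(x_i^α,β)=δ_ji; the mapping ψ(x)=x^2.3 and the equispaced nodes are illustrative stand-ins for the mapped Gauss-Lobatto nodes:

```python
import numpy as np

def mapped_lagrange(x, nodes, psi):
    """Evaluate L_j(x) = prod_{i!=j} (psi(x)-psi(x_i)) / (psi(x_j)-psi(x_i))
    for j = 0..N; returns an array of shape (N+1, len(x))."""
    px, pn = psi(np.atleast_1d(x)), psi(nodes)
    L = np.ones((len(pn), px.size))
    for j in range(len(pn)):
        for i in range(len(pn)):
            if i != j:
                L[j] *= (px - pn[i]) / (pn[j] - pn[i])
    return L

psi = lambda x: x ** 2.3
nodes = np.linspace(0.0, 1.0, 6)
print(np.round(mapped_lagrange(nodes, nodes, psi), 12))  # identity matrix
```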
The differentiation matrix of the ψ-fractional derivative (DMFD) is given by [𝐃^μ,ψ]_(N+1)× (N+1):=([ _ψ D_a,x^μL_j](x_k^α,β))_k ,j=0^N. The above matrix has to be revised according to the initial condition. Let 𝐛=[_ψ D_a,x^-(1-μ)L_0(a),⋯, _ψ D_a,x^-(1-μ)L_N(a)]. We replace the first row of 𝐃^μ,ψ with 𝐛. Then, using the same notation, the discrete scheme (<ref>) takes the matrix-vector form 𝐃^μ,ψ𝐮=f(𝐱,𝐮) with 𝐱=[x_0^α,β,x_1^α,β,⋯,x_N^α,β]^T, 𝐮=u_N(𝐱) (the first component of f is set to zero). The system (<ref>) is nonlinear, and iterative techniques can be employed to solve it. To evaluate the entries of the DMFD, we first note that L_j(x)=∑_i=0^N l_ijJ_i,ψ^α,β(x) with l_kj=J_k,ψ^α,β(x_j^α,β)ϖ_j/γ_k^α,β, k=0,1,⋯,N-1; l_Nj=J_N,ψ^α,β(x_j^α,β)ϖ_j/(2+α+β+1/N)γ_N^α,β. Then we need to evaluate [ _ψ D_a,x^μJ_j,ψ^α,β](x_k^α,β), which can be done efficiently by the relations in Section <ref>. For the nonlinear IVP (P2) with the Caputo derivative (0<μ<1): _Cψ D_a,x^μu(x)=f(x,u), x∈(a,b], u(a)=0, the DMFD becomes [𝐃^μ,ψ]_N× N:=([ _Cψ D_a,x^μL_j](x_k^α,β))_k ,j=1^N. The number of unknowns is N, as 𝐮=u_N(𝐱) with 𝐱=[x_1^α,β,⋯,x_N^α,β]^T. It is possible to implement the spectral collocation scheme for (<ref>) as follows: find u_N∈ E_N^-μ,ψ such that _ψ D_a,x^μu_N(x_j^-μ,μ) =f(x_j^-μ,μ,u_N(x_j^-μ,μ)), j=1,2,⋯,N and _ψ D_a,x^-(1-μ)u_N(x_0^-μ,μ)=0. In this circumstance, the mapped Lagrange interpolation basis functions should be replaced by L_j(x)=(ψ(x)-ψ_a)^μ/(ψ(x_j^-μ,μ)-ψ_a)^μ∏_i≠ jψ(x)-ψ(x_i^-μ,μ)/∏_i≠ jψ(x_j^-μ,μ)-ψ(x_i^-μ,μ), j=0,⋯,N. It is clear that the spectral collocation scheme (<ref>) should achieve the same convergence rate as the Petrov-Galerkin scheme (<ref>). §.§ Fractional BVP §.§.§ Petrov-Galerkin spectral method We consider the linear BVP (P3) with f(x,u)=f(x), that is, {[_ψ D^μ_a,x u(x)=f(x), x∈(a,b), 1<μ<2,; [_ψ D^-(2-μ)_a,x u](a)=[_ψ D^-(2-μ)_a,x u](b)=0. ]. Denote the discrete space by U_N:={ϕ=(ψ(x)-ψ_a)^μ-1φ: φ∈ F_N-1^ψ, _ψ D_a,x^-(2-μ)ϕ(b)=0 }. Let V_N^0:=F_N^ψ∩ H^1_0,ψ(a,b). The Petrov-Galerkin spectral scheme of (<ref>) is to find u_N∈ U_N such that, for all w_N∈ V_N^0, (_ψ D_a,x^μ-1u_N, δ^1_ψw_N)_ϖ^0,0=-(f, w_N)_ϖ^0,0. In view of (<ref>), the scheme (<ref>) can be equivalently reformulated as: find v_N:=_ψ D_a,x^-(2-μ)u_N∈ V_N^0 such that (δ^1_ψ v_N, δ^1_ψw_N)_ϖ^0,0=-(f, w_N)_ϖ^0,0. Let u and u_N be the solutions of (<ref>) and (<ref>), respectively. If v=_ψ D_a,x^-(2-μ)u∈ H^1_0,ψ(a,b) and δ^1_ψv∈ B^m-1_0,0(a,b), then we have _ψ D_a,x^μ-1(u-u_N)_ϖ^0,0≲ N^1-m_ψ D_a,x^m+μ-2u_ϖ^m-1,m-1. Moreover, if f∈ B_0,0^m-2(a,b) (m≥2), then we get _ψ D_a,x^μ-1(u-u_N)_ϖ^0,0≲ N^1-mδ^m-2_ψf_ϖ^m-1,m-1. Using the standard argument for the error analysis of Galerkin approximations, we know that δ^1_ψ(v-v_N)_ϖ^0,0=inf_v^*∈ V_N^0δ^1_ψ(v-v^*)_ϖ^0,0. By the approximation result of Theorem <ref>, we have δ^1_ψ(v-v_N)_ϖ^0,0≤ C N^1-mδ^m_ψ v _ϖ^m-1,m-1. Noting that v=_ψ D_a,x^-(2-μ)u and v_N=_ψ D_a,x^-(2-μ)u_N, we obtain the estimate (<ref>). By δ^m-2_ψf=_ψ D_a,x^m+μ-2u, the estimate (<ref>) follows immediately from (<ref>). For implementing the discrete scheme (<ref>), set ϕ_n(x)=(ψ(x)-ψ_a)^μ-1J_n,ψ^1-μ,μ-1(x), n=1,2,⋯. Since J_n,ψ^-1,1(x)=-κ(n+1)/2n(ψ_b-ψ(x))J_n-1,ψ^1,1(x), we have from Theorem <ref> _ψ D_a,x^-(2-μ)ϕ_n(x) =Γ(n+μ)/(n+1)!(ψ(x)-ψ_a)J_n,ψ^-1,1(x)=-κΓ(n+μ)/2n n!(ψ_b-ψ(x))(ψ(x)-ψ_a)J_n-1,ψ^1,1(x). One verifies that {ϕ_i}_i=1^N-1 constitutes a basis of U_N. Since J_n,ψ^-1,-1(x)=-κ^2/4(ψ_b-ψ(x))(ψ(x)-ψ_a)J_n-2,ψ^1,1(x), we have V_N^0={J_n,ψ^-1,-1, 2≤ n≤ N}. Let u_N=∑_i=1^N-1u_i ϕ_i(x), w_N=J_n,ψ^-1,-1 (n=2,⋯,N), and insert them in (<ref>).
Noting that δ_ψ^1J_n,ψ^-1,-1=κ(n-1)/2J_n-1,ψ^0,0 (n≥2), we have u_i=-(2i+1) (i-1)!/κΓ(i+μ)(f,J_i+1,ψ^-1,-1)_ϖ^0,0, for i=1,2,⋯,N-1. We note that, using the MJFs as basis functions, the matrix of the linear system is diagonal. §.§.§ Spectral collocation method Following an argument similar to that in Subsection <ref>, the collocation method for the boundary value problem replaces each fractional differential operator in the equation by the differentiation matrix corresponding to that operator, taking the boundary value conditions into account. Consider the abstract boundary value problem F(_ψ D^μ_1u,_ψ D^μ_2u,⋯, _ψ D^μ_ku, u,x)=0, x∈(a,b) with μ_1>μ_2≥⋯≥μ_k, μ_1∈(1,2). The derivative operator _ψ D^μ_i (1≤ i≤ k) can be _ψ D_a,x^μ, _ψ D_x,b^μ, _Cψ D_a,x^μ, or _Cψ D_x,b^μ. The boundary value conditions are given as ℬ_au(a)=ℬ_bu(b)=0, where ℬ_a and ℬ_b are certain kinds of boundary value operators. Using the differentiation matrices, the discrete scheme of (<ref>) takes the form F(𝐃^μ_1𝐮,𝐃^μ_2𝐮,⋯, 𝐃^μ_k𝐮, 𝐮,𝐱)=0, which will be solved for the unknowns 𝐮. § NUMERICAL VALIDATION We are now in a position to present numerical tests. In the following numerical simulations, we consider various choices of ψ as in Remark <ref>. The main purpose of the examples is to check the convergence behaviour of the numerical solution. To measure the accuracy of the presented method when the exact solution is known, the errors are defined by err(N)=max{|u_N(x_i)-u(x_i)|}, where x_i=a+(b-a)i/500 (i=0,⋯,500) for the Petrov-Galerkin method and x_i=x_i^α,β (i=0,⋯,N) for the spectral collocation method, and u_N(x) and u(x) are the numerical and exact solutions, respectively. All of the computations are performed with Matlab R2020a on a laptop with an AMD Ryzen 7 5800H with Radeon Graphics, 3.20 GHz. §.§ Fractional IVPs Let 0<μ<1. In this section, we apply the MJF spectral and collocation methods to IVPs of linear and nonlinear fractional ODEs. Consider the linear FODE (<ref>) with 0<μ<1. We choose f(x) such that the problem satisfies one of the following four cases: C11. u(x)=(ψ(x)-ψ_a)^μexp(ψ(x)-ψ_a). C12. u(x)=(ψ(x)-ψ_a)^μsin(πκ(ψ(x)-ψ_a)). C13. u(x)=(ψ(x)-ψ_a) exp(ψ(x)-ψ_a). C14. u(x)=(ψ(x)-ψ_a)sin(πκ(ψ(x)-ψ_a)). We first apply the Petrov-Galerkin scheme (<ref>) to solve the problem in this example. The u(x) in C11 and C12 leads to a smooth f(x), whereas that in C13 and C14 yields an f(x) of low regularity. According to Theorem <ref>, the predicted convergence rate is “spectral accuracy" for C11 and C12, while it is limited for C13 and C14. The numerical errors of our tests are plotted in Figures <ref>-<ref>. The “spectral accuracy" is observed from Figures <ref>-<ref>, whilst the convergence rate is finite in Figures <ref>-<ref>, which agrees with the theoretical results. We next apply the spectral collocation scheme (<ref>) to solve the same problem for comparison. The parameters α and β in the scheme (<ref>) are always taken to be zero for simplicity. The numerical errors of our tests are plotted in Figures <ref>-<ref>.
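The differing regularity of f in the four cases above, which drives the convergence rates just described, can be made concrete in the special case ψ(x)=x with a=0 (a simplifying assumption for illustration): for C13 one has u(x)=x e^x, and the term-wise rule _ψ D_a,x^μ x^p = Γ(p+1)/Γ(p+1-μ) x^p-μ shows that f behaves like x^1-μ near the left endpoint, so f is not smooth there. A minimal Python sketch (assuming scipy is available):

```python
import numpy as np
from scipy.special import gamma

def rl_deriv_series(mu, x, K=60):
    """f = D^mu [x e^x] for psi(x)=x, a=0, applying the term-wise rule
    D^mu x^p = Gamma(p+1)/Gamma(p+1-mu) x^(p-mu) to x e^x = sum_k x^(k+1)/k!."""
    k = np.arange(K)
    coef = gamma(k + 2) / (gamma(k + 2 - mu) * gamma(k + 1))
    return np.sum(coef[:, None] * x[None, :] ** (k[:, None] + 1 - mu), axis=0)

mu, x = 0.5, np.array([1e-6, 1e-4, 1e-2])
f = rl_deriv_series(mu, x)
print(f / x ** (1 - mu))  # approx Gamma(2)/Gamma(2-mu): f ~ x^(1-mu) near 0
```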
The convergence behaviour of the spectral collocation scheme is quite interesting: the convergence rate is finite in Figures <ref>-<ref> owing to the low regularity of the exact solution u(x) (C11 and C12), whilst “spectral accuracy" is achieved for the smooth exact solution u(x) (C13 and C14) in Figures <ref>-<ref>. For 0<μ<1, consider _Cψ D_a,x^μu(x)=g(x)-u^2, x∈(a,b], u(a)=0, where g(x) is chosen such that the problem satisfies one of the following two cases: C21. u(x)=∑_k=1^30Γ(k-μ+1)(ψ(x)-ψ_a)^k/k!. C22. u(x)=120(ψ(x)-ψ_a)^5+μ/Γ(μ+6)+ Γ(μ+4)(ψ(x)-ψ_a)^3+2μ/Γ(2μ+4) +Γ(2μ+3)(ψ(x)-ψ_a)^2+3μ/Γ(3μ+3). The Petrov-Galerkin spectral scheme is awkward for the above problem because of the nonlinearity. Here we apply the spectral collocation scheme (<ref>) to solve it. The exact solution satisfies u∈ F_30^ψ in C21, so we take N = 30 and apply the Newton method to solve the nonlinear system with a total of 10 iterations. We list the numerical errors for various ψ(x) and μ in Table <ref>. For C22, we list only the numerical errors with ψ(x)=x^2.3, x∈ [0,1], and different μ in Table <ref>. For other ψ(x), the numerical results are similar. §.§ Fractional BVPs Let 1<μ<2. In this subsection, we apply the MJF spectral and collocation methods to BVPs with the ψ-fractional derivative. As a standard example, we consider the ψ-fractional Helmholtz equation λ^2 u(x) - _ψ D_a,x^μu(x)=f(x), x∈ (a,b). The homogeneous Dirichlet condition u(a)=u(b)=0 is used here in order to ensure the well-posedness of the above model problem. In the Petrov-Galerkin scheme (<ref>), denoting the mass and stiffness matrices by 𝐌=(m_ij) and 𝐒=(s_ij) with m_ij =(ϕ_i,J_j+1,ψ^-1,-1)_ϖ^0,0, i,j=1,⋯,N-1, s_ij =(_ψ^R D_a,x^μ-1ϕ_i,δ^1_ψJ_j+1,ψ^-1,-1)_ϖ^0,0, i,j=1,⋯,N-1, we have the discrete representation (λ^2 𝐌 +𝐒)𝐮=𝐟, where 𝐟=[(f,J_2,ψ^-1,-1)_ϖ^0,0,⋯,(f,J_N,ψ^-1,-1)_ϖ^0,0]^T and 𝐮= [u_1,⋯,u_N-1]^T. As stated in Subsection <ref>, the stiffness matrix is diagonal with s_ii=κΓ(i+μ)/(2i+1)Γ(i). However, the mass matrix is full; it can be computed exactly by the Gauss-MJF-type quadrature in Theorem <ref>. In the spectral collocation scheme (<ref>), making use of the differentiation matrix 𝐃^μ, we have the discrete representation (λ^2 𝐈 -𝐃^μ)𝐮=𝐟, where 𝐈 is the identity matrix, 𝐮=u(𝐱), 𝐟= f(𝐱) and 𝐱=[x_1^α,β,⋯,x_N-1^α,β]^T. Consider the following three cases: C31. Solution: u(x)=(ψ(x)-ψ_a)^μ-1∑_k=1^30J_k,ψ^1-μ,μ-1(x)/k!. C32. Smooth solution: u(x)=sin(κπ(ψ(x)-ψ_a)). C33. Solution of low regularity: u(x)=(ψ_b-ψ(x))(ψ(x)-ψ_a)^2μ. One verifies that u(a)=0, u(b)≠0, and _ψ D_a,x^-(2-μ)u(a)=_ψ D_a,x^-(2-μ)u(b)=0 in case C31. Hence the Petrov-Galerkin scheme is used to solve the problem (<ref>) with λ=0 for case C31. The numerical errors are listed in Table <ref>. It is shown that spectral accuracy is achieved for various fractional orders μ. We use the spectral collocation scheme to solve the problem (<ref>) for the cases C32 and C33 by taking different values of N,α,β,μ,λ. The numerical errors are listed in Tables <ref>-<ref>. It is shown that spectral accuracy is achieved in case C32 for different values of the parameters since the solution is smooth.
However, a finite convergence order is observed from Table <ref> for case C33, corresponding to the low regularity of the solution. The interesting facts about the convergence behaviour of the spectral and collocation methods confirmed by the numerical tests are summarized below: ∙ Exponential convergence is demonstrated for both the Petrov-Galerkin method and the spectral collocation method, but in different settings. ∙ The function ψ(x) has little effect on the convergence behaviour of our numerical method. The differentiation matrices of the spectral collocation method for various choices of ψ(x) are almost the same. ∙ The well-designed Petrov-Galerkin spectral method can achieve both exponential convergence and a well-conditioned discrete system. The spectral collocation method can achieve the same convergence rate as the Petrov-Galerkin spectral method. ∙ The influence of the parameters α,β of the spectral collocation scheme upon the computational accuracy is not obvious. § CONCLUSIONS The main reasons to consider ψ-fractional differential equations are the following: (i) The choice of ψ(x) gives a more general approach to various problems. (ii) We can discover the inner structure of various fractional-order differential equations through numerical simulation. (iii) We can use these tools to explain and predict natural phenomena in the real world. In this paper, we deal with the spectral approximation of the fractional calculus with respect to a function ψ, also named ψ-fractional calculus, which generalizes the Hadamard and the commonly used Riemann-Liouville fractional calculi. We consider spectral-type methods with mapped Jacobi functions as basis functions and obtain efficient algorithms to solve ψ-fractional differential equations. In particular, we set up the Petrov-Galerkin spectral method and the spectral collocation method for initial and boundary value problems involving Riemann-Liouville and Caputo ψ-fractional derivatives. We develop a basic approximation theory for the MJFs and derive error estimates for the presented methods. We also derive a recurrence relation to evaluate the collocation differentiation matrix for implementing the spectral collocation algorithm. Numerical examples confirm the theoretical results and demonstrate the effectiveness of the spectral and collocation methods. The demand for flexibility of the numerical algorithm brings the multi-domain method into consideration <cit.>. The presented method can be extended to a multi-domain version. Besides, we note that it is not difficult to employ the spectral collocation method to solve variable-order fractional differential equations of more general form <cit.>. We will conduct the error analysis of the method for solving the variable-order (non-constant case) ψ-fractional differential problem in future work. Meanwhile, since the well-posedness and regularity of variable-order fractional differential equations are still open <cit.>, we also expect to address these questions in the future. § ACKNOWLEDGEMENT C. L. was partly supported by the National Natural Science Foundation of China under Grant No.
12271339. § APPENDIX A We first give the generalized Gronwall inequality. Theorem A.1. Assume that ξ(x) is a nonnegative function locally integrable on [a,b], C>0 is a constant, and η(x) is continuous and non-negative, such that η(x)≤ξ(x)+C∫_a^xψ'(s)(ψ(x)-ψ(s))^μ-1η(s) ds, x∈[a,b]. Then for x∈[a,b], there holds η(x)≤ξ(x)+ ∫_a^x∑_n=1^∞(CΓ(μ))^n/Γ(nμ)ψ'(τ)(ψ(x)-ψ(τ))^nμ-1ξ(τ) dτ. In particular, if ξ(x) is a nondecreasing function on [a,b], then η(x)≤ξ(x)E_μ(CΓ(μ)(ψ(x)-ψ_a)^μ), where E_β(s):=∑_k=0^∞s^k/Γ(1+kβ) denotes the Mittag-Leffler function. The theorem is the special case λ=0 of Theorem 2.3 in <cit.>. We estimate the projection error at the endpoints as follows. Theorem A.2. Let α,β>-1 and u∈ B^m_α,β(a,b). We have: * If α+1<m≤ N+1, then |(u-𝐏_N^α,βu)(b)|≤ cm^-1/2N^1+α-mδ^m_ψ u_ϖ^α+m,β+m; * If β+1<m≤ N+1, then |(u-𝐏_N^α,βu)(a)|≤ cm^-1/2N^1+β-mδ^m_ψ u_ϖ^α+m,β+m, where c is a positive constant independent of m, N and u. For n≥ k, denote h_n,k^α,β=(d_n,k^α,β)^2γ_n-k^α+k,β+k and let m=min{m,N+1}. By the Cauchy-Schwarz inequality we have |(u-𝐏_N^α,βu)(b)|=|∑_n=N+1^∞u_nJ_n,ψ^α,β(b)|≤∑_n=N+1^∞|u_n||J_n,ψ^α,β(b)| ≤ (∑_n=N+1^∞|J_n,ψ^α,β(b)|^2(h_n,m^α,β)^-1)^1/2(∑_n=N+1^∞|u_n|^2h_n,m^α,β)^1/2 ≤ (∑_n=N+1^∞|J_n,ψ^α,β(b)|^2(h_n,m^α,β)^-1)^1/2δ^m_ψu_ϖ^α+m,β+m. Since J_n,ψ^α,β(b)=P_n^α,β(1), following the same arguments as in Lemma 3.10 of <cit.> (page 135), we have ∑_n=N+1^∞|J_n,ψ^α,β(b)|^2(h_n,m^α,β)^-1≤ c m^-1N^2(1+α-m). This gives the first inequality. Since J_n,ψ^α,β(a)=P_n^α,β(-1), we derive the second inequality in the same way. The stability of the interpolation operator is given in the following theorem. Theorem A.3. For any v(x)∈ C[a,b]∩ B^1_α,β(a,b), we have 𝐈_N^α,βv_ϖ^α,β≲(N^-α-1|v(b)|+N^-β-1|v(a)|+v_ϖ^α,β+ κ^-2 N^-1|v|_B^1_α,β). Note that x(s)=ψ^-1((s+1)/κ+ψ_a) and let v(s)=v(x(s)). Then 𝐈_N^α,βv(x)=I_N^α,βv(s):=∑_j=0^N v(s_j^α,β)l_j(s), s∈[-1,1], with I_N^α,β the Jacobi-Gauss-Lobatto interpolation operator and l_j(s)=∏_i=0, i≠ j^N s-s_i^α,β/s_j^α,β-s_i^α,β, j=0,1,⋯,N the standard Lagrange interpolation basis functions at the nodes {s_j^α,β}_j=0^N. Because ∫_a^b (v(x))^2ϖ^α,β(x) dx=∫_-1^1 (v(s))^2ω^α,β(s) ds, we have v_ϖ^α,β=v_ω^α,β. Moreover, by ∫_-1^1( dv/ ds)^2ω^α,β(s) ds= ∫_a^b(κ^-1δ^1_ψv)^2ϖ^α,β(x) dx=1/κ^2δ^1_ψv_ϖ^α,β, we have δ^1_ψ v_ϖ^α,β=κ^2∂_sv_ω^α,β. Now it follows from Lemma 3.11 in <cit.> that I_N^α,βv(s)_ω^α,β ≲ N^-α-1|v(1)|+N^-β-1|v(-1)|+ v_ω^α,β+ N^-1|v|_1,ω^α+1,β+1≲ N^-α-1|v(b)|+N^-β-1|v(a)|+ v_ϖ^α,β+κ^-2 N^-1|v|_B^1_α,β. This ends the proof. Theorem A.4. Let α,β>-1 and ϕ∈ F_N^ψ. Then we have δ^m_ψϕ_ϖ^α+m,β+m≲ N^mϕ_ϖ^α,β. Let ϕ(x)=∑_k=0^N ϕ_kJ_k,ψ^α,β(x), ϕ_k=∫_a^b ϕ(x) J_k,ψ^α,β(x) ϖ^α,β(x) dx/J_k,ψ^α,β^2_ϖ^α,β. Then ϕ^2_ϖ^α,β=∑_k=0^N ϕ^2_kγ_k^α,β, and δ^m_ψϕ^2_ϖ^α+m,β+m=∑_k=m^N ϕ^2_k(d_k,m^α,β)^2γ_k-m^α+m,β+m =∑_k=m^N ϕ^2_k γ_k^α,β(d_k,m^α,β)^2γ_k-m^α+m,β+m/γ_k^α,β≤κ^2m(2N+α+β+1)N!Γ(N+m+α+β+1)/(2(N-m)+α+β+1)(N-m)!Γ(N+α+β+1)∑_k=m^N ϕ^2_k γ_k^α,β≲ N^2mϕ^2_ϖ^α,β. The proof is completed. [AlmMM18] Almeida R, Malinowska AB, Monteiro MTT. Fractional differential equations with a Caputo derivative with respect to a kernel function and their applications. Mathematical Methods in the Applied Sciences, 41(1): 336–352, 2018. [CaiKL22] Cai M, Karniadakis GE, Li CP. Fractional SEIR model and data-driven predictions of COVID-19 dynamics of Omicron variant. Chaos, 32(7): 071101, 2022. [CheS22] Chen S, Shen J. Log orthogonal functions: approximation properties and applications. IMA Journal of Numerical Analysis, 42(1): 712–743, 2022. [CheS23] Chen S, Shen J.
Log orthogonal functions in semi-infinite intervals: approximation results and applications. SIAM Journal on Numerical Analysis, 61(1): 110–134, 2023. [CheSW16] Chen S, Shen J, Wang LL. Generalized Jacobi functions and their applications to fractional differential equations. Mathematics of Computation, 85(300): 1603–1638, 2016. [CheSZZ20] Chen S, Shen J, Zhang ZM, Zhou Z. A spectrally accurate approximation to subdiffusion equations using the log orthogonal functions. SIAM Journal on Scientific Computing, 42(2): A849–A877, 2020. [DenHWX20] Deng WH, Hou R, Wang WL, Xu PB. Modeling Anomalous Diffusion: From Statistics to Mathematics. World Scientific, Singapore, 2020. [DieFFL05] Diethelm K, Ford NJ, Freed AD, Luchko Y. Algorithms for the fractional calculus: A selection of numerical methods. Computer Methods in Applied Mechanics and Engineering, 194(6-8): 743–773, 2005. [DinLY17] Ding HF, Li CP, Yi Q. A new second-order midpoint approximation formula for Riemann-Liouville derivative: algorithm and its application. IMA Journal of Applied Mathematics, 82(5): 909–944, 2017. [FahFRS21] Fahad HM, Fernandez A, Rehman M, Siddiqi M. Tempered and Hadamard-type fractional calculus with respect to functions. Mediterranean Journal of Mathematics, 18(4): 143, 2021. [FanLL22] Fan EY, Li CP, Li ZQ. Numerical approaches to Caputo-Hadamard fractional derivatives with applications to long-term integration of fractional differential systems. Communications on Nonlinear Science and Numerical Simulation, 106: 106096, 2022. [FanLS23] Fan EY, Li CP, Stynes M. Discretised general fractional derivative. Mathematics and Computers in Simulation, 208: 501–534, 2023. [GarMS17] Garra R, Mainardi F, Spada G. A generalization of the Lomnitz logarithmic creep law via Hadamard fractional calculus. Chaos, Solitons and Fractals, 102: 333–338, 2017. [Had92] Hadamard J. Essai sur l'étude des fonctions, données par leur développement de Taylor. Journal de Mathématiques Pures et Appliquées, 8: 101–186, 1892. [Hil00] Hilfer R (ed). Applications of Fractional Calculus in Physics. World Scientific, Singapore, 2000. [KilST06] Kilbas AA, Srivastava HM, Trujillo JJ. Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam, 2006. [KucMFF22] Kucche KD, Mali AD, Fernandez A, Fahad HM. On tempered Hilfer fractional derivatives with respect to functions and the associated fractional differential equations. Chaos, Solitons and Fractals, 163: 112547, 2022. [LiC19] Li CP, Cai M. Theory and Numerical Approximations of Fractional Integrals and Derivatives. SIAM, Philadelphia, USA, 2019. [LiL21] Li CP, Li ZQ. Stability and logarithmic decay of the solution to Hadamard-type fractional differential equation. Journal of Nonlinear Science, 31(2): 31, 2021. [LiL22] Li CP, Li ZQ. The finite-time blow-up for semilinear fractional diffusion equations with time ψ-Caputo derivative. Journal of Nonlinear Science, 32(6): 82, 2022. [LiL22JMS] Li CP, Li ZQ. On blow-up for a time-space fractional partial differential equation with exponential kernel in temporal derivative. Journal of Mathematical Sciences, 266: 381–394, 2022. [LiL23] Li CP, Li ZQ. Stability and ψ-algebraic decay of the solution to ψ-fractional differential system. International Journal of Nonlinear Science and Numerical Simulation, 24(2): 695–733, 2023. [LiLW20] Li CP, Li ZQ, Wang Z. Mathematical analysis and the local discontinuous Galerkin method for Caputo-Hadamard fractional partial differential equation. Journal of Scientific Computing, 85(2): 41, 2020. [LiZ15] Li CP, Zeng FH. Numerical Methods for Fractional Calculus.
Chapman and Hall/CRC Press, USA, 2015. [LiZL12] Li CP, Zeng FH, Liu FW. Spectral approximations to the fractional integral and derivative. Fractional Calculus and Applied Analysis, 15(3): 383–406, 2012. [Lom56] Lomnitz C. Creep measurements in igneous rocks. Journal of Geology, 64(5): 473–479, 1956. [Osl70] Osler TJ. Leibniz rule for fractional derivatives generalized and an application to infinite series. SIAM Journal on Applied Mathematics, 18(3): 658–674, 1970. [Pod99] Podlubny I. Fractional Differential Equations. Academic Press, San Diego, USA, 1999. [SheTW11] Shen J, Tang T, Wang LL. Spectral Methods: Algorithms, Analysis and Applications. Springer-Verlag, Berlin, 2011. [Sze75] Szegő G. Orthogonal Polynomials, 4th ed. American Mathematical Society, Providence, USA, 1975. [YanT21] Yang Y, Tang ZY. Mapped spectral collocation methods for Volterra integral equations with noncompact kernels. Applied Numerical Mathematics, 160: 166–177, 2021. [ZakHS22] Zaky MA, Hendy AS, Suragan D. Logarithmic Jacobi collocation method for Caputo-Hadamard fractional differential equations. Applied Numerical Mathematics, 181: 326–346, 2022. [ZenLLBTA14] Zeng FH, Liu FW, Li CP, Burrage K, Turner I, Anh V. A Crank-Nicolson ADI spectral method for the two-dimensional Riesz space fractional nonlinear reaction-diffusion equation. SIAM Journal on Numerical Analysis, 52(6): 2599–2622, 2014. [ZhaLL23] Zhao TG, Li CP, Li DX. Efficient spectral collocation method for fractional differential equation with Caputo-Hadamard derivative. Fractional Calculus and Applied Analysis, 26(6): 2903–2927, 2023. [ZhaoMK19] Zhao TG, Mao ZP, Karniadakis GE. Multi-domain spectral collocation method for variable-order nonlinear fractional differential equations. Computer Methods in Applied Mechanics and Engineering, 348: 377–395, 2019. [ZhaZ23] Zhao TG, Zhao LJ. Jacobian spectral collocation method for spatio-temporal coupled Fokker-Planck equation with variable-order fractional derivative. Communications on Nonlinear Science and Numerical Simulation, 124: 107305, 2023. [ZhaWX13] Zhao XD, Wang LL, Xie ZQ. Sharp error bounds for Jacobi expansions and Gegenbauer-Gauss quadrature of analytic functions. SIAM Journal on Numerical Analysis, 51(3): 1443–1469, 2013. [ZheW22] Zheng XC, Wang H. Analysis and discretization of a variable-order fractional wave equation. Communications on Nonlinear Science and Numerical Simulation, 104: 106047, 2022.
http://arxiv.org/abs/2312.16426v1
{ "authors": [ "Tinggang Zhao", "Zhenyu Zhao", "Changpin Li", "Dongxia Li" ], "categories": [ "math.NA", "cs.NA", "65F60, 65D32, 65M12, 35K55", "G.1.2; G.1.9" ], "primary_category": "math.NA", "published": "20231227062002", "title": "Spectral approximation of $ψ$-fractional differential equation based on mapped Jacobi functions" }
LLM-SAP: Large Language Model Situational Awareness Based Planning
Aldo Vera
================================================================== We consider (annealed) large deviation principles for component empirical measures of several families of marked sparse random graphs, including (i) uniform graphs on n vertices with a fixed degree distribution; (ii) uniform graphs on n vertices with a fixed number of edges; (iii) Erdős-Rényi G(n, c/n) random graphs. Assuming that edge and vertex marks are independent, identically distributed, and take values in a finite state space, we show that the large deviation rate function admits a concise representation as a sum of relative entropies that quantify the cost of deviation of a probability measure on marked rooted graphs from certain auxiliary independent and conditionally independent versions. The proof exploits unimodularity, the consequent mass transport principle, and random tree labelings to express certain combinatorial quantities as expectations with respect to size-biased distributions, and to identify unimodular extensions with suitable conditional laws. We also illustrate how this representation can be used to establish Gibbs conditioning principles that provide insight into the structure of marked random graphs conditioned on a rare event. Additional motivation for this work arises from the fact that such a representation is also useful for characterizing the annealed pressure of statistical physics models with general spins, and large deviations of evolving interacting particle systems on sparse random graphs. Keywords: Marked random graph, large deviations principle, component empirical measure, unimodular measure, relative entropy, random regular graph, configuration model, random graph, Gibbs conditioning principle. MSC 2020 subject classifications: Primary 60F10, 05C80 § INTRODUCTION §.§ Context and description of results Large deviation principles (LDPs) characterize the exponential decay rate of a sequence of probabilities of rare events in a Polish space 𝒮 in terms of an optimization problem involving an associated function from 𝒮 to [0,∞] called the rate function. An LDP with a tractable rate function facilitates the computation of rare event probabilities and the study of limit laws conditioned on a rare event. Such conditional limit laws, often referred to as Gibbs conditioning principles, provide insight into the most likely way in which a rare event happens. The focus of this work is on LDPs for the neighborhood (and more general component) empirical measures of sequences of sparse random graphs with independent and identically distributed (i.i.d.) vertex and edge marks taking values in a finite state space. We consider three classes of random graph sequences: the sequence comprised of graphs generated by the configuration model (i.e., uniform random graphs with a given empirical degree distribution), which includes random regular graphs as a special case, the sequence comprised of uniform random graphs with a fixed number of edges, and the sequence consisting of sparse G(n,c/n) graphs (see Section <ref> for definitions). Our main result shows that in each case, the rate function of the LDP for the neighborhood and component empirical measures takes a similar succinct form. Such a representation has many applications, as elaborated in Section <ref>. To describe our results, fix κ≥ 2 and let G_n be the random κ-regular graph on n vertices equipped with i.i.d.
vertex marks {X_v}_v ∈ G_n taking values in a finite set 𝒮 with common marginal ν. Let L_n represent the (random) marked neighborhood empirical measure, defined by L_n := 1/n∑_v ∈ G_nδ_(X_v, X_∂ v), where X_∂ v := {X_u, u ∼ v }, and u ∼ v indicates that u and v are neighbors in G_n. As is well known by Sanov's theorem <cit.>, the sequence of empirical measures of just the vertex marks 1/n∑_v ∈ G_nδ_X_v, n ∈ℕ, satisfies an LDP with rate function equal to H(·‖ν), the relative entropy functional with respect to ν. The challenge in proving an LDP for the sequence {L_n} arises from the dependencies between X_∂ v, v ∈ V, caused by overlaps in vertex neighborhoods, which depend on the random graph topology. In a seminal work <cit.>, inspired by the well-known configuration model used to generate random graphs from a given degree sequence, Bordenave and Caputo introduced a novel colored configuration model and used combinatorial arguments to establish LDPs for the component empirical measure of unmarked versions of these random graph sequences. Their approach was adapted to prove LDPs in the more complicated setting of marked graphs (with discrete i.i.d. marks) for two of these random graph sequences in <cit.>, and subsequently for a broad class of uniform marked random graphs, including, among others, bipartite and stochastic block models, in <cit.>. For random κ-regular graphs, it is shown in <cit.> that the sequence {L_n} satisfies an LDP on 𝒮×𝒮^κ, representing the space of marks on κ-stars (see Figure <ref>), with a rate function 𝒥 that takes the following form for probability measures μ on 𝒮×𝒮^κ satisfying certain symmetry conditions: 𝒥(μ) =s̅(κ)+H(μ^o‖ν) +H(μ^o) - H(μ)+ κ/2 H(μ^1,o) +∑_x, x' ∈𝒮𝔼_μ [ log E_1(x, x')()!]. Here s̅(κ) := κ (logκ - 1)/2 - log ( κ!), μ^o represents the root mark marginal, μ^1,o represents the joint marginal of a (uniformly chosen) neighbor of the root and the root, H represents the entropy functional, H(·‖ν) is the relative entropy functional mentioned above, and E_1(x, x')() is a certain combinatorial quantity that is a function of the marked κ-star. It is worth emphasizing that the form of the rate function is more complicated in the marked setting. Indeed, the rate function for the neighborhood empirical measure on unmarked graphs is rather simple: for κ-regular graphs, it is degenerate (taking the value zero at the point mass on κ and infinity otherwise) and for other graph ensembles, it is equal to relative entropy with respect to the true root degree distribution, with appropriate constraints. The expressions for the rate functions for the marked models obtained in <cit.> and <cit.> have a complexity similar to that in (<ref>) for κ-regular graphs, though with an extra term that accounts for randomness in the root degree distribution, and the precise form depending on the graph sequence. In the presence of edge marks and when considering the full component empirical measure, further additional combinatorial terms arise in the rate functions of <cit.>. Such descriptions make it challenging to analyze optimization problems involving the rate function to compute or gain insight into probabilities of rare events. In this article, we show that in fact the rate function admits a more tractable expression, involving only relative entropies. As a special case of our main result (Theorem <ref>) we show that the rate function 𝒥 in (<ref>) associated with the κ-regular random graph with i.i.d.
vertex marks takes the following simple form: for μ∈𝒫(𝒮×𝒮^κ) such that both the second marginal of μ and the marginal μ^1,o are symmetric, we show 𝒥 (μ) = 1/2 [ H (μ‖μ̂)+H (μ‖μ̃) ], where μ̂(x_o, (x_u, u ∼ o))= ν(x_o) ∏_u ∼ oμ^1 (x_u) and μ̃(x_o, (x_u, u ∼ o))= ν(x_o) ∏_u ∼ oμ^1|o (x_u|x_o). The form in (<ref>) provides an intuitive interpretation of the rate function as the cost of μ deviating from its product and conditionally independent versions. Since the vertex mark empirical measure 1/n∑_v ∈ G_nδ_X_v is a projection of the neighborhood empirical measure L_n onto the root mark, one should be able to apply the contraction principle to immediately recover Sanov's theorem from the LDP for the neighborhood empirical measure. While this is indeed the case when the rate function is of the form (<ref>), it is not as apparent given the form in (<ref>). Further, the expression for the rate function remains the same as in (<ref>) in the presence of edge marks, and also in the more general setting of the other graph sequences, though with μ̂ and μ̃ suitably redefined, and an additional relative entropy term present in the Erdős-Rényi setting to account for deviations of the mean root degree from that under the true law. In the absence of edge marks, the form (<ref>) can be shown to take an even simpler form for all graph sequences (see Corollary <ref>). Our results also extend to the component empirical measure. In this case, the corresponding rate function is represented as a countable sum, with each summand capturing deviations of the depth-h neighborhood of the root from the true law via a form analogous to that in (<ref>), although the definitions of the analogs of μ̂ and μ̃ are now more involved, entailing unimodular extensions of marked graphs (see definitions (<ref>)-(<ref>) and Theorem <ref>). Nevertheless, the rate function has a common form for all three graph ensembles, providing a Sanov-like theorem for unimodular sparse graphs, with the quantity in (<ref>) representing a fundamental divergence functional on the space of probability measures on rooted graphs with respect to a “true law" of the graph with i.i.d. vertex and edge marks. §.§ Comments on the proof The proof of our main result is carried out in two steps. In the first step, we establish an intermediate form of the rate function in terms of differences of relative entropies; see (<ref>) and Theorem <ref>. An analogous representation holds for unmarked graphs <cit.>, but the proof in the marked case is considerably more difficult because, as mentioned above, the corresponding form (<ref>) of the rate function has many additional terms that are not present in the unmarked case. This entails establishing representations of combinatorial quantities, similar to those appearing in (<ref>), in terms of relative entropies with respect to the true law. While this intermediate form has a useful counting-based interpretation, and is also much more tractable than the combinatorial form in (<ref>), it involves relative entropies of different signs, and even its non-negativity is not immediate. The next step, which involves showing that the intermediate form of the rate function coincides with the expression in (<ref>), is considerably more involved. First, we show that the auxiliary quantities μ̂ and μ̃ for the component empirical measure are indeed probability measures. In the case of vertex-marks they are specified explicitly via the definition below (<ref>), and so this is immediate.
However, in the presence of edge-marks and for component measures, they are only defined by specifying their densities with respect to another measure, and hence it requires work to show that they are indeed probability measures. Next, we identify probability measures on equivalence classes of rooted trees with corresponding probability measures on randomly labeled rooted trees. We then exploit the resultant exchangeability properties to show that certain covariance measures coincide with suitable size-biased marginals (see <cit.>) and to identify unimodular extensions of certain probability measures arising in the definition of the rate function with associated conditional laws (see Remark <ref>(1)). Throughout, we crucially exploit the unimodularity property of marked random graphs and the associated mass transport principle (see Section <ref>). In the course of the proof, we also arrive at an alternative characterization of a combinatorial quantity called the microstate entropy that arises in <cit.>, which may be of independent interest (see Corollary <ref>). §.§ Motivation and applications There are multiple motivations for obtaining a simple, easily interpretable form of the rate function. We discuss three of these below: A. Gibbs conditioning principles: Consider the following question: what is the distribution of the random graph sequence and marks, when conditioned on the event that the fraction of vertices for which the sum of marks on neighboring vertices is much larger than its mean? In Proposition <ref>, we illustrate how our new form of the rate function can be exploited to provide a rather explicit answer to this question for one of these graph sequences. In many discrete-time locally interacting particle systems on a sparse graph, transition probabilities of the process are governed by the sum (or some other functional) of neighboring marks. Thus, conditional limit laws of this type would provide insight into how rare events occur in such interacting particle systems. More examples of such Gibbs conditioning principles will be elucidated in forthcoming work. B. Variational formulas for characterizations of the annealed pressure: In a companion work <cit.>, the neighborhood empirical measure LDP for graphs with more general (possibly continuous) i.i.d. marks is shown to hold in a stronger topology. The latter form is then exploited to obtain variational formulas and characterize the annealed pressure of a class of statistical physics models on sparse random graphs with general (not necessarily discrete) spins. While such variational formulas are easily derived in the discrete case using elementary combinatorics, consideration of continuous spin models naturally entails a large deviations analysis of marked empirical measures. C. Extension to Polish mark spaces: It is natural to ask if one can also extend the LDP for the component empirical measure to the case of i.i.d. marks on general Polish spaces. The old form of the rate function in (<ref>) admits an interpretation only for discrete marks. An approximation argument was used in <cit.> to establish the LDP for continuous marks, yielding a non-explicit rate function, expressed in terms of limits. In contrast, the definition of the rate function in (<ref>) automatically extends to general Polish space marks. It is natural to conjecture that this extension in fact coincides with the rate function of the LDP for the component measure with general marks.
While (as mentioned above) this conjecture can be verified for the neighborhood empirical measure (see <cit.>), the case of the component empirical measure remains open, but the form of the rate function in (<ref>) potentially allows for a different proof approach. Such a result would have important implications for understanding large deviations and conditional limit laws for a broad class of evolving interacting particle systems for which hydrodynamic limits have been established in <cit.> (e.g., see the application to interacting gradient diffusions in <cit.>). §.§ Related work Finally, we mention some related literature on LDPs for sparse random graphs. In <cit.>, combinatorial arguments were used to establish an LDP for the joint vertex-edge empirical measure in marked and random regular graphs with i.i.d. marks in a discrete space. This LDP was extended in <cit.> to general marks by approximation. The corresponding rate functions are in terms of relative entropies, but they involve certain “sub-consistency" conditions. None of these works consider the component empirical measure. In <cit.>, an LDP for the (vertex, children) empirical measure of a “tree-indexed Markov chain" was established. Several other LDPs have been established for unmarked sparse random graphs that have completely different motivations. For example, the works of <cit.> are motivated by combinatorial questions concerning the prevalence of various kinds of subgraphs and component sizes in sparse random graphs. These complement earlier work addressing analogous questions for dense graphs such as <cit.>, which uses the theory of graphons, and large deviations for sparse graphs (but not uniformly sparse, as considered here) using the theory of nonlinear large deviations <cit.>. The speed of the LDP in <cit.> depends on the structure and size of the subgraph of interest. In contrast, our LDPs are always at speed n, and the focus in this article is on marked random graphs, which (as mentioned above) is required for understanding large deviations for interacting particle systems on sparse random graphs. The latter theory is inchoate in comparison to the well developed theory in the “mean-field" setting (of interacting particle systems on the complete graph). §.§ Organization The rest of this paper is organized as follows. In Section <ref>, we recall some preliminaries on rooted marked graphs and probability measures on them, and introduce some notation. We define the model and quantities of interest and state our main results on LDPs for neighborhood and component empirical measures in Sections <ref> and <ref>, respectively. Although the neighborhood empirical measure LDPs follow from the more general component measure LDPs, we state them separately for ease of readability and to defer the substantial additional notation required to state the full component measure result. Section <ref> contains the statement of a conditional limit theorem for the neighborhood empirical measure for one of these random graph sequences; its proof is relegated to Section <ref>. The proof of the intermediate representation is given in Section <ref>. The proofs of the neighborhood and component empirical measure LDPs are given in Sections <ref> and <ref>, with several technical results relegated to Appendices <ref>-<ref>. § PRELIMINARIES AND NOTATION §.§ Generic notation Let ℕ denote the set of natural numbers, let ℝ denote the set of real numbers, and define ℕ_0 := ℕ∪{0}. For x ∈ℝ, we write x^+ = max{x, 0} and x^- = -min{x, 0}.
For a finite set A, let |A| denote its cardinality. Let {·} denote the indicator function. Given a Polish space(, d), let () denote its Borel σ-field. Let () denote the space of probability measures on (, ()) equipped with theLévy-Prohorov metric (which we denote by d_()), which generates the weak topology on (). If ν∈() and f is a ν-integrable function on , let ⟨ν ,f ⟩ denote ∫_ f dν.Ifis either afinite or countable set andν∈(), we denote H(ν) to denote the Shannon entropy of ν:H(ν) = -∑_x ∈ν(x) logν(x),with the convention 0 log 0 = 0. Ifisa Polish space and μ, ν∈(), the relative entropy of μ with respect to ν is given byH(μν) = ∫_logdμ/dνdμ if μ≪ν, ∞ otherwise. Let (Ω, , ) be a complete probability space. The expectation with respect tois denoted by . This probability space is assumed to be rich enough to support all random elements defined in this paper. For a random variable , let () denote its law. Also, we adopt the convention that the sum over the empty set is 0 and the product over the empty set is 1. §.§ The space of rooted marked graphsAn (undirected) graph is denoted by G = (V, E), where V is the vertex set and E is the edge set. For a vertex v ∈ V, let N_v(G) := {u ∈ V: {u, v}∈ E} denote the set of all neighbors of v in G. The degree of vertex v is the cardinality of N_v(G). The vertex set V is either a finite set or a countable set, and all graphs considered in this paper are locally finite; that is, each vertex has a finite degree. For u, v ∈ V, the distance between u and v is the length of the shortest path connecting u and v. Two graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2) are said to be isomorphic (denoted by G_1 ≃ G_2) if there exists a one-to-one map φ: V_1 → V_2 such that {u, v}∈ E_1 if and only if {φ(u), φ(v)}∈ E_2.A connected graph G with a distinguished vertex o is denoted by (G, o). Such a graph is called a rooted graph, and the vertex o is called its root. The rooted graph(G_1, o_1) is said to be isomorphic to the rooted graph (G_2, o_2) if there exists a one-to-one map φ : V_1 → V_2 such that φ(o_1) = o_2 and {u, v}∈ E_1 if and only if {φ(u), φ(v)}∈ E_2. Here, φ is called an isomorphism from (G_1, o_1) to (G_2, o_2), and this equivalence relation is denoted by (G_1, o_1) ≃ (G_2, o_2). Letdenote the set of equivalence classes of rooted graphs under the above equivalence relation. Alternatively, we viewas the space of all unlabeled rooted graphs. For (G, o) ∈, let (G, o)_h,h ∈, denote the subgraph of G rooted at o containing all vertices at a distance of at most h from the root.We equipwith the local topology, which makesa Polish space (see <cit.> or <cit.> for precise definitions).We now define corresponding marked graphs. Letandbe nonempty finite sets. Letdenote the set of unlabeled rooted marked graphs,where each vertex is assigned an -valued mark and each directed edge is assigned a-valued mark. An element ofis denoted by (G, o, (x, y)), where o is the root,{x_v}_v ∈ G are the vertex marks, and {(y_(u,v), y_(v,u))}_{u,v}∈ E are theedge marks. Here, y_(u,v) (resp. y_(v,u)) is the mark on the edge {u,v} associated with the vertex u (resp. v); hence, each undirected edge gets a ×-valued mark. We shall also denote a typical element ofby (G, (x, y)), or simply G. For h ∈, let (G, o, (x, y))_h denote the marked rooted subgraph of (G, o, (x, y)) containing all vertices at a distance of at most h from the root o.As in the unmarked case,can be equipped with the local topology, and made a Polish space. 
Also, for h ∈, let h⊂ denote the space of rooted marked graphs with depth at most h (i.e., every vertex is at most at a distance h from the root), and we equip it with the subspace topology.Let ⊂ denote the set of rooted marked trees. Also, for h ∈, let h⊂ denote the set of rooted marked trees of depth at most h; for h = 0, 0 is the set . A typical element ofwill generally be denoted by (τ, o, (x, y)),(τ, (x, y)), or simply τ, depending on the context. For τ∈, let both deg_τ(o) and |N_o(τ)| denote the degree of the root of τ.We now define certain subgraphs associated with a (marked) graph. Let(G, (x, y)) be an (unrooted) graph with -valued vertex marks{x_v}_v ∈ G and×-valuededge marks {(y_(u,v), y_(v,u))}_{u,v}∈ E. For a vertex v ∈ G, let _v(G, (x, y)) denote the marked connected component containing the vertex v rooted at v,viewed as an element of . Also, let _v(G, (x, y)) denote the marked depth 1 tree rooted at v viewed an element of 1. Throughout this paper, specific (deterministic) marked trees will be denoted by τ, τ^', etc., and -valued random elements will be denoted by , ^', etc. We will consider both marked graphs on a finite vertex set (whose vertices are labeled) and elements of(i.e., unlabeled rooted marked graphs) – whether a marked graph belongs to the former or latter set should be clear from the context.§.§.§ Ulam-Harris-Neveu labeling for rooted trees We now describe the Ulam-Harris-Neveu random labeling scheme (see <cit.>) which produces a random rooted marked labeled tree, with vertices assigned labels from the set = {o}∪(⋃_k=1^∞^k) in the manner described below.(These vertex labels should not be confused with vertex marks.) Given (τ, o, (x,y)) ∈,the root is assigned the label o. Labels are then assigned to the other vertices based on a breadth-first search starting from the root with ties broken uniformly at random. In other words, the vertices in N_o(τ) are assigned a label from {1, 2, …, |N_o(τ)|} uniformly at random. Now, suppose that for some h ≥ 1, all vertices at a distance less than or equal to h from the root have been assigned labels. Also, for u = (u_1, …, u_i) ∈^i and v= (v_1, …, v_j) ∈^j, let uv ∈^i+j denote the concatenation (u_1, …, u_i, v_1, …, v_j). Then, fix any vertex v that is at a distance h+1 from the root and let p(v) denote its parent (i.e., the unique neighbor of v whose distance from the root is h).On the event that p(v) has label i = (o, i_1, i_2, …, i_h) and |N_p(v)(τ)| = ℓ +1, assign v the label (i,m) where m is chosen uniformly at random from {1,2, …, ℓ}, independently of all other random assignments made thus far. Carry out the same procedure (independently) for each vertex at a distance h+1 from the root. Proceed by induction to label the whole tree. Note that, under this labeling, an element (τ, (x, y)) ∈ is viewed as a random rooted marked tree whose vertex set is a subset of . §.§ Probability measures on rooted markedgraphs Let () denote the space of probability measures onequipped with the weak topology. Also, let ( ) ⊂() denote the set of probability measures on equipped with the subspace topology. A typical element of () will generally be denoted by ρ. Expectation with respect to ρ is denoted by _ρ. For h ∈, let ρ_h denote the marginal of ρ on the depth h truncation of the (random) rooted marked graph. Note that ρ_h ∈(h). In particular, o̊∈() denotes the law of the mark on the root vertex. If ρ∈(), then ρ_h ∈(h). For h = 1, a(1) element will generally be denoted by μ. 
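The Ulam-Harris-Neveu scheme of the preceding subsection is, in effect, a breadth-first traversal in which the children of each vertex are ordered by an independent uniform random permutation. The following minimal Python sketch (our own illustrative rendering, assuming a finite tree stored as an adjacency list with the root at index 0; all identifiers are hypothetical and not part of the paper's notation) realizes the random labeling.

```python
import random

def uhn_labels(adj, root=0):
    """Assign Ulam-Harris-Neveu labels to a finite rooted tree.

    adj: adjacency list of the tree; root: index of the root vertex.
    Returns a dict vertex -> label, where a label is the tuple of child
    indices along the path from the root (the root gets the empty tuple).
    Sibling order is broken uniformly at random, as in the text.
    """
    labels = {root: ()}
    visited = {root}
    frontier = [root]
    while frontier:
        next_frontier = []
        for v in frontier:
            children = [u for u in adj[v] if u not in visited]
            random.shuffle(children)          # uniform tie-breaking among siblings
            for m, u in enumerate(children, start=1):
                labels[u] = labels[v] + (m,)  # child m of the vertex labeled labels[v]
                visited.add(u)
                next_frontier.append(u)
        frontier = next_frontier
    return labels

# Example: a depth-2 tree rooted at 0.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(uhn_labels(adj))
```

Viewing a rooted marked tree through such labels is what allows marginal and conditional laws of sibling subtrees to be written down unambiguously in what follows.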
Let ρ_o, deg∈() denote the law (under ρ) of the degree of the root, that is, the law ofwhen () = ρ. Finally, given >̱ 0,let ⊂() denote the set of probability measures onsuch that = $̱. Although () is the space of probability measures onunlabeled rooted marked trees, it is sometimes convenient to view it as a collection of probability measures onlabeled rooted marked trees under the Ulam-Harris-Neveu labeling scheme. This viewpoint is especially useful to describe certain functions defined on (). Throughout this paper, for arandom element , wherever vertex labels are used for , it is understood thatis viewed as a rooted labeled tree labeled according to the Ulam-Harris-Neveu scheme. § THE LDP FOR THE NEIGHBORHOOD EMPIRICAL MEASUREIn this section, we introduce the model and state our main results on the LDP for the neighborhood empirical measure. §.§ Marked random graph modelWe consider random graphs whose vertices and edges are marked with independent random variables. We consider three families of sparse graphs, each parametrized by a fixed constant>̨ 0:1. () Graphs with a fixed degree distribution: Let M ∈, and let α∈() have mean $̨ and support contained in{0,1,…,M}. Let{α_n}be such that*For each n ∈, the support of α_n lies in {0,1,…, M};*For each 0 ≤ i ≤ M, α_n({i}) is of the form k/n for some k ∈{0,1, …, n}, and n∑_i =0^M i α_n({i}) is even;* α_n →α in () asn →∞.Letdenote the set of graphs onnvertices[The restriction that n∑_i =0^M i α_n({i}) is even is required since the sum of vertex degrees must be even. Under that assumption,by the Erdős-Gallai theorem,is nonempty for large n.] whose degree distribution equalsα_n.2. () Graphs with a fixed number of edges: Let{m_n}be a sequence such thatm_n/n →/̨2asn →∞.Letdenote the set of graphs onnvertices withm_nedges.3. () Sparserandom graphs: Letdenote therandom graph onnvertices with connection probability/̨n.We now introduce three corresponding sequences{G_n}of independent random graphs:* (α_n, )̨ graph sequence: where G_n is uniformly sampled fromfor each n ∈; * ()̨ graph sequence: where G_n is uniformly sampled fromfor each n ∈;* ()̨ graph sequence: where G_n is therandom graph with connection probability /̨n for each n ∈. Fixing one of the three graph sequences above, we define a corresponding marked graph sequence by assigning i.i.d. marks to the vertices and edges of each graphG_nin the sequence. More precisely, letandbe nonempty finite sets, and letνandξbe probability measures onand×,respectively. We mark each vertex ofG_nwith i.i.d. random variables{X^n_v}_v ∈ G_nwith lawν, independent ofG_n. We also markeach edge ofG_nwith i.i.d. random variables{(Y_(u,v), Y_(v,u))}_u, v ∈ G_n, u ∈ N_v(G_n)with lawξ̅∈(×), independent ofG_nand{X^n_v}_v ∈ G_n,defined by ξ̅(y, y^') := 1/2[ξ(y, y^') + ξ(y^', y)],y, y^'∈,and associate the markY_(v,u)(resp.Y_(u,v)) with the vertexv(resp.u). Inother words, for every (undirected) edge{u,v}ofG_n, we sample an element(Y, Y')from the lawξand assignYtouandY'tovwith probability1/2and assignY'touandYtovwith probability1/2,independent of everything else. This gives us the corresponding random marked graph sequence{(G_n, (X^n, Y^n))}. §.§ The neighborhood empirical measureGiven any of the marked random graph sequences{(G_n, (X^n, Y^n))}described in Section <ref>, we consider the corresponding neighborhood empirical measure, defined as:L_n := 1/n∑_v ∈ G_nδ_(_v(G_n, (X^n, Y^n))), n ∈,where_v(G_n, (X^n, Y^n))is the first neighborhood of(G_n, (X^n, Y^n))rooted atv, viewed as an element of1. 
Then L_n is a (1) random element. We denote the neighborhood empirical measure of the (α_n, )̨, ()̨ and ()̨ marked random graph sequences by L^_n, L^_n and L^_n, respectively. Let η_1 be the law of a 1 random element having i.i.d. vertex and edge marks with laws ν and ξ, respectively, both independent of each other and independent of the root degree. When the root degree distribution is Poi()̨ (resp. α), we denote it by _1 (resp. _1). It is well known that as n →∞, both L^_n and L^_n (resp. L^_n) converge weakly to _1 (resp. _1); see, e.g., <cit.>, <cit.>, <cit.>.

§.§ Statement of the neighborhood empirical measure LDP

To state the LDP for the sequences {L^_n}, {L^_n} and {L^_n}, we need to introduce some notation in order to define the rate functions. Recall η_1 from Remark <ref>, and let μ∈(1) be such that μ≪η_1 and 0 < _μ < ∞. Next, we define the size-biased distribution μ̅ of μ by dμ̅/dμ(τ) := deg_τ(o)/_μ, τ∈1. Note that μ̅ is indeed a probability measure since _μ dμ̅/dμ() = 1. Further, note that there is at least one child under μ̅, that is, μ̅(≥ 1) = 1. We now define several important marginal laws of μ. Let = (, (, )) be a random 1 element distributed according to μ̅. Viewing as a random labeled tree using the Ulam-Harris-Neveu scheme described in Section <ref> and noting that μ̅(≥ 1), let _v (resp. _o) denote the vertex mark of the vertex with label v (resp. the root) and, similarly, let _(v,o) (resp. _(o,v)) denote the edge mark on the edge {v, o} associated with the vertex with label v (resp. the root o). Let denote the law of (_1, _(1,o)) and let denote the conditional law[We fix a version of this conditional law for each μ∈(1), the choice of which is inconsequential in the definition of the rate function.] of (_1, _(1,o)) given (_o, _(o,1)). Additionally, let ∈((×)^2) denote the joint law of ((_1, _(1,o)), (_o, _(o,1))), that is, = ((_1, _(1,o)), (_o, _(o,1))). We define the size-biased quantities , and in an exactly analogous fashion. Using μ≪η_1, it can be shown that ≪ (see Lemma <ref>). Next, for μ∈(1) with 0 < _μ < ∞, define the probability measures = and = by d/dη_1(τ) := ∏_v ∈ N_o(τ) d/d(X_v, Y_(v,o)), τ = (τ, (X, Y)) ∈1, and d/dη_1(τ) := ∏_v ∈ N_o(τ) d/d(X_v, Y_(v,o) | X_o, Y_(o,v)), τ = (τ, (X, Y)) ∈1. Note that both and depend on η_1, but we suppress this dependence in the above definition for the sake of readability. In the particular case when both μ and η_1 are supported on $̨-regular trees without edge marks, we have = μ^1,o and = η^1,o = ν⊗ν; in this case, and take the explicit form (x_o, x_1, …, x_)̨ = ν(x_o) ⊗_v=1^μ̨^1(x_v); (x_o, x_1, …, x_)̨ = ν(x_o) ⊗_v=1^μ̨^1|o(x_v | x_o). Although not immediately apparent, in the general case as well, it can be shown that both and are indeed probability measures (see Lemma <ref>). The rate functions will be expressed in terms of the family of functions _,̱η_1: (1) → [0, ∞], ≥̱0, defined as follows: _,̱η_1(μ) := H(μ_o ν) if β = 0 and _μ[] = 0; 1/2 H(μ) + H(μ) if β > 0, _μ[] = β, is symmetric and μ≪η_1; ∞ otherwise. Also, define ℓ_:̨_+ →_+ by ℓ_(̨β) := /2(β/ log(β/) - β/ + 1), ≥̱0, with the convention 0 log 0 = 0. We define the rate functions ^, ^, ^ : (1) → [0,∞] as follows: ^(μ) := _,̨_1(μ) if μ_o, deg = α, ∞ otherwise; ^(μ) := _,̨_1(μ); ^(μ) := ℓ_(̨'̱) + _'̱, '̱_1(μ) if '̱ := _μ[] < ∞, ∞ otherwise. We can now state our result for the neighborhood empirical measure. The sequence {L_n^} (respectively, {L_n^} and {L_n^}) satisfies the LDP on (1) with rate function ^ (respectively, ^ and ^).
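Before examining the structure of these rate functions, it may help to see the underlying objects concretely. The following sketch (illustrative only; the mark sets, the laws ν and ξ, and all identifiers are arbitrary choices of ours, not part of the theory) samples the ()̨ model with symmetrized edge marks and tabulates L_n as an empirical distribution over isomorphism classes of marked depth-1 stars.

```python
import random
from collections import Counter

def sample_marked_er(n, kappa, marks_v, nu, marks_e, xi):
    """G(n, kappa/n) with i.i.d. vertex marks ~ nu; for each edge, a pair
    (Y, Y') is drawn from xi and assigned to the two endpoints in a
    uniformly random order (i.e., the edge marks follow xi-bar)."""
    p = kappa / n
    x = random.choices(marks_v, weights=[nu[a] for a in marks_v], k=n)
    edge_pairs = [(y, yp) for y in marks_e for yp in marks_e]
    w = [xi[pair] for pair in edge_pairs]
    y = {}  # directed edge marks: y[(u, v)] is the mark on {u, v} at u
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                a, b = random.choices(edge_pairs, weights=w, k=1)[0]
                if random.random() < 0.5:
                    a, b = b, a
                adj[u].append(v); adj[v].append(u)
                y[(u, v)], y[(v, u)] = a, b
    return adj, x, y

def neighborhood_empirical_measure(adj, x, y):
    """L_n as a Counter over isomorphism classes of marked depth-1 stars:
    (root mark, sorted tuple of (leaf mark, mark at leaf, mark at root))."""
    n = len(adj)
    types = Counter()
    for v in range(n):
        leaves = tuple(sorted((x[u], y[(u, v)], y[(v, u)]) for u in adj[v]))
        types[(x[v], leaves)] += 1 / n
    return types

adj, x, y = sample_marked_er(n=2000, kappa=2.0,
                             marks_v=["a", "b"], nu={"a": 0.5, "b": 0.5},
                             marks_e=[0, 1], xi={(0, 0): 0.25, (0, 1): 0.25,
                                                 (1, 0): 0.25, (1, 1): 0.25})
L_n = neighborhood_empirical_measure(adj, x, y)
print(sum(L_n.values()))  # close to 1.0: L_n is a probability measure
```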
Observe that the rate functions in (<ref>)-(<ref>) have a tractable form, expressed as the sum of two relative entropy terms, with the first term measuring the entropic cost of deviation of the distribution of the leaf marks from independence and the second entropic cost measuring conditional independence (given the root) of the leaf marks in μ. It is interesting to note that although theandrandom graph sequences themselves are very different, we have shown that their rate functions take a similar form. The same is true for thesequence, except that there is an additional cost ℓ_κ(·) to capture possible deviations of the mean root degree from that of the true law (which are forbidden at the large deviation scale for theandrandom graph sequences). In the absence of edge marks, this common form can in fact be simplified further. In the particular case when there are no edge marks, we have the following alternative representation for _,̱η_1:_,̱η_1(μ) =H(μ_oν)if β = 0and _μ[] = 0,H(μ) + /2H(⊗)if β > 0, _μ[] = β, is symmetricand μ≪η_1, ∞otherwise.The proof of Corollary <ref> is deferred to Section <ref>. In the next section, we provide one example to illustrate how the results of this section facilitate the analysis of conditional limit laws.§.§ A Gibbs conditioning principleFor simplicity, we shall only consider vertex marks.Letdenote the space of rooted marked trees of depth 1 with vertex marks from .Given a function h : →, consider the local h-sum functional f : → defined byf(τ, X) = ∑_v ∈ N_o(τ) h(X_v),(τ, X) ∈.Consider the neighborhood empirical measure L_n=L_n^ of the(α_n, )̨ random graph sequence with limiting degree distribution α∈({0,1,…, M}) and limiting true law _1 ∈(). Since α has finite support andis a finite set, f is uniformly bounded. Therefore, by the weak convergence result in Remark <ref>, it follows that ⟨ L_n, f ⟩ : = _L_n[f] converges weakly to __1[f] as n →∞. Quantities of the form _L_n[f]are relevant for many interacting particle system models. For example, in the SIR model for epidemics (see <cit.>), each vertex takes the state S (susceptible), I (infected) or R (recovered), and at any given time, the probability that a susceptible vertex becomes infected depends on the number of neighboring nodes that are infected.In this case, if f is defined as in (<ref>) with h(X_v) = {X_v = I}, then the quantity⟨ L_n, f ⟩ = _L_n [f]captures the growth rate of infected individuals at any time.The most likely way in which there is an unusually large infection growth rate (starting from an i.i.d. initial condition) is governed by the following question for any fixed c >__1[f]:Conditioned on ⟨ L_n, f ⟩≥ c, how does L_n behave for large n?The form of the rate function in (<ref>) allows us to provide a precise answer to this question. Suppose __1[f] < c < ·̨max_x ∈ h(x). Then for any open set A ∋μ^*,c, we havelim_δ↓ 0lim sup_n →∞1/nlog(L_n ∉ A | ⟨ L_n, f ⟩≥c - δ)< 0,whereμ^*,c(τ, X) := γ(deg_τ(o), X_o) ∏_v ∈ N_o(τ)ψ_γ(X_v),(τ, X) ∈,with ψ_γ(x) := 1/∑_n =0^M n γ(n, x),x ∈,where γ∈({0,1,…, M}×) and λ > 0 is the unique solution toγ(n, x) = α(n) ν(x)exp{λ n h(x)}/∑_y ∈ν(y)exp{λ n h(y)},n ∈{0,1,…, M}, x ∈,and ∑_m,y m h(y) γ(m, y) = c.In particular, this implieslim_δ↓ 0lim_n →∞(L_n ∈·| ⟨ L_n, f ⟩≥ c-δ)= δ_μ^*,c(·). 
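The only implicit ingredient in μ^*,c is the tilt parameter λ, which is characterized by the scalar equation g(λ) = c, with g continuous and nondecreasing on the relevant range. A minimal numerical sketch (with arbitrary illustrative choices of α, ν, h and c satisfying the hypotheses of the proposition; all names are hypothetical) is the following.

```python
import math

def g(lam, alpha, nu, h):
    """g(lambda) = sum_{n,x} n alpha(n) nu(x) h(x) e^{lam n h(x)} / Z_n,
    with Z_n = sum_y nu(y) e^{lam n h(y)}, as in the proposition."""
    total = 0.0
    for n, an in alpha.items():
        Z = sum(nu[y] * math.exp(lam * n * h[y]) for y in nu)
        total += sum(n * an * nu[x] * h[x] * math.exp(lam * n * h[x]) / Z
                     for x in nu)
    return total

def solve_tilt(c, alpha, nu, h, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection for lambda > 0 with g(lambda) = c, assuming
    E[f] < c < (mean degree) * max h as in the proposition."""
    while g(hi, alpha, nu, h) < c:   # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, alpha, nu, h) < c else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative data: degrees in {0,...,3}, marks S/I with h = 1{x = I}.
alpha = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}
nu = {"S": 0.8, "I": 0.2}
h = {"S": 0.0, "I": 1.0}
lam = solve_tilt(c=0.6, alpha=alpha, nu=nu, h=h)
gamma = {(n, x): alpha[n] * nu[x] * math.exp(lam * n * h[x]) /
         sum(nu[y] * math.exp(lam * n * h[y]) for y in nu)
         for n in alpha for x in nu}
print(lam, sum(n * h[x] * gamma[(n, x)] for (n, x) in gamma))  # second value ~ c
```

The final line checks the constraint ∑_{n,x} n h(x) γ(n,x) = c that pins down λ.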
In other words, conditioned on the rare event ⟨ L_n, f⟩ ≥ c, the distribution of the neighborhood of a uniformly chosen vertex in the marked graph has the following form: the root degree and the root mark are sampled from the tilted distribution γ, whose density (with respect to the true law α⊗ν) is proportional to exp{λ n h(x)} for a uniquely specified λ > 0, and the leaf marks are i.i.d. with law given by a scaled average of γ. In the special case of $̨-regular graphs, ⟨ L_n, f ⟩ = (κ+1) ∑_v ∈ G_n X_v^n, and hence, the Gibbs conditioning principle of Proposition <ref> reduces to that of (the usual) Sanov's theorem <cit.>, in which μ^*,c corresponds to i.i.d. vertex marks on the $̨-star.

§ THE LDP FOR THE COMPONENT EMPIRICAL MEASURE

We now state our main result in full generality. Given one of the sequences of random graphs {(G_n, (X^n, Y^n))} introduced in Section <ref>, we define the corresponding component empirical measure sequence: U_n := 1/n ∑_v ∈ G_n δ_(_v(G_n, (X^n, Y^n))), n ∈, where we recall that _v(G_n, (X^n, Y^n)) denotes the connected component of (G_n, (X^n, Y^n)) rooted at v, viewed as an element of . Then U_n is a () random element; the randomness in U_n comes from the randomness of both the underlying graph and of the vertex and edge marks. Note that the neighborhood empirical measure L_n defined in (<ref>) is the depth 1 marginal of U_n. We denote the component empirical measure of the (α_n, )̨, ()̨ and ()̨ random graph sequences by U^_n, U^_n and U^_n, respectively.

§.§ Preliminaries

We first introduce some concepts required to state the full LDP.

§.§.§ Local weak convergence

We first recall the definition of local weak convergence <cit.> and describe the typical behavior of the component empirical measures {U^_n}, {U^_n} and {U^_n}. A sequence of marked random graphs {(G_n, (X^n, Y^n))}, where G_n has n vertices, converges locally in law to ρ∈() if {U_n}, the corresponding sequence of component empirical measures, converges weakly to ρ in () as n →∞. The following convergence results generalize Remark <ref>.

*Let η^α∈() denote the law of the size-biased Galton-Watson tree with offspring distribution α and i.i.d. vertex marks with law ν and i.i.d. edge marks (independent of the vertex marks) with law ξ, both independent of the underlying tree. It is well known that {U_n^} converges locally in law to η^α as n →∞ (see, e.g., <cit.>, <cit.>).
*For ≥̱0, let denote the law of the Galton-Watson tree with Poi()̱ offspring distribution and i.i.d. vertex marks and i.i.d. edge marks (independent of the vertex marks), both independent of the underlying tree. It is well known that both {U_n^} and {U_n^} converge locally in law to as n →∞ (see, e.g., <cit.>, <cit.>).

§.§.§ Unimodularity

Unimodularity is an important symmetry property associated with () elements. Roughly speaking, this property says that "everything shows up at the root." The precise definition is via the following mass transport principle <cit.>. Let [ ; ] denote the space of unlabeled doubly rooted marked graphs (i.e., unlabeled marked graphs with two distinguished vertices, which could coincide), equipped with the corresponding topology of local convergence (see <cit.>). A probability measure ρ∈() is said to be unimodular if, for any measurable function f : [; ] →_+, we have _ρ[∑_v ∈ f(, o, v)] = _ρ[∑_v ∈ f(, v, o)]. The definition of unimodularity for a probability measure ρ∈() is exactly analogous.
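For the component empirical measure of a finite deterministic graph (the first example in the list below), the mass transport principle reduces to an exchange of the order of summation over ordered pairs of vertices lying in a common component. A short sketch of this check (Python; the graph and the transport function f are our own illustrative choices) follows.

```python
def component(adj, v):
    """Vertex set of the connected component containing v."""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen

def mtp_sides(adj, f):
    """Both sides of the mass transport principle for the component
    empirical measure U_n of a finite graph on n = len(adj) vertices."""
    n = len(adj)
    lhs = rhs = 0.0
    for o in adj:
        for v in component(adj, o):
            lhs += f(adj, o, v) / n   # mass sent out of the root
            rhs += f(adj, v, o) / n   # mass received at the root
    return lhs, rhs

# Illustrative transport: mass from o to each neighbor, weighted by deg(o).
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}
f = lambda g, o, v: float(v in g[o]) * len(g[o])
print(mtp_sides(adj, f))  # the two sides agree
```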
Let denote the set of unimodular probability measures on such that _ρ = $̱. The above definition also applies to () elements, that is, probability measures on (unmarked) rooted graphs. The following are examples of unimodular measures.

*Let (G_n, (x^n, y^n)) be a deterministic marked graph on n vertices. Its component empirical measure U_n ∈() is a unimodular probability measure.
*Let Q ∈() be a probability measure with nonzero finite mean. The size-biased Galton-Watson tree is defined as follows: the root has offspring distribution Q; all subsequent vertices have an independent number of offspring with distribution Q̅, where Q̅(k) := (k+1) Q(k+1)/∑_j ∈ j Q(j), k ∈. The law of the size-biased Galton-Watson tree is unimodular.
*The laws , ≥̱0, and in Remark <ref> are all unimodular.

§.§.§ Admissibility

We now define certain elements that generalize the size-biased edge marginal defined in (<ref>). Rather than edge marginals, the objects associated with the vertices 1 and o are now subtrees (see Figure <ref>), as specified in Definition <ref>. If τ is a (labeled) marked tree and {u, v} is an (undirected) edge in τ, let τ(v ∖ u) ∈× denote the pair (_v(τ∖{u, v}), y_(v, u)), where _v(τ∖{u, v}) is the subgraph of τ rooted at v, after removing the edge {u,v}, viewed as an element of , and y_(v, u) is the mark on the edge {u, v} associated with v. For h ∈, let τ(v ∖ u)_h denote the pair (_v(τ∖{u, v})_h, y_(v, u)), where _v(τ∖{u, v})_h ∈h is the truncation of _v(τ∖{u, v}) up to depth h. Additionally, if either u or v is not a vertex in τ or {u,v} is not an edge in τ, then define τ(v ∖ u) := 0. Note that for a -valued random variable , by Remark <ref> and Definition <ref>, both and are random variables taking values in ×∪{0}. Furthermore, on the event that ≥ 1, both and take values in ×. Next, for ρ∈ with >̱ 0, similarly to the definition in (<ref>), we define the size-biased version ρ̅ of ρ by d/dρ(τ) := deg_τ(o)/ = deg_τ(o)/_ρ, τ∈. Note that is indeed a probability measure because _ρ d/dρ() = 1. Also, note that ρ̅≪ρ and ρ̅(≥ 1) = 1. Likewise, for h ∈ and h̊∈^(̱h), we define the size-biased version h using (<ref>), but with ρ replaced by h̊. Let h ∈, h̊∈(h) and () = h̊, with 0 < _h̊ < ∞.

*Let h denote the marginal law of (_h-1, _h-1). Since h(≥ 1), it follows that h∈((h-1×)^2);
*Let h∈(h-1×) denote the first marginal of h, that is, the law of _h-1;
*For τ^'∈h-1×, let h(· | τ^') denote the conditional law of _h-1 given _h-1 = τ^'.

The following notion of admissibility generalizes the symmetry property of the size-biased edge marginals in (<ref>). The measure h̊ is said to be admissible if h is symmetric.
If = 0, stop and set the unimodular extension to be the root. If ≥ 1, independently for each v ∈ N_o(), sample ^'∈h× with law h(_h-1, _h-1)(·) and replace _h-1 with ^' to obtain a h+1 random element. Set h+1 as the law of the resultant h+1 random element. In other words, for τ∈h, h(τ) is given by h(τ) = h̊-̊1̊(τ_h-1) ∏_v ∈ N_o(τ) h-1(_h-2, _h-2)(_h-1). The expression in (<ref>) clearly shows that h is an extension of h̊-̊1̊ because (h)_h-1 = h̊-̊1̊.

*We can express h in terms of h and some combinatorial quantities; see (<ref>). This alternative expression will be used later in some of the proofs.
*When h = 1, a similar extension procedure (referred to as the "tiling procedure") has been carried out in a different context in <cit.> and <cit.> for certain models of interacting diffusions and jump processes, respectively, to extend the "local equation" that describes the root-and-neighborhood marginal law of a particle system to the law of the full particle system on the infinite random tree.

§.§ Statement of the component empirical measure LDP

We are now in a position to introduce the rate functions associated with the general component empirical measure LDP. Let h ≥ 2. Let h̊∈(h) be admissible, suppose that 0 < _h̊ < ∞, and recall the definition of h from (<ref>). It can be shown that h̊≪h and h≪h (see Lemma <ref>). Define the probability measures _h, _h ∈(h) by _h := h, d_h/dh(τ) := ∏_v ∈ N_o(τ) dh/dh(_h-1, _h-1), τ∈h. It can be shown that _h is indeed a probability measure (see Section <ref>). It is also true that h = h (see Lemma <ref>), and hence, d_h/dh(τ) = ∏_v ∈ N_o(τ) dh/dh(_h-1), τ∈h. For h=1, set _1 =, which is defined analogously in (<ref>). Also, if ρ∈, it can be shown that h̊ is admissible for all h ∈ (see Remark <ref>). We now introduce a family of functions _,̱η: () → [0, ∞], ≥̱0 and η∈, as follows: _,̱η(ρ) := H(o̊ν) if β = 0 and _ρ[] = 0; 1/2 ∑_h ≥ 1 H(h̊_h) + H(h̊_h) if β > 0, ρ∈ and 1̊≪η_1; ∞ otherwise. Here, η plays the role of the "true law" that satisfies the following assumption: η is the law of a random tree that has i.i.d. vertex marks with law ν and i.i.d. edge marks (independent of the vertex marks) with law ξ, both independent of the tree structure, with _η = $̱. We can now state our full LDP. The sequence {U_n^} (respectively, {U_n^} and {U_n^}) satisfies the LDP on () with rate function ^ (respectively, ^ and ^), where ^(ρ) := _,̨(ρ) if ρ_o, deg = α, ∞ otherwise; ^(ρ) := _,̨(ρ); ^(ρ) := ℓ_(̨'̱) + _'̱, '̱(ρ) if '̱ := _ρ[] < ∞, ∞ otherwise, where ℓ_κ is defined in (<ref>) and , and are defined in Remark <ref>. Note that the rate functions in Theorem <ref> consist of a countable sum of terms, with the hth summand being an average of two relative entropies that capture the entropic cost of deviation of h̊ from _h and _h. They admit an interpretation similar to that in the case of the neighborhood empirical measure. Although the families {U_n^} and {U_n^} both converge weakly to in (), note that the rate functions that govern their large deviations behavior are slightly different (although of a similar form). Since the number of edges in the case is fixed, the probability that the average degree of the root deviates from the true mean is superexponentially small; that is, ^(ρ) = ∞ unless ρ∈. However, for the sequence, deviations of the average degree are permissible at the large deviation scale, and this is captured by the additional term ℓ_κ in ^.

§.§ Discussion and outline of proofs

The proof of Theorem <ref> is carried out in two steps.
Step 1: As indicated in the introduction, the LDP for {U^_n} was proved in <cit.> and the LDPs for {U^_n} and {U^_n} were proved in <cit.>, by leveraging the colored configuration model of <cit.> and its extension in <cit.> to marked graphs. The rate functions for these LDPs (denoted by ^, ^ and ^) are expressed in terms of various combinatorial quantities, including the so-called microstate entropy, and thus their definitions are deferred to Section <ref>; see (<ref>)-(<ref>). For each of the random graph sequences, we leverage admissibility properties of ρ_h to first rewrite these combinatorial quantities in terms of suitable relative entropies with respect to the true law η (see Assumption <ref>), to yield the intermediate representation for the rate functions given in Theorem <ref> below. To define the rate functions, we first define _,̱η : () → [0, ∞], ≥̱0, by _,̱η(ρ) := H(o̊ν) if β = 0 and _ρ[] = 0; H(1̊η_1) - β/2 H(1) + ∑_h ≥ 2 H(h̊h) - β/2 H(hh) if β > 0, ρ∈, 1̊≪η_1, and < ∞; ∞ otherwise, where η plays the role of the "true law" as stated in Assumption <ref>. The sequence {U_n^} (respectively, {U_n^} and {U_n^}) satisfies the LDP on () with rate function ^ (respectively, ^ and ^), where ^(ρ) := _,̨(ρ) if ρ_o, deg = α, ∞ otherwise; ^(ρ) := _,̨(ρ); ^(ρ) := ℓ_(̨'̱) + _'̱, '̱(ρ) if '̱ := _ρ[] < ∞, ∞ otherwise. The proof of Theorem <ref> is carried out in Section <ref>. Although _,̱η is defined in terms of differences between relative entropies, both of which could possibly be ∞, we show in Lemma <ref> that _,̱η is well defined.

Step 2: This step is more involved and entails showing the following: The following identity is satisfied: _,̱η = _,̱η. The proof of Theorem <ref> exploits various properties of relative entropy, and uses unimodularity, the mass transport principle and random tree labelings to show that the relative entropy terms appearing in (<ref>) can be expressed as expectations under the marginal of a size-biased distribution and to express the unimodular extensions in terms of suitable conditional laws. The result follows from Theorems <ref> and <ref>, and the definitions of the rate functions in (<ref>)-(<ref>) and (<ref>)-(<ref>).

§ GIBBS CONDITIONING PRINCIPLE: PROOF OF PROPOSITION <REF>

Consider the optimization problem v^* := inf ^(μ) : μ∈(), ⟨μ, f ⟩≥ c. Note that our setting satisfies the assumptions required for the standard Gibbs conditioning principle for large deviations <cit.>. Indeed,

*With := {(τ, X) ∈ : deg_τ(o) ≤ M} ⊂, by Theorem <ref> and <cit.>, the sequence {L_n} = {L^_n} satisfies the LDP on () with rate function ^ restricted to ();
*The map μ↦⟨μ, f ⟩ is linear and continuous on ();
*The constraint set C := {μ∈(): ⟨μ, f ⟩≥ c} satisfies
* inf_μ∈ C ^(μ) < ∞;
*Let δ > 0. With C_δ := {μ∈(): ⟨μ, f ⟩≥ c-δ}, we have C = ∩_δ > 0 C_δ, and (L^_n ∈ C_δ) > 0 for all n;
* C is contained in the interior of C_δ for all δ > 0.

Moreover, since {μ∈() : μ_o, deg = α, ⟨μ, f ⟩≥ c} is closed in (), and ^ has compact level sets (which is a consequence of the LDP in Theorem <ref>), it follows that the infimum in (<ref>) is attained. By <cit.>, the proof will be complete once we show that μ^*,c defined in (<ref>) is the unique solution to the optimization problem (<ref>). We do this in three steps.

Step 1: We first show that, for any admissible μ∈(), ⟨μ, f⟩ depends on μ only via the joint law of the root degree and root mark.
By (<ref>) and the admissibility ofμ, we have 1/_μ{_1 = a} = (a) = (a) = 1/_μ{_o= a}.Also, since the random variables{_1, …_i}given = i ≥ 1are exchangeable, by the admissibility ofμ, we also have⟨μ, f ⟩ = _μ∑_v ∈ N_o() h(_v) =_μ h(_1)= ⟨̨, h ⟩ = ⟨̨, h ⟩= _μ h(_o).This proves the claim. Step 2: Show thatv^*in (<ref>) solves the following simpler constrained optimization problem:v^* = inf{ H(γα⊗ν) : γ∈([M_0] ×), ∑_x ∈γ(n,x) = α(n)∀ n ∈ [M_0],∑_n=0^M ∑_x ∈ n h(x) γ(n, x) ≥ c},where[M_0] = {0,1, …, M}. For anyμ∈(), we introduce the following notation: letμ_𝐝_o, _odenote the joint law of(, _o) = (𝐝_o, _o)when(, )has lawμ, letμ__o |𝐝_odenote the conditional law of_ogiven𝐝_o, and letμ__1, …, _𝐝_o | 𝐝_o, _odenote the conditional joint law of(_1, …, _𝐝_o)given(𝐝_o, _o).Given an admissibleμ, we now define an admissibleψsuch thatψ_𝐝_o, _o = μ_𝐝_o, _oand^(μ) ≥^(ψ). Defineψ(τ, X) := α(deg_τ(o))μ__o |𝐝_o(X_o |deg_τ(o)) ∏_v ∈ N_o(τ)(X_v),(τ, X) ∈.We first verify thatψis admissible. Note that_ψ[𝐝_o] = $̨ since the degree distribution of ψ is α. For a, b ∈, using the fact that _ψ{_1 = a} | 𝐝_o = (a) on the set 𝐝_o ≥ 1,(<ref>), and the identityψ_𝐝_o, _o = μ_𝐝_o, _o,we have ψ̅(a,b) = 1/_ψ𝐝_o {_1 = a, _o = b} = 1/_ψ_ψ{_o = b} |𝐝_o𝐝_o_ψ{_1 = a} |𝐝_o =(a)/_ψ{_o = b}𝐝_o= (a) (b).It follows that ψ is admissible. Next, we show that ^(μ) ≥^(ψ).Using the chain rule and non-negativity of relative entropy, we obtainH(μ) = H(μ_𝐝_o, _o_𝐝_o, _o)+ ∫ H(μ__1, …, _𝐝_o | 𝐝_o, _o__1, …, _𝐝_o | 𝐝_o, _o) d μ_𝐝_o, _o≥ H(μ_𝐝_o, _o_𝐝_o, _o).An exact analogous argument yieldsH(μ)≥ H(μ_𝐝_o, _o_𝐝_o, _o).By (<ref>), (<ref>) and (<ref>),the conditional joint law of the leaf marks given the degree and the root mark is the same under ψ, ψ and ψ. Also, note that ψ_𝐝_o, _o = μ_𝐝_o, _o, and _𝐝_o, _o = _𝐝_o, _o = α⊗ν. Together with (<ref>),(<ref>), and the definition of ^ in (<ref>),it follows that^(μ) ≥^(ψ) = H(ψ_𝐝_o, _oα⊗ν).Using (<ref>), (<ref>) and the fact that ψ_𝐝_o, _o = μ_𝐝_o, _o, (<ref>) follows. Step 3: We finally solve the finite-dimensional constrained optimization problem in (<ref>). First, note that the constraint set is closed and convex. Since H(·α⊗ν) is strictly convex and has compact lower level sets, it follows that there exists a unique minimizer, which we denote by γ. The necessary conditions for optimality imply that there exist λ≥ 0 and λ' ∈ such that1 + logγ(n, x)/α(n) ν(x) - λ' - λ n h(x) = 0for alln ∈{0,1,…, M}, x ∈, λc - ∑_n =0^M ∑_x ∈ n h(x) γ(n,x) = 0.From (<ref>), the constraint ∑_x ∈γ(n,x) = α(n) for all n ∈{0,1, …, M} implies thatexp{1 - λ' } = ∑_x ∈ν(x) exp{λ n h(x)}.Using this, (<ref>) reduces toγ(n,x) = α(n) ν(x) exp{λ n h(x)}/∑_y ∈ν(y) exp{λ n h(y)},n ∈{0,1,…, M}, x ∈.From (<ref>), it follows that either λ = 0 or ∑_n,x n h(x) γ(n,x) = c. However, λ = 0 is not possible since it would imply that γ = α⊗ν is the joint law of the degree and the root mark under _1. But ⟨_1, f ⟩ < c by assumption, hence the condition ∑_n=0^M ∑_x ∈ n h(x) γ(n, x) ≥ c is violated. Therefore,we must have ∑_n,x n h(x) γ(n,x) = c, that is,c =∑_n = 0^M ∑_x ∈ n α(n)ν(x)h(x) exp{λ n h(x)}/∑_y ∈ν(y) exp{λ n h(y)} =:g(λ).To show the existence of λ satisfying the above, note that g is continuous in λ. We have g(0) = __1[f] < c by assumption. Also, g(λ) →m̨ąx̨_x ∈h(x) > c as λ→∞, whence there exists λ > 0 such that the first equality in the previous display holds. It follows that the optimizer of (<ref>) is given by (<ref>) where λ > 0 is chosen so that ∑_n, x n h(x) γ(n, x) = c. 
The uniqueness of the optimizer γ follows from the strict convexity of H(·α⊗ν). Finally, given this optimizer γ for the problem in (<ref>), the unique optimizer for the optimization problem in (<ref>) can be constructed via (<ref>). This completes the proof. § PROOF OF THE ALTERNATIVE REPRESENTATIONIn this section, we prove Theorem <ref>.We show equality of the first summands of _,̱η and _,̱η in Section <ref> and equality of each of the subsequent summands in Section <ref>. The proofs rely on some preliminary results first established in Section <ref>. Throughout this section, we fix ≥̱0 and the “true law”η∈ as defined in Assumption <ref>.§.§ Labeled treesWe start with two simple observations about () elements and the true law η. Let h ∈ and leth̊∈(h). Letbe a h random element with law h̊, that is, () = h̊. Using the Ulam-Harris-Neveu labeling described in Section <ref>, one has the following exchangeability properties: *Conditioned on = i ≥ 1,the random variables {(_h-1, _h-1),v ∈ [i])} are exchangeable.*For any v ∈ N_o(), the conditional marginal law of (_h-1, _h-1) given = i ≥ 1 is the same as the conditional marginal law of (_h-1, _h-1) given = i. That is,(_h-1, _h-1) | = i= ((_h-1, _h-1) | = i).Therefore, if f is a measurable function on ×h-1 with suitable integrability properties, then _h̊ ∑_v ∈ N_o() f_h-1, _h-1|= _h̊ f_h-1, _h-1| .Suppose () = η, where η satisfies Assumption <ref>.*The random variables {_0, v ∈ [i]} are conditionally independent given = i ≥ 1.*The random variables {_0, v ∈ N_o()} are conditionally independent given {_0, v ∈ N_o()}. Also, for each v ∈ N_o(), the random variable _0 is conditionally independent of {_0, u ∈ N_o(), u ≠ v} given _0. However, note that the random variables _0 and _0 need not be independent since the edge mark on the directed edges (o , v) and (v , o) could be correlated.§.§ Alternative form of the first term of _,̱ηThroughout this section, we fix >̱ 0 andρ∈. We start with a simple observation about the measure _1 defined in (<ref>).Let >̱ 0 and ρ∈. If 1̊≪η_1, then 1≪ andH(1) < ∞.For the first assertion, it follows immediately fromthedefinition in (<ref>) that1̊ (·|≥ 1) and η_1(·|≥ 1) are equivalent to 1 and1, respectively. Since 1̊≪η_1 by assumption, we obtain1∼1̊^1,o(·|≥ 1) ≪η_1^1,o(·|≥ 1) ∼,where `∼' denotes the equivalence of measures.The second assertion is an immediate consequence of the first assertion and the fact that (×) × (×) is a finite set. We now make the important observation that the measures_1 and _1 defined in Remark <ref> by (<ref>) and (<ref>), respectively,are indeed probability measures and satisfy certain absolute continuity properties. The proofs are deferred to Appendix <ref>.The measures _1 and _1 are probability measures on 1. Furthermore, 1̊≪_1 and 1̊≪_1.We nowexpress H̱(1) as an expectation under 1̊. Suppose that1̊≪η_1. ThenH̱(1) = _1̊∑_v ∈ N_o()logd1/d_0, _0.Since 1̊≪η_1, we have 1≪ by Lemma <ref>. Therefore,using(<ref>),(≥ 1) = 1 and the identity =̱_ρ,H̱(1) = _1logd1/d = _1logd1/d_0, _0{≥ 1} = _1̊d1/d1̊logd1/d_0, _0{≥ 1}=_1̊logd1/d_0, _0. By Lemma <ref>,we also have H(1) < ∞.Therefore, conditioning onand using (<ref>) with h=1 and f(·, ·) =logd1/d (·, ·), we obtainH̱(1) = _1̊∑_v ∈ N_o()_1̊logd1/d_0, _0|,which is equal to the right-hand side of(<ref>). We now establish a key result, which expresses H(1) in terms of_1 and _1.Suppose that1̊≪η_1. Then we haveH̱(1) = _1̊logd_1/dη_1() + _1̊logd_1/dη_1().We first show that_1̊∑_v ∈ N_o()logd1/d_0 < ∞.By Lemma <ref>, we haveH(1) < ∞. 
Using the chain rule and the non-negativity of relative entropy, it follows thatH(1) < ∞.Using the admissibility of 1̊ and η_1 (which follow from the unimodularity of ρ (resp. η); see Remark <ref>), and(<ref>), we haveH̱(1) = _1logd1/d = _1logd1/d_0 = _1̊logd1/d_0.Since H(1) < ∞ and ≥ 0, the above shows that for a ∈{+, -},_1̊logd1/d_0^a< ∞.Using (<ref>) with h=1 and f(τ, ·) = logd1/d(·) ^a for all τ∈×0, one obtains (<ref>).Next, we show that _1̊∑_v ∈ N_o()logd1/d_0= _1̊∑_v ∈ N_o()logd1/d_0.Recall that ρ is unimodular. For a ∈{+, -}, define the function f^a: [; ] →_+ byf^a(τ, o_1, o_2) := {o_1 ∈ N_o_2(τ)}logd1/dτ(o_1 ∖ o_2)_0^a,for (τ, o_1, o_2) ∈[ ; ]. Here, [; ] ⊂[; ] denotes the set of marked doubly rooted trees equipped with the natural local topology. Applying (<ref>) with f = f^a, we obtain_1̊∑_v ∈ N_o()logd1/d_0^a = _1̊∑_v ∈ N_o()logd1/d_0^a.Combining the last display for a ∈{+, -} with (<ref>), we conclude that(<ref>) holds.Next, by the admissibility of 1̊ and η_1, for any τ, τ' ∈×, we haved1/d(τ, τ^') = d1/d(τ') ×d1/d(τ | τ')= d1/d(τ') ×d1/d(τ | τ').Together with Lemma <ref> (which is applicable due to our assumption 1̊≪η_1),(<ref>), the fact that H(1) < ∞ and (<ref>), this implies that H̱(1) = _1̊∑_v ∈ N_o()logd1/d_0 + _1̊∑_v ∈ N_o()logd1/d(_0 | _0) =_1̊∑_v ∈ N_o()logd1/d_0 + _1̊∑_v ∈ N_o()logd1/d(_0 | _0).Also, the definition of _1 in (<ref>) implies _1̊logd_1/dη_1() = _1̊∑_v ∈ N_o()logd1/d(_0).Similarly, the definition of _1 in (<ref>) implies _1̊logd_1/dη_1() = _1̊∑_v ∈ N_o()logd1/d(_0 | _0).Thus, (<ref>) follows from (<ref>), (<ref>) and (<ref>). Combining the previousresults, we now obtain the main result of this subsection. Suppose that 1̊≪η_1. ThenH (1̊η_1) - /2 H(1) =1/2H(1̊_1) + H(1̊_1).Since 1̊≪η_1, we have H(1) < ∞ by Lemma <ref>.Thus, by (<ref>) and the second assertion of Lemma <ref>, we have_1 ≪η_1 and 1̊≪_1, respectively. Hence, d1̊/dη_1 = d1̊/d_1×d_1/dη_1. This impliesH(1̊η_1)= _1̊logd1̊/d_1+ _1̊logd_1/dη_1 =H(1̊_1) + _1̊logd_1/dη_1.Similarly, by (<ref>) and the second assertion of Lemma <ref>, we have _1 ≪η_1 and1̊≪_1.Hence, d1̊/dη_1 = d1̊/d_1×d_1/dη_1, which impliesH(1̊η_1)=H(1̊_1) + _1̊logd_1/dη_1.Summing (<ref>) and(<ref>), and using (<ref>), we obtain (<ref>).§.§ Alternative form of the remaining terms of _,̱ηWe start with two preliminary lemmas. Lemma <ref> shows that the rate function _,̱η in (<ref>) is well defined, and Lemma <ref> shows that _h is indeed a probability measure and establishes various relations between h̊, h, h and _h. Their proofs are involved and technical, and hence deferred to Appendices<ref> and <ref>, respectively. Let >̱ 0 and let ρ∈. If< ∞, then the following properties hold:* H(h̊) < ∞ for all h ∈,* H(h̊h) < ∞ for all h ≥ 2,* H(h) < ∞ for all h ∈,* H(hh) < ∞ for all h ≥ 2.Let >̱ 0 and let ρ∈. The following properties hold for h ≥ 2:* h̊≪h and h≪h.* h = h.* _h is a probability measure. * h̊≪_h. For the rest of this section we fix >̱ 0,ρ∈ and h ≥ 2. We now express H(hh) in the definition of _,̱η as an expectation with respect toh̊.If< ∞, then it follows thatH̱(hh) = _h̊ ∑_v ∈ N_o()logdh/dh(_h-1, _h-1.By <ref> of Lemma <ref>, we haveh̊≪h andh≪h. Together with the fact that h(≥ 1),H̱(hh) = _hlogdh/dh = _hlogdh/dh(_h-1,_h-1){≥ 1}.Substituting the expression forh from (<ref>) into the above display yieldsH̱(hh)= _h̊logdh/dh(_h-1,_h-1).Also, since < ∞, it follows from Lemma <ref> thatH(hh) < ∞. Therefore, from (<ref>), conditioning onand using (<ref>) with f(·, ·) = logdh/dh(·, ·), we obtain (<ref>). 
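The computations above repeatedly combine the size-biased density dμ̅/dμ(τ) = deg_τ(o)/_μ with the exchangeability of the children's data, in the form: the μ-expectation of a sum over the children equals _μ[] times the μ̅-expectation of the same quantity at a uniformly chosen child. A toy numerical check of this mechanism (a hypothetical depth-1 law with vertex marks only; all data are illustrative and chosen by us) is:

```python
# A toy depth-1 law mu: probabilities of (root mark, tuple of leaf marks).
mu = {("a", ()): 0.2,
      ("a", ("a",)): 0.1, ("a", ("b",)): 0.2,
      ("b", ("a", "b")): 0.3, ("b", ("b", "b")): 0.2}

mean_deg = sum(p * len(leaves) for (_, leaves), p in mu.items())

phi = lambda leaf_mark: 1.0 if leaf_mark == "b" else 0.0

# Left side: E_mu[ sum over children of phi ].
lhs = sum(p * sum(phi(m) for m in leaves) for (_, leaves), p in mu.items())

# Right side: E_mu[deg] * E_{size-biased mu}[ phi at a uniformly chosen child ].
rhs = mean_deg * sum(
    (p * len(leaves) / mean_deg)                   # d(mu-bar)/d(mu) = deg / E[deg]
    * (sum(phi(m) for m in leaves) / len(leaves))  # uniform child under mu-bar
    for (_, leaves), p in mu.items() if leaves)

print(lhs, rhs)  # the two sides coincide
```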
We are now in a position to establish the main result of this subsection. Suppose that < ∞. Then for h ≥ 2, we have H(h̊h) - /2 H(hh) = 1/2 H(h̊_h) + H(h̊_h). Since < ∞, it follows from <ref> and <ref> of Lemma <ref> that H(h̊h) < ∞ and H(hh) < ∞. We first write H(h̊h) in terms of H(h̊_h). Since ρ∈, by <ref> and <ref> of Lemma <ref>, it follows that h̊≪h and h̊≪_h, respectively. Moreover, _h ≪h by (<ref>). Hence, writing dh̊/dh = dh̊/d_h × d_h/dh, we have H(h̊h) = _h̊ log dh̊/d_h × d_h/dh = H(h̊_h) + _h̊ log d_h/dh. Using (<ref>) to rewrite log d_h/dh τ for τ∈h, and invoking Lemma <ref> (which is justified by the assumption < ∞), we obtain H(h̊h) = H(h̊_h) + _h̊ ∑_v ∈ N_o() log dh/dh(_h-1, _h-1) = H(h̊_h) + H̱(hh). On the other hand, since _h = h from (<ref>), we have H(h̊h) = H(h̊_h). Thus, (<ref>) follows by adding H(h̊h) - H̱(hh) to both sides of (<ref>) and dividing by 2.

§.§ Proof of Theorem <ref>

We fix ≥̱0 and assume that η is either or . We start with a preliminary lemma. Let ρ∈. If = ∞, then H(ρ_o, deg _o, deg) = ∞. Let ρ∈ be such that = ∞. This, in particular, implies that 0 < <̱∞. Let denote the geometric distribution with parameter 1/2. It follows that 0 ≤ H(ρ_o, deg ) = ∑_k ∈ ρ_o, deg(k) log ρ_o, deg(k)/(k) = -H(ρ_o, deg) + log 2 ·, where H is the Shannon entropy, and hence H(ρ_o, deg) ≤ log 2 · = ḻo̱g̱2. On the other hand, since there exist constants c_1, c_2 > 0 such that log(k!) ≥ c_1 + c_2 k log k by Stirling's approximation, H(ρ_o, deg _o, deg) ≥ - H(ρ_o, deg) + -̱log + c_1 + c_2, and the result follows from (<ref>) and the assumption = ∞. We can now prove Theorem <ref>. If =̱ 0, from the definitions of _,̱η and _,̱η in (<ref>) and (<ref>), respectively, we have _,̱η(ρ) = _,̱η(ρ) = H(o̊ν). Next, assume >̱ 0. In this situation, we consider three cases.

Case I: Suppose ρ∉. Then by (<ref>) and (<ref>) we have _,̱η(ρ) = _,̱η(ρ) = ∞.

Case II: Suppose ρ∈ and = ∞. Then from the definition of _,̱η in (<ref>), we have _,̱η(ρ) = ∞. We now argue that _,̱η(ρ) = ∞. Due to (<ref>), we may assume that 1̊≪η_1. Since = ∞ and α has finite support, it follows that η = for some >̱ 0. From the definitions of _,̱η and _1 in (<ref>) and (<ref>), respectively, and using the chain rule and non-negativity of relative entropy, we obtain 2_,̱η(ρ) ≥ H(1̊_1) ≥ H(ρ_o, deg _o, deg) = ∞, where the last inequality follows from Lemma <ref>. This shows _,̱η(ρ) = _,̱η(ρ) = ∞.

Case III: Suppose ρ∈ and < ∞. Then Proposition <ref>, Proposition <ref>, (<ref>), and (<ref>) together imply _,̱η(ρ) = _,̱η(ρ). This completes the proof.

§.§ Proof of Corollary <ref>

Let μ∈^(̱1) be such that is symmetric, _μ = >̱ 0, and μ≪η_1. From (<ref>)-(<ref>) it is automatic that ≪. Since η^1|o(· | x_o) = ν(·) for all x_o ∈ in the case when there are no edge marks, it follows that d/d(τ) = ∏_v ∈ N_o(τ) d/d(⊗)X_v, X_o, τ = (τ, X) ∈. Therefore, by the definition in (<ref>) and the exchangeability property in Remark <ref>, we get H(μ) = H(μ) + _μ log d/d = H(μ) + _μ log d/d(⊗)(_1, _o) = H(μ) + _μ̅ log d/d(⊗)(_1, _o) = H(μ) + H̱(⊗), and the result follows.

§ PROOF OF THEOREMS <REF> AND <REF>

In this section, we complete the proof of our main result. We define the microstate entropy and related quantities in Section <ref> and state the existing LDPs with rate functions involving the microstate entropy in Section <ref>. We then prove Theorem <ref> by establishing the equivalence between these rate functions and (<ref>)-(<ref>) in Sections <ref>-<ref>. Together with Theorem <ref>, this completes the proof of the component empirical measure LDP (Theorem <ref>).
Finally, the LDP for the neighborhood empirical measure (Theorem <ref>) is deduced as a corollary to Theorem <ref> in Section <ref>.§.§ PreliminariesWe begin with some preliminaries. Throughout this subsection, we fix >̱ 0. Also, recallfrom Section <ref>.§.§.§ Covariance measures associated withelementsWe start by defining some combinatorial quantities associated withelements and identify the size-biased marginals in Definition <ref> using them. Let h ≥ 1. Recall Definition <ref>. We make the following definition.For h ∈, given τ, τ' ∈h-1×, define E_h(τ, τ^'): h→ byE_h(τ, τ^') () := | v ∈ N_o() : _h-1 = τ, _h-1 = τ^'|, ∈.In words, E_h(τ, τ')() is equal to the number of neighbors v of the root o in the marked treesuch that the pair of trees comprised of (i) the (h-1)-neighborhood of v with root v in the treewith the edge {v, o} removed (but with the edge mark y_(v, o) retained), and (ii) the (h-1)-neighborhood of o with root o in the treewith the edge {v,o} removed (but with the edge mark y_(o, v) retained) is equal to (τ, τ').We now describe certain covariance measures associated withelements that were first defined for the case of unmarked graphs in<cit.>, and then extended to marked graphs in <cit.>.Let ρ∈. For h ∈, define h∈((h-1×)^2) byh(τ, τ^') := 1/_h̊E_h(τ, τ^') (), for τ, τ^'∈h-1×.Also, let h^1 (resp. h^o) denote the first (resp. second) marginal of h.The arguments τ and τ' in the definition of E_h in (<ref>) are elements of h-1×, that is, they are rooted marked trees of depth at most h-1 (with an additional edge mark); in particular, they could also be trees of depth h-2 (with an additional edge mark). The same applies to the arguments of h as well.In Definition <ref>, when h = 1, both τ and τ' are × elements. Therefore, in Definition <ref>,1 is a probability measure on (×)^2. Let h ∈, τ, τ' ∈h-1×, and h̊∈^(̱h). Let() =h̊. For ∈h with deg_(o) ≥ 1, the conditional probability that (_h-1 = τ, _h-1 = τ') given = is h̊_h-1 = τ, _h-1 = τ'| ==E_h(τ, τ^')()/deg_(o).This follows from the Ulam-Harris-Neveu labeling scheme in Section <ref>. We now show that h coincides with size-based marginals h in Definition <ref>.Let ρ∈. For every h ∈, we have h = h.Fix h ∈. Applying first Definition <ref> and Remark <ref>, then (<ref>) and the identity =̱_ρ, and finally (<ref>), for τ, τ^'∈h-1×, we haveh(τ, τ^') = ∑_∈h:deg_(o) ≥ 1h() E_h(τ, τ^')()/deg_(o) = ∑_∈h: deg_(o) ≥ 1h̊() E_h(τ, τ^')()/ = h(τ, τ^'). In particular, we have:In view of Lemma <ref> and the definition of admissibility in Definition <ref>, the admissibility of h̊ is equivalent to the condition that h is symmetric. If ρ is an element of from Definition <ref>, we claim that h̊ is admissible for each h ∈. Indeed, for any τ, τ' ∈h-1×, applying the mass transport principle (<ref>) with the function f: [ ; ] →_+ defined byf(, o_1, o_2) = {o_1 ∈ N_o_2(),(o_2 ∖ o_1)_h-1 = τ,(o_1 ∖ o_2)_h-1 = τ^'},(, o_1, o_2) ∈[; ],it follows that h(τ, τ^') = h(τ^', τ). By Remark <ref>, this shows that h̊ is admissible. §.§.§ Microstate entropyWe start with some definitions. Given ρ∈, define the function : ×→_+ by(y,y^') := ∑̱_x,x^'∈1((x,y), (x',y^')),y,y^'∈.Note that /∈(×), since ∑_y,y^'∈(y,y^') = ,̱and / is the (×)-marginal of 1∈((×) × (×)). Also, defines()̱ := /2 - /2log,̱ s⃗():= ∑_y, y^'∈(y,y^')/2 - (y,y^')/2log(y,y^'). Fix h ∈. If < ∞, by Lemma <ref> and Lemma <ref>, we see thatH(h̊) < ∞ and H(h) < ∞. 
Define J_h : (h) → [-∞, ∞) by J_h(h̊) := -s()̱ + H(h̊) - /2 H(π_h̊) - ∑_τ, τ^'∈h-1× _h̊[log E_h(τ, τ^')()!], if h̊∈^(̱h), h̊ is admissible, and h < ∞; and J_h(h̊) := -∞ otherwise. Although J_h depends on $̱, we suppress this dependence for ease of readability. We now define the microstate entropy. Given θ∈() and a symmetric function : ×→_+, define the map Σ_, θ : () → [-∞, ∞] as follows: Σ_, θ(ρ) := lim_h →∞ J_h(h̊) if 0 < :̱= _ρ < ∞, ρ∈, = , and o̊ = θ; -∞ otherwise. The quantity Σ_, θ is well defined due to the fact that h ↦ J_h(h̊) is non-increasing (see <cit.>, <cit.>). This quantity was first introduced in <cit.> in the setting of unmarked graphs and extended to the marked setting in <cit.>. In both settings, it is referred to as the "microstate entropy" (see also <cit.>) since it is defined in terms of a graph counting problem. Note that Σ_, θ(ρ) = -∞ whenever either (i) ≠, (ii) o̊≠θ, or (iii) ρ∉.

§.§ The LDP for the component empirical measure

We now state the existing results on the component empirical measure LDPs. Define the rate function for the graph sequence by ^(ρ) := H(o̊ν) + /2 H/ξ̅ - _α[log !] + H(α) + H(o̊) + s⃗() - 2s()̨ - Σ_, o̊(ρ), if ρ_o, deg = α; and ^(ρ) := ∞ otherwise. Here, is defined as in (<ref>) with $̱ replaced by $̨. For >̱ 0, define _:̱() → [0, ∞] by _(̱ρ) := H(o̊) + H(o̊ν) + /2 H/ξ̅ + s⃗() - Σ_, o̊(ρ) if _ρ[] = ,̱ and ∞ otherwise, and define the rate functions for the and sequences by ^(ρ) := _(̨ρ) and ^(ρ) := ℓ_(̨0) + H(o̊ν) if _ρ[] = 0; ℓ_(̨'̱) + _'̱(ρ) if 0 < '̱ := _ρ[] < ∞; ∞ otherwise. The sequence {U^_n} (respectively, {U^_n} and {U^_n}) satisfies the LDP on () with rate function ^ (respectively, ^ and ^). The LDP for {U^_n} on () with rate function ^ is proved in <cit.>. The LDP for {U^_n} (resp. {U^_n}) on () with rate function ^ (resp. ^) is proved in <cit.> (resp. <cit.>).

§.§ Proof of ^ = ^

§.§.§ Preliminary lemmas

We first need to establish some preliminary results. Recall the definitions and for ≥̱0 from Remark <ref>. Let β > 0 and let ρ∈. The quantities 1, ξ̅ and defined in (<ref>), (<ref>) and (<ref>), respectively, satisfy the following identities:

(i) H(π_1̊π__1) = H(π_1̊π__1) = -H(π_1̊) - 2 ∑_y, y^'∈, x, x^'∈ π_1̊((x,y), (x^', y^')) log(ν(x^')) + H/ξ̅ - 1/ ∑_y,y^'∈(y,y^') log((y,y^')) + log.̱

(ii) ∑_y,y^'∈(y,y^') log(ξ̅(y,y^')) = -H̱/ξ̅ + ∑_y,y^'∈(y,y^') log((y,y^')) - ḻo̱g̱.̱

We first show the second identity (<ref>). Using the definition of relative entropy in (<ref>) and the fact that ∑_y,y^'∈(y, y^') = $̱ (see (<ref>)), we see that ∑_y,y^'∈(y,y^') log(ξ̅(y,y^')) = -∑̱_y,y^'∈(y,y^')/ log(y,y^')//ξ̅(y,y^') + ∑_y,y^'∈(y,y^') log((y,y^')/)̱, and (<ref>) follows. We now show the first identity (<ref>). Since both and have i.i.d. vertex and edge marks that are independent of each other and the tree structure, by the definition of in (<ref>) and Remark <ref>, it follows that both π__1((x, y), (x^', y^')) and π__1((x, y), (x^', y^')) are equal to ν(x) ν(x^') ξ̅(y, y^') for all (x,y), (x', y') ∈×. This yields the identity H(π_1̊π__1) = H(π_1̊π__1). Moreover, together with (<ref>), (<ref>), the admissibility of 1̊ (which is a consequence of the unimodularity of ρ (resp. η); see Remark <ref>), and the definition of in (<ref>), this implies H(π_1̊π__1) = -H(π_1̊) - ∑_y, y^'∈, x, x^'∈ π_1̊((x,y), (x^', y^')) log ν(x) ν(x^') ξ̅(y, y^') = -H(π_1̊) - 2 ∑_y, y^'∈, x, x^'∈ π_1̊((x,y), (x^', y^')) log ν(x^') - ∑_y, y^'∈, x, x^'∈ π_1̊((x,y), (x^', y^')) log ξ̅(y,y^') = -H(π_1̊) - 2 ∑_y, y^'∈, x, x^'∈ π_1̊((x,y), (x^', y^')) log(ν(x^')) - 1/ ∑_y, y^'∈(y,y^') log(ξ̅(y,y^')). Together with (<ref>), (<ref>), and (<ref>), we arrive at (<ref>). Let β > 0.
Let ρ∈and suppose that < ∞. If J_1(1̊) > -∞, thenΣ_, o̊(ρ) = J_1(1̊) -∑_h ≥ 2 H(ρ_hρ^*_h) - /2 H(π_ρ_hπ_ρ^*_h).By repeating the computations in <cit.> verbatim for the marked case, we can deduce the relationJ_h(ρ_h) = J_h-1(h̊-̊1̊)-H(ρ_hρ^*_h) - /2 H(π_ρ_hπ_ρ^*_h),h ≥ 2.(In particular,by Lemma <ref>, we have J_h(h̊) > -∞ for all h ≥ 2.) Together with the definition of Σ in (<ref>), the conclusion follows.Let ρ∈ with ρ_o, deg = α. ThenJ_1(1̊) = H(o̊ν) + /2 H/ξ̅- _α[log !] + H(α) + s⃗() - 2s()̨ +H(o̊) +H(1̊_1) - /2 H(1) .Since ρ_o, deg = α and α has finite support, we have H(α) < ∞, _α[log !] < ∞ and H(1̊_1) < ∞. Together with the fact that bothandare finite sets, it follows that all the terms on the right-hand side of (<ref>) are finite. Using (<ref>)and (<ref>),we haveH(1̊_1) = -H(1̊) - ∑_τ∈11̊(τ) log_1(τ).Sinceis the law of the size-biased Galton-Watson tree with offspring distribution α with i.i.d. vertex marks with law ν and i.i.d edge marks with law ξ, both independent of each other and theunderlying unmarked tree (see Remark <ref>), for any τ∈1 with root mark x, we have, with E_1 given by (<ref>),_1(τ) = α(deg_τ(o))deg_τ(o)!ν(x)∏_x^'∈y,y^'∈(ν(x^') ξ̅(y,y^'))^E_1((x,y), (x^', y^'))(τ)/E_1((x,y), (x^', y^'))(τ)!.This implies-∑_τ∈11̊(τ) log_1(τ) = I_1 + I_2 + I_3,where, using the identity ∑_τ∈1 : τ_o = x1̊(τ) = o̊(x) andrecalling that ρ_o, deg = α, we have I_1:= -∑_x ∈o̊(x) logν(x) - _αlog !- _αα() = H(o̊) + H(o̊ν) -_α[log ! ] + H(α), I_2 := -∑_y, y^'∈, x, x^'∈∑_τ∈1:τ_o = x1̊(τ) E_1((x, y), (x^', y^'))(τ)log(ξ̅(y,y^') ν(x^')) , I_3 := ∑_y, y^'∈, x, x^'∈∑_τ∈1 : τ_o = x1̊(τ) logE_1((x, y), (x^', y^'))(τ)!. We first consider I_2. By(<ref>), (<ref>) and the fact that _α = $̨, we have∑_τ∈1:τ_o = x1̊(τ) E_1((x, y), (x^', y^'))(τ) = 1((x,y), (x^', y^')). Together with the definition ofin (<ref>), this impliesI_2 =- ∑̨_ y, y^'∈, x, x^'∈π_1̊((x,y), (x^', y^')) log (ξ̅(y,y^') ν(x^'))= -∑_y,y^'∈(y,y^') log(ξ̅(y,y^')) - ∑̨_y, y^'∈x, x^'∈π_1̊((x,y), (x^', y^')) log (ν(x^')).Combining the last two displays with (<ref>) and (<ref>), we see that H(1̊_1) = -H(1̊) +H(ρ_oν) +H(ρ_o) - _αlog ! + H(α)- ∑_y,y^'∈(y,y^') log(ξ̅(y,y^')) - ∑̨_ y, y^'∈, x, x^'∈π_1̊((x,y), (x^', y^'))log (ν(x^')) + ∑_ y, y^'∈, x, x^'∈_1̊logE_1((x,y), (x^', y^'))()!.Usingthe expression in (<ref>) for the sixth term on the right-hand side above, the identity (<ref>) andthe definition ofJ_1in (<ref>), we obtainH(1̊_1) - /2 H(π_1̊π__1)= -H(1̊) + /2H(π_1̊) + H(o̊ν) + H(o̊) - _αlog ! + H(α)+/2 H/ξ̅+ /2log-̨1/2∑_y,y^'∈(y, y^') log((y, y^'))+∑_ y, y^'∈, x, x^'∈_1̊logE_1((x,y), (x^', y^'))()! = -J_1(1̊) +H(o̊ν) + H(o̊) - _αlog ! + H(α)+/2 H/ξ̅+ s⃗() - 2s()̨.This proves (<ref>). §.§.§ Proof of ^ = ^By the definitions in (<ref>) and (<ref>) we have^(ρ) =^(ρ)= ∞whenever eitherρ_o, deg≠αorρ∉. On the other hand, whenρ∈withρ_o, deg = α,Lemma <ref> together with (<ref>) implies that ^(ρ) = H(1̊_1) - /2 H(π_1̊π__1) + J_1(ρ) - Σ_, o̊(ρ).Since1̊has finite support (becauseρ_o, deg = α), from the definition ofJ_1(1̊)in (<ref>), it follows thatJ_1(1̊) >-∞. Hence, the last display together withLemma <ref> and Lemma <ref> imply^(ρ) = ^(ρ). §.§ Proof of ^ = ^ and ^ = ^ §.§.§ Preliminary lemmasWe first prove a preliminary lemma, which shows a similar statement as in Lemma <ref> in the case of therandom graph sequence.Let β > 0. Let ρ∈ and suppose that < ∞. ThenJ_1(1̊) = s⃗() + H(o̊) + H(o̊ν) + /2 H/ξ̅ - (H(1̊_1) - /2 H(π_1̊)).The proof proceeds similar to that of Lemma <ref> in thecase, and we shall only indicate the main differences. 
Since < ∞, by Lemma <ref>, we have H(1̊) < ∞. Using (<ref>) and (<ref>), we get H(1̊_1) = -H(1̊) - ∑_τ∈1 1̊(τ) log _1(τ). For any τ∈1 with root mark x, using the definition of _1, we have _1(τ) = ν(x) ∏_y,y^'∈, x^'∈ exp{-̱̅ξ(y, y^') ν(x^')} (̱̅ξ(y, y^') ν(x^'))^E_1((x, y), (x^', y^'))(τ)/E_1((x, y), (x^', y^'))(τ)!. Using this expression, similarly to (<ref>), we arrive at -∑_τ∈1 1̊(τ) log _1(τ) = I_1 + I_2 + I_3, where I_2 and I_3 are as given in (<ref>) and (<ref>), respectively, with $̨ replaced by $̱, and I_1 := -∑_x ∈ o̊(x) log ν(x) + ∑_x ∈ o̊(x) ∑_y, y^'∈, x^'∈ ̱̅ξ(y,y^') ν(x^') - ḻo̱g̱.̱ Note that, since and are finite sets, I_1 < ∞. Also, since E_1((x, y), (x^', y^'))(τ) ≤ deg_τ(o), and since _ρ[] = <̱∞, we have I_2 < ∞. If ρ is such that J_1(1̊) = -∞, then from the definition of J_1 in (<ref>) and the finiteness properties established in Lemma <ref>, it follows that I_3 = ∞. Therefore, from (<ref>) and (<ref>), we get H(1̊_1) = ∞. Hence, both sides of (<ref>) evaluate to -∞ when J_1(1̊) = -∞. Therefore, in the rest of the proof, we may assume that J_1(1̊) > -∞, which, by Lemma <ref>, yields I_3 < ∞. Consider I_1 in the previous display. Since ∑_x ∈ o̊(x) ∑_y, y^'∈, x^'∈ ̱̅ξ(y,y^') ν(x^') = $̱, using (<ref>) and (<ref>), we get I_1 = H(ρ_o ν) + H(ρ_o) + -̱ ḻo̱g̱.̱ Together with the expression of I_2 from (<ref>) with $̨ replaced by $̱, the definition of I_3 from (<ref>), (<ref>) and (<ref>), we obtain H(1̊_1) = -H(1̊) + H(ρ_o ν) + H(ρ_o) + -̱ ḻo̱g̱ - ∑_y,y^'∈(y,y^') log(ξ̅(y,y^')) - ∑̱_y, y^'∈, x, x^'∈ log(ν(x^')) π_1̊((x,y), (x^', y^')) + ∑_y, y^'∈, x, x^'∈ _1̊ log E_1((x,y), (x^', y^'))()!. Substituting for the sixth term on the right-hand side above from (<ref>) of Lemma <ref>, using the identity (<ref>) for H(1) from Lemma <ref>, and using the definitions of J_1 in (<ref>) and of s()̱ and s⃗() from Definition <ref>, it follows that H(1̊_1) - /2 H(π_1̊) = -J_1(1̊) + H(o̊ν) + H(o̊) + /2 H/ξ̅ + s⃗(). Rearranging the above yields (<ref>), and thus completes the proof of the lemma. Next, we show that _=̱_,̱, where _$̱ and _,̱ are defined in (<ref>) and (<ref>), respectively. Let β > 0. We have _(̱ρ) = _,̱(ρ) for all ρ∈(). We consider three cases.

Case I: Let ρ∈() ∖. By the definition of _,̱, we have _,̱(ρ) = ∞. Also, by Remark <ref>, we have Σ_, o̊(ρ) = -∞. This shows _(̱ρ) = _,̱(ρ) whenever ρ∈() ∖.

Case II: Let ρ∈ be such that = ∞. In this case, by the definition of Σ in (<ref>) and the definition of J_1 in (<ref>), we see that Σ_, o̊(ρ) = -∞. Therefore, _(̱ρ) = ∞. On the other hand, by Lemma <ref>, we have H(ρ_o, deg _o, deg) = ∞. Therefore, by the chain rule and non-negativity of relative entropy, we have H(1̊_1) ≥ H(ρ_o, deg _o, deg) = ∞. Together with Lemma <ref> and (<ref>), this shows that _,̱(ρ) = ∞. Hence, _(̱ρ) = _,̱(ρ) whenever ρ∈ is such that = ∞.

Case III: Let ρ∈ be such that < ∞. Recall from (<ref>) that Σ_, o̊(ρ) = lim_h →∞ J_h(h̊). First, suppose that J_1(1̊) = -∞. Since J_h(h̊) is non-increasing in h <cit.>, it follows that J_h(h̊) = -∞ for all h ≥ 2, and hence, from (<ref>), we get Σ_, o̊(ρ) = -∞. Therefore, _(̱ρ) = ∞. On the other hand, by Lemma <ref>, J_1(1̊) = -∞ implies that H(1̊_1) = ∞. Therefore, together with Lemma <ref>, we have _,̱(ρ) = ∞. Next, suppose that J_1(1̊) > -∞.
In this case, by Lemma <ref> andLemma <ref>, it followsthat Σ_, o̊(σ) = s⃗() + H(o̊) + H(o̊ν) + /2 H/ξ̅ - (H(1̊_1) - /2 H(π_1̊π__1)) - ∑_h ≥ 2 H(ρ_hρ^*_h) - /2 H(π_ρ_hπ_ρ^*_h).Using the above display, the definitions of _(̱ρ) and _,̱(ρ) in (<ref>) and (<ref>) respectively, and Lemma <ref>, it follows that _,̱(ρ) = _(̱ρ).This completes the proof of the lemma §.§.§ Proof of ^ = ^ and ^ = ^Since >̨ 0, by Lemma <ref> and (<ref>), we have _=̨_,̨ = ^.For therandom graph sequence, substituting into (<ref>)the identity _'̱ = _'̱, '̱ when '̱ > 0 from Lemma <ref>, using the definition of _'̱, '̱ when '̱ = 0 from (<ref>), and invoking the definition of ^ from (<ref>), it follows that ^ = ^.We can now complete the proof of Theorem <ref>. The theorem follows from Theorem <ref> together with the facts ^ = ^, ^ = ^ and ^ = ^ established in Sections <ref> and <ref>. As a consequence of our results, we also arrive at an alternative expression for the microstate entropy in terms of the rate function _,̱ in (<ref>).Let ρ∈() be such that 0 < :̱=_ρ[] < ∞. ThenΣ_, o̊(ρ) = H(o̊) + H(o̊ν) +/2 H/ξ̅ +s⃗() - _,̱(ρ).This is an immediate consequence of the LDP for therandom graph sequence established in Theorems <ref>, <ref> and <ref>.§.§ Proof of Theorem <ref>We now prove Theorem <ref> using Theorem <ref>. Consider the projection map ϖ : () →(1) that maps ρ to its depth 1 marginal ρ_1. We first show that this map is continuous.The map ϖ: () →(1) is continuous.Let ρ^n →ρ in () as n →∞, i.e., for any f ∈ C_b(), we have∫_ f dρ^n →∫_ f dρ asn →∞.Define μ^n = ϖ(ρ^n), n ∈, and μ = ϖ(ρ). Let g ∈ C_b(1). Using f(G) = g(G_1), G ∈, note that∫_ fdρ^n = ∫_1 g dμ^n, n ∈;and∫_ f dρ = ∫_1 g d μ,and hence μ^n →μ in (1) as n →∞. This shows that ϖ is continuous.Recall the one-step extension from Definition <ref>. To prove Theorem <ref>, we will also need to define the full unimodular extension (see Remark <ref>). Let h ∈, and let h̊ be admissible.First, we sample ∈h with law h̊ (i.e., () = h̊). If = 0, we stop. If ≥ 1, independently for eachv ∈ N_o(), sample^'∈h× with law h(_h-1,_h-1)(·) and replace _h-1 with ^' to obtain a h+1 random element.We repeat this construction independently for the vertices in _2– that is, for each vertexv ∈_2, we (independently) sample ^”with law h((v ∖ p(v))_h-1, (p(v) ∖ v)_h-1) and replace (v ∖ p(v))_h-1 with ^”. This results in a h+2 random element. We repeat this construction independently for all vertices in _3, _4, etc.We define h as the law of the resultantrandom element.Note that h = (h-1)_h, where h is defined in (<ref>).We now prove Theorem <ref>.Consider L_n^. Note that L_n^ = ϖ(U^_n).By Lemma <ref>,ϖ is continuous. Since {U_n^} satisfies the LDP on ()with rate function ^ (by Theorem <ref>), it follows from the contraction principle (e.g., <cit.>) that the sequence {L^_n} satisfies the LDP with rate function (1) ∋μ↦inf{^(ρ) : μ = ϖ(ρ)}.On the one hand, if μ is either not admissible or μ_o, deg≠α, thenfor any ρ such that ϖ(ρ) = μ, by Remark <ref>, we have either ρ∉ or ρ_o, deg≠α. In this case, the infimum in the previous display is ∞. Recalling the definition of ^ in (<ref>), we have ^(μ) = ∞. On the other hand, if μ is admissible and ρ_o, deg = α, then the above infimum is attained by 𝖴𝖦𝖶𝖳(μ). Indeed, by the definitions in (<ref>), (<ref>), (<ref>), and (<ref>), the above infimum is larger than ^(μ). If ρ = 𝖴𝖦𝖶𝖳(μ), then h̊ = h for all h ≥ 2. In particular, h = h, and by (<ref>) and (<ref>), it follows that_h = _h = h = h̊ for all h ≥ 2, whence^(ρ) = ^(μ). 
Therefore, we conclude that the above infimum is attained by 𝖴𝖦𝖶𝖳(μ). Hence, the map in the above display coincides with the function^, and it follows that {L_n^} satisfies the LDP on (1) with rate function ^. Using analogous arguments, we also conclude the LDP for the families{L_n^} and {L_n^} on (1) with rate functions ^ and ^, respectively. This completes the proof of the theorem. § FINITENESS OF VARIOUS ENTROPIES: PROOF OF LEMMA <REF> We now prove the finiteness of entropies and relative entropies involving ρ_h, ρ_h^*, ρ̅_h and ρ̅_h^*, as stated in Lemma <ref>. Throughout this appendix, we use the definitions introduced in Section <ref>. We will also appeal to the identity h = h proved inLemma <ref>. Note that Lemma <ref> does not depend on any result in Section <ref>, and hence there is no circular reasoning in replacing h by h throughout this section. To prove Lemma <ref>, we first establish an alternative expression for the conditional laws h introduced in (<ref>)in terms of the quantity E_h defined in (<ref>). To this end, we introduce the following definition. For h ∈, τ∈h× and τ^'∈h-1×, let τ⊕τ^' denote the h element formed by attaching τ^' to the root of τ.See Figure <ref> for illustration. Also, note that the degree of the root in τ⊕τ' is one more than the degree of the root of τ, that is,deg_τ⊕τ'(o) = 1+deg_τ(o), τ∈h×, τ' ∈h-1×.Let h ∈. If h̊∈(h) is admissible, then for any τ, τ' ∈h-1× with h(τ, τ') > 0, we haveh(τ, τ^')() = {_h-1= τ}/h(τ, τ^')h̊(⊕τ^') E_h(τ^', τ)(⊕τ^'),∈h×.For convenience, we rewrite the definition of h in (<ref>) below:h(τ, τ')() = h̊_h =| _h-1 = τ, _h-1 = τ',∈h.If _h-1≠τ, it is clear that the right-hand side above as well as the right-hand side of (<ref>) is equal to 0. Thus, we can assume that _h-1 = τ. In this case, by the last display,h(τ, τ')() = h̊_h = , _h-1 = τ'/h̊_h-1 = τ, _h-1 = τ'.Consider the denominator first. Using the condition h(τ, τ') > 0, the admissibility of h̊ (which implies h(τ', τ) > 0), the definition of h in(<ref>), the identity =̱_ρ, and (<ref>), we obtainh̊_h-1 = τ, _h-1 = τ'/h_h-1 = τ, _h-1 = τ' = /deg_τ(o) + 1.Together with Definition <ref>(i) this impliesh̊_h-1 = τ, _h-1 = τ'= h(τ', τ) /deg_τ(o)+ 1.Next, to simplify the numerator of the right-hand side of (<ref>), note that the h random element , labeled according to the Ulam-Harris-Neveu scheme (see Remark <ref>), satisfies (_h = , _h-1 = τ')if and only if (_h-1 = τ, _h-1 = τ') andis isomorphic to the (unlabeled) tree ⊕τ'. Hence, using Remark <ref>, we see thath̊_h = , _h-1 = τ' = E_h(τ', τ)(⊕τ')/deg_τ(o) + 1h̊⊕τ'.Substituting the previous two displays into (<ref>), invoking the admissibility of h̊ and using Lemma <ref>, we arrive at (<ref>).Let >̱ 0. Let ρ∈ and suppose that < ∞. By <cit.>, we have H(h̊) < ∞ for all h ∈. This shows <ref>. Also, for h ≥ 2,by <cit.>, we have- ∑_τ∈hh̊(τ) logh(τ)< ∞.Then <ref> follows on observing that<ref> and the above display implyH(h̊h) = -H(h̊)- ∑_τ∈hh̊(τ) logh(τ)<∞. We now turn to the proof of<ref>. Since 1 is a probability measure on the finite set (×) × (×), it follows that H(1) < ∞. Let h ≥ 2. By Lemma <ref>, we know that h = h.We firstshow that- ∑_τ, τ^'∈h-1×h(τ, τ^') logh(τ, τ^')<∞.Let τ, τ^'∈h-1×. From the definition of h in (<ref>),Lemma <ref>, and the definition of h in (<ref>), we haveh(τ, τ^') = 1/∑_τ̃: E_h(τ, τ^')(τ̃) ≥ 1h(τ̃)E_h(τ, τ^')(τ̃) ≥1/h̊-̊1̊(τ^'⊕τ_h-2)h-1(τ_h-2, τ^'_h-2)(τ),where h-1 is given by (<ref>). 
Using the expression for h-1(τ_h-2, τ^'_h-2) in (<ref>), andnoting thatE_h-1(τ^'_h-2, τ_h-2)(τ⊕τ_h-2^') ≥ 1, the above display becomesh(τ, τ^') ≥1/^̱2 h-1(τ_h-2, τ^'_h-2)h̊-̊1̊(τ^'⊕τ_h-2) h̊-̊1̊(τ⊕τ^'_h-2).Also since h̊ is admissible h(τ, τ') = h(τ', τ) for all τ, τ' ∈h-1. Together with (<ref>) thisimplies that- ∑_τ, τ^'∈h-1× h(τ, τ^') logh(τ, τ^')≤2 log+̱∑_τ, τ^'∈h-1×h(τ, τ^') logh-1(τ_h-2, τ^'_h-2) - ∑_τ, τ^'∈h-1×h(τ, τ^') logh̊-̊1̊(τ^'⊕τ_h-2) - ∑_τ, τ^'∈h-1×h(τ, τ^') logh̊-̊1̊(τ⊕τ^'_h-2)≤ 2 log-̱ 2 ∑_τ, τ^'∈h-1×h(τ, τ^') logh̊-̊1̊(τ^'⊕τ_h-2),where the last inequality follows since ∑_τ, τ^'∈h-1×h(τ, τ^') logh-1(τ_h-2, τ^'_h-2) = -H(h-1) ≤ 0.Observe that for any τ_h-2∈h-2× and τ^'∈h-1×, using (<ref>) and the monotone convergence theorem, we have∑_∈h-1×:_h-2 = τ_h-2h(, τ^')=1/_h̊∑_∈h-1×:_h-2 = τ_h-2 E_h(, τ')().Note that for any τ”∈h, by using the definition of E_h from (<ref>) and Remark <ref> for the first equality below, and Definition <ref> (also see Figure <ref>) for the second equality, we obtain∑_∈h-1×:_h-2 = τ_h-2 E_h(, τ')(τ”) = E_h(τ_h-2, τ')(τ”) =E_h(τ_h-2, τ')(τ' ⊕τ_h-2) if τ”_h-1 = τ' ⊕τ_h-2,0 otherwise, ≤deg_τ' ⊕τ_h-2(o)if τ”_h-1 = τ' ⊕τ_h-2,0 otherwise.Together with (<ref>) and the property (<ref>), this implies that- ∑_τ, τ^'∈h-1× h(τ, τ^') logh̊-̊1̊(τ^'⊕τ_h-2)≤ -∑_τ_h-2∈h-2× τ^'∈h-1×1+deg_τ'(o)/h̊-̊1̊( τ^'⊕τ_h-2) logh̊-̊1̊( τ^'⊕τ_h-2) =-1/∑_τ̃∈h-1deg_τ̃(o) h̊-̊1̊(τ̃) logh̊-̊1̊(τ̃),where the last equality uses Definition <ref>, the property (<ref>), and the fact that the set {τ' ⊕τ”: τ' ∈h-1×, τ”∈h-2×} coincides with the set h-1. Since H(h̊-̊1̊) < ∞ and < ∞, it follows from <cit.> that -∑_τ̃∈h-1deg_τ̃(o) h̊-̊1̊(τ̃) logh̊-̊1̊(τ̃)< ∞.Combining(<ref>),(<ref>), and(<ref>), we arrive at (<ref>). Note that H(h) = - ∑_τ, τ^'∈h-1×h(τ, τ^')logh(τ, τ^') = - ∑_τ, τ^'∈h-1×h(τ, τ^') logh(τ, τ^') - H(hh),which is finite by the non-negativity of relative entropy and (<ref>). Since h = h by Lemma <ref>, this proves <ref>.Finally, we show <ref>. Let h ≥ 2. We haveH(hh) = ∑_τ, τ^'∈h-1×h(τ, τ^') logh(τ, τ^')/h(τ, τ^') = -H(h) - ∑_τ, τ^'∈h-1×h(τ, τ^') logh(τ, τ^'),which is finite by the non-negativity of Shannon entropy and (<ref>). Since h = h from Lemma <ref>, this proves <ref>. § PROOF OF LEMMA <REF>As in Appendix <ref>, we will refer to quantities defined inSection <ref>, and usethe identity π_μ = from Lemma <ref>. Note that Lemma <ref> does not depend on any result in Section <ref>, and hence there is no circular reasoning in replacingby π_μ. §.§ Proof of the first assertion of Lemma <ref>By (<ref>), showing _1 ∈(1) is equivalent to showing _η_1∏_v ∈ N_o()d1/d (_0)= 1.We first show (<ref>) with 1̊ =μ, 1 = 1 = π_μ and =, the latter as justified by Lemma <ref>. By Remark <ref>(1), for i ≥ 1, we have_η_1∏_v ∈ N_o()dπ_μ^1/dπ_η_1^1(_0) |= i= ∏_v ∈ [i]_η_1dπ_μ^1/dπ_η_1^1(_0) |= i .If () = η_1, by the independence properties in Remark <ref>(1), for any v ∈ [i], the conditional law of _0 given = i ≥ 1 is equal to π_η_1^1. Hence, we have_η_1dπ_μ^1/dπ_η_1^1(_0) |= i= _π_η_1^1dπ_μ^1/dπ_η_1^1 = 1.Together with (<ref>) and the convention that a product over the empty set is identically 1, this implies that _η_1∏_v ∈ N_o()dπ_μ^1/dπ_η_1^1(_0) |= 1 η_1-a.s.Then (<ref>) follows on taking expectations of both sides, and using = and π_μ = from Lemma <ref>. We now turn to the proof of _1 = ∈(1), which by (<ref>) is equivalent to showing_η_1∏_v ∈ N_o()d1/d(_0 | _0)= 1.We show (<ref>) with 1̊ =μ, 1 = 1 = π_μ and =. 
Define ∈((×)× (×)) by(τ, τ^') := π_η_1^o(τ') π_μ^1|o(τ | τ'),τ, τ' ∈×,where ^o is the second marginal ofand π_μ^1|o is the conditional law of (1 ∖o) given (o ∖ 1) when () = μ. Note that, for any τ∈1 such that deg_τ(o) ≥ 1, by disintegration, we havedπ_μ^1|o/dπ_η_1^1|o(_0 | _0) = d/d(_0), _0) for all v ∈ N_o(τ).Therefore, using the properties of conditional expectations, we have_η_1 ∏_v ∈ N_o()dπ_μ^1|o/dπ_η_1^1|o (_0 | _0) = _η_1∏_v ∈ N_o()d/d(_0 , _0) = _η_1_η_1∏_v ∈ N_o()d/d(_0 , _0) | _0, u ∈ N_o().By Remark <ref>(2),it follows that_η_1 ∏_v ∈ N_o()d/d(_0 , _0)| _0, u ∈ N_o()= ∏_v ∈ N_o()_η_1d/d(_0 , _0) | _0.Note that the conditional joint law of ( _0, _0) given _0 = τ' when () = η_1 is the same as the conditional joint law of (, ') given ' = τ' when (, ') =; this follows from the independence properties in Remark <ref> and the definition of . Therefore, it follows that_η_1d/d(_0 , _0) | _0 = τ' = _d/d(, ')| ' = τ'. We now argue that the conditional expectation on the right-hand side of the above display is equal to 1-a.s. To this end, let A = {(σ, σ') ∈ (×)^2: σ' = τ'}. Then from the definition ofin (<ref>), we see that (A) = (A). Therefore, we have_{A} = _{A} = _d/d{A},which shows that_d/d(, ') | ' = 1 -a.s.When combined with (<ref>), (<ref>), (<ref>), and the relations = π_μ and = π_η from Lemma <ref>, this proves(<ref>).§.§ Proof of the second assertion of Lemma <ref>We first show that μ≪. Suppose that τ∈1 is such that (τ) = 0. From the definition ofin (<ref>), there exists some v ∈ N_o(τ) such that dπ_μ^1/dπ_η_1^1(_0) = 0 If we set τ^1 := _0 and τ^o := _0, then by (<ref>) it follows thatE_1(τ^1, τ^o)(τ) ≥ 1. On the other hand, the last display shows thatπ_μ^1(τ^1) = 0. Together with the definition of π_μ in (<ref>), note that0= π_μ^1(τ^1) ≥π_μ(τ^1, τ^o) =1/_μE_1(τ^1, τ^o)()≥1/μ(τ).This impliesμ(τ) = 0 and provesμ≪. Next, we show μ≪ using an analogous argument. Suppose that τ∈1 is such that (τ) =0. From the definitionofin (<ref>), there exists some v ∈ N_o(τ) such that dπ_μ^1|o/dπ_η_1^1|o(_0 | _0) = 0.Setting τ^1 := _0 and τ^o := _0, (<ref>) implies E_1(τ^1, τ^o)(τ) ≥ 1. Also,using the last display, we obtain dπ_μ/d(τ^1, τ^o)= dπ_μ^o/dπ_η_1^o(τ^o) ×dπ_μ^1|o/dπ_η_1^1|o(τ^1 | τ^o) = 0,and hence,π_μ(τ^1, τ^o) = 0. Together with the definition of π_μ in (<ref>), this implies0 =π_μ(τ^1, τ^o)= 1/_μE_1(τ^1, τ^o)()≥1/μ(τ),which shows that μ(τ) = 0. Therefore, we conclude that μ≪.§ PROOF OF LEMMA <REF>We now prove the four properties listed in Lemma <ref>. We shall use the covariance measure h introduced in Definition <ref> instead of the size-biased marginals h throughout this appendix. §.§ Proof of <ref>Suppose that h(τ) = 0 for some τ∈h. We proceed to show that then h̊(τ)= 0. From(<ref>), since the (h-1)-neighborhood marginals of both h̊ and h are the same, if h̊-̊1̊(τ_h-1) = 0, then we have h̊(τ) = 0. On the other hand, suppose that h̊-̊1̊(τ_h-1) > 0. By the definition of h in(<ref>), the fact thath(τ) = 0 implies there exists u ∈ N_o(τ) such thath-1(_h-2, _h-2)(_h-1) = 0.Set τ^1 :=_h-1 and τ^o := _h-2. Using the definition of h(τ^1, τ^o) in (<ref>), and noting that E_h-1(τ^o, τ^1_h-2)(τ^1 ⊕τ^o) ≥ 1, the last display implies that h̊-̊1̊(τ^1 ⊕τ^o) = 0. Define the function f = f_τ^1, τ^o : [ ; ] →_+ byf(, o_1, o_2) =1 ifo_1 ∈ N_(o_2),(o_2 ∖ o_1)_h-1 = τ^1and (o_1 ∖ o_2)_h-2 = τ^o,0 otherwise,for (, o_1, o_2) ∈[; ]. 
On the one hand, since f(τ, o, u) = 1 and u ∈ N_o(τ), we have _h̊∑_v ∈ N_o() f(, o, v)≥h̊(τ).On the other hand, we have _h̊∑_v ∈ N_o() f(, v,o)= _h̊∑_v ∈ N_o(){(o ∖ v)_h-1 = τ^1, (v ∖ o)_h-2 = τ^o}.Note that∑_v ∈ N_o(){(o ∖v)_h-1 = τ^1, (v ∖ o)_h-2 = τ^o}≥ 1if and only if _h-1 = τ^1 ⊕τ^o.Substituting this back into (<ref>), thennoting that τ^1 ⊕τ^o ∈h-1 and using (<ref>),it follows that_h̊∑_v ∈ N_o() f(, v,o)= _h̊-̊1̊∑_v ∈ N_o() f(, v,o) =h̊-̊1̊(τ^1 ⊕τ^o) E_h-1(τ^o, τ^1)(τ^1 ⊕τ^o) =0 .Since ρ is unimodular, applying (<ref>) with f=f_τ^1, τ^o, we conclude that the left-hand sides of (<ref>) and(<ref>) are equal. Hence,h̊ (τ)= 0, which proves h̊≪h.Next, we show h≪h. Since h̊≪h, from the definition in (<ref>), we conclude thath∼h̊^1,o(·|≥ 1) ≪ (h)^1,o(·|≥ 1) ∼h.Here, `∼' denotes the equivalence of measures. This proves <ref> of Lemma <ref>.§.§ Proof of<ref> <ref> follows immediately from Lemma <ref> and Lemma <ref> below.For anyτ∈h-2× andτ^'∈h-1×, we have h(τ, τ^') = h(τ, τ^'). Furthermore, we have h^1 = h^1.Let τ∈h-2× andτ^'∈h-1×. We first show that h(τ, τ^') = h(τ, τ^'). For any ∈h, note from (<ref>) that E_h(τ, τ^')() is a function of _h-1.Hence, recalling from (<ref>) that the (h-1)-neighborhood marginal ofh is equal to h̊-̊1̊ and using the definition h in (<ref>), we see thath(τ, τ^') = 1/_h̊E_h(τ, τ^')() =1/_h̊-̊1̊E_h(τ, τ^')()=1/_hE_h(τ, τ^')()=h(τ, τ^').This shows the first assertion.Next, we show that h = h. Let τ∈h-1×. First taking the marginal, then using the admissibility of h̊, which holds by Remark <ref> due to the assumed unimodularity of ρ, and substituting the definition of h from (<ref>), this impliesh(τ) = ∑_τ^'∈h-1×h(τ, τ^') = ∑_τ^'∈h-1×h(τ^', τ) = ∑_τ^'∈h-1×1/_h̊E_h(τ^', τ)().On the other hand, since h is also admissible (which follows from the unimodularity of h in Definition <ref><cit.> together with the fact that h = (h-1)_h and Remark <ref>),(<ref>) also holds with h̊ replaced with h. Since E_h(τ, τ^')(·) ≥ 0,by monotone convergence,_h̊∑_τ^'∈h-1× E_h(τ^', τ)()= ∑_τ^'∈h-1×_h̊E_h(τ^', τ)().Thus, to complete the proof, it suffices to show that _h̊∑_τ^'∈h-1× E_h(τ^', τ)()= _h∑_τ^'∈h-1× E_h(τ^', τ)() .But this is an immediate consequence of the fact that ∑_τ^'∈h-1× E_h(τ^', τ)() depends ononly through _h-1, and the fact that (<ref>) implies (h)_h-1 = h̊-̊1̊.§.§ Proof of<ref>Since _h ≪h by (<ref>), in orderto show that _h is a probability measure, it suffices to show that _hd_h/dh () = 1.First note that_hd_h/dh()= _h_hd_h/dh() | _h-1.The definition of h in (<ref>) showsthat, if()= h,the random variables(_h-1, _h-1), v ∈ N_o()are conditionally independent given _h-1. Therefore, using the definition of _h in (<ref>) and the fact that h = h, we conclude that_hd_h/dh() | _h-1= ∏_v ∈ N_o()_hdh/dh (_h-1, _h-1) | _h-1.Let τ := _h-2 and τ' := _h-2. On the one hand, from the definition of h in (<ref>), under h, the conditional joint law of (_h-1, _h-1) given _h-1 is h-1(τ, τ^')(_h-1), which by (<ref>) is equal to 1/h-1(τ, τ^')h̊-̊1̊(_h-1⊕τ') E_h-1(τ^', τ)(_h-1⊕τ^'),where E_h is given by (<ref>). On the other hand, if (, ') has law h, we haveh( |(_h-2, '_h-2) = (τ,τ' )) = h(, τ')/h(τ, τ^') = h(, τ')/h-1(τ, τ^') = h(τ', )/h-1(τ, τ^'),where the second equality above follows from the fact that τ and τ' are h-2× elements and the (h-1)-neighborhood marginal of h equals h̊-̊1̊ by (<ref>), and the third equality holds since h is admissible. 
Note that, since _h-2 = τ, we haveE_h(τ', _h-1)(_h-1⊕τ') = E_h(τ', τ)(_h-1⊕τ').Therefore, from<ref> of Lemma <ref>, the definition of h in (<ref>), and the fact that _h-1⊕τ' ∈h-1, we obtainh(τ', _h-1) = h(τ', _h-1)= 1/h̊(_h-1⊕τ') E_h(τ', _h-1)(_h-1⊕τ')= 1/h̊-̊1̊(_h-1⊕τ') E_h-1(τ^', τ)(_h-1⊕τ').Together with (<ref>) this yields the relationh( = _h-1|(_h-2, '_h-2) = (τ, τ^'))= 1/h-1(τ, τ^')h̊-̊1̊(_h-1⊕τ') E_h-1(τ^', τ)(_h-1⊕τ').Let τ”= _h-1. By the definition of h in (<ref>), if() = h, for any v ∈ N_o(), the random variables _h-1∖_h-2 and _h-1∖_h-2 are conditionally independent given _h-1. Therefore, noting that h is the law of (_h-1, _h-1) when () = h (see Lemma <ref>), if (, ') = h,the random variable is conditionally independent of ' given _h-2 = τ and '_h-2 = τ'.Hence, from(<ref>) and (<ref>), it follows that the conditional joint law of (_h-1, _h-1) given _h-1 when () = h equals the conditionaljoint law of (, ') given (_h-2, ') = (τ, τ”) when (, ') = h. This implies_hdh/dh (_h-1, _h-1) | _h-1= _hdh/dh (, ') | (_h-2, ') = (τ, τ”). We now show that the above conditional expectation equals 1h-a.s. Let A = {(σ, σ') ∈ (h-1×)^2 : σ_h-2 = τ, σ' = τ”}. From<ref> of Lemma <ref>, it follows that h(A) = h(A). Therefore, _h{A}= _h{A}=_hdh/dh{A}, which shows that_hdh/dh (, ') | (_h-2, ') = 1 h-a.s.Hence, using (<ref>) and the above display, (<ref>) becomes _hd_h/dh() | _h-1 =1.Together with(<ref>) this implies (<ref>), as desired. §.§ Proof of<ref>Suppose that _h(τ) = 0 for some τ∈h. From the definition of _h in (<ref>), there exists u ∈ N_o(τ) such that dh/dh(_h-1, _h-1) = 0.Let τ^1 := _h-1 and τ^o := _h-1.Then E_h(τ^1, τ^o) (τ) ≥ 1 by (<ref>). Together with the last display, this implies that0 = h(τ^1, τ^o) = 1/_h̊E_h(τ^1, τ^o)()≥1/h̊(τ). Thus, h̊(τ) = 0, and it follows that h̊≪_h. plain
http://arxiv.org/abs/2312.16103v1
{ "authors": [ "Kavita Ramanan", "Sarath Yasodharan" ], "categories": [ "math.PR", "60F10, 05C80" ], "primary_category": "math.PR", "published": "20231226161304", "title": "On the large deviation rate function for marked sparse random graphs" }
Anticipated Network Surveillance - An extrapolated study to predict cyber-attacks using Machine Learning and Data Analytics
[1]Aviral Srivastava [email protected]
[2]Dhyan Thakkar [email protected]
These authors contributed equally to this work.
[3]Dr. Sharda Valiveti [email protected]
[4]Dr. Pooja Shah [email protected]
[5]Dr. Gaurang Raval [email protected]
*[1]Department of Computer Science, Amity University, Jaipur, 303002, Rajasthan, India
[2]Department of Electronics and Communications, Nirma University, Ahmedabad, 382470, Gujarat, India
[3]Department of Computer Science, Nirma University, Ahmedabad, 382470, Gujarat, India
Machine learning and data mining techniques are utilized to enhance the security of any network. Researchers have used machine learning for pattern detection, anomaly detection, dynamic policy setting, etc. These methods allow a program to learn from data and make decisions without human intervention, at the cost of long training periods and considerable computation power. This paper discusses a novel technique to predict an upcoming attack in a network based on several data parameters. The dataset is continuous in a real-time implementation. The proposed model comprises dataset pre-processing and training, followed by a testing phase. Based on the results of the testing phase, the best model is selected and used to extract the event classes that may lead to an attack. The event statistics are then used for attack prediction. The implementation of the proposed architecture is evaluated against accuracy, precision, recall, and F1 score on the processed data.
§ INTRODUCTION In this age of the metaverse, society is witnessing an exponential growth of internet traffic. This leads to the development of indigenous applications, the deployment of high-speed networks, and, thereafter, the securing of applications, data, and networks. As network usage increases, sophisticated attacks also increase significantly <cit.>. These varying attack scenarios give more avenues to the attackers <cit.>. The alerts generated by an intrusion detection system are not manageable by security administrators in real time <cit.>. Hence, securing a network by identifying problems dynamically is critical.
Advances in machine learning and data analysis have enabled researchers to automate the extraction of significant information from the vast data repositories fed by various sensors (detection systems) across an organization <cit.>. Hence, threat intelligence and preparation for attack scenarios become necessary <cit.>.
§.§ Major contributions Through this paper, the following outcomes are achieved:
* An exhaustive study of various techniques and relevant datasets for developing threat intelligence in a network.
* Proposing a novel threat intelligence architecture model with the intent to anticipate network attacks.
* Summarizing the performance of different methodologies to implement the said architecture.
* Analyzing the performance of the proposed architecture on various datasets.
* Anticipating the next attack by extracting the key features and analyzing the space of possible events.
The paper is organized as follows: Section I <ref> introduces the proposed concept, Section II <ref> briefs the work done by researchers in this domain, and Section III <ref> discusses the proposed solution followed by implementation results in Section IV.
Conclusions are mentioned in Section V, followed by references in Section VI.
§ LITERATURE SURVEY This section acknowledges the contributions of various researchers to the threat intelligence domain. A precise yet comprehensive survey of various datasets, intrusion prediction techniques, and mathematical, theoretical, and graphical models is presented to ground the implementation of the proposed threat intelligence system. This survey motivated the design of the architecture proposed to anticipate attacks in the network.
§.§ Datasets The candidate datasets for the research at hand are various public network logs. They are available for different types of networks, with several attacks incorporated within the logs. Details of some of the most prevalent datasets are discussed in <cit.>. Some specific characteristics of these datasets are shown in <ref>. The study concludes that the datasets of the DARPA series and KDDCup99 were created using primitive traffic and do not offer good estimates of modern network traffic. However, UNSW-NB15 and CICIDS offer better diversity of records and mimic modern real-time network traffic. Thus, they are better suited to the proposed work.
§.§ Threat Intelligence Recent innovations in cyber threat intelligence through machine learning techniques are surveyed in <cit.> using use cases of prediction and forecasting in the field of cyber security. Researchers have worked in the domain of intrusion detection systems (IDS) for various types of networks, offering various applications with customization for organizational requirements. These intrusion detection systems are classified as misuse- or pattern-based IDS <cit.>, anomaly-based IDS <cit.>, and specification-based IDS <cit.>. The efficiency of any IDS is generally measured in terms of a high detection rate with reduced false alarms <cit.>. Alerts are logged and processed. In contrast, attack anticipation aims to predict an attack based on past studies, extrapolation work, and data analytics <cit.> <cit.>. This subsection hence surveys the work leading to attack prediction.
§.§ Intrusion Prediction "One of the main tasks of intrusion prediction is to detect the novel attacks" <cit.>,<cit.>. Apart from detecting zero-day attacks, intrusion prediction also includes the prediction of vulnerabilities, attack propagation <cit.>, multi-step attacks <cit.>, and other network security events. While attack projection relies substantially on dedicated models and algorithms of network attacks, there is an abundance of models and algorithms used for attack prediction. These methods employ different interpretations of network data, which vary from discrete models such as attack graphs <cit.> or event trees to continuous approaches such as time series methods <cit.>, e.g., ARIMA, ARMA, and GARMA. This makes prediction of attacks and their types possible with the assistance of the discrete models that are used for attack projection <cit.>. This variation in the prediction starting point aims at considering the odds of exploitation of a particular vulnerability in the network rather than only an observed malicious event. The continuous, time-series-based models represent several attacks with reference to network time and request features. The time series then helps to predict whether an attack is likely to happen; a hedged sketch of such a forecast is given below.
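As an illustration of the continuous approach, the following minimal sketch fits an ARMA-family model to an hourly attack-count series and forecasts the next few hours. It is not any cited author's implementation: the synthetic series, the model order, and the use of the statsmodels package are assumptions made purely for demonstration.

```python
# Hedged sketch: forecasting hourly attack counts with an ARMA-family model.
# The synthetic series stands in for counts aggregated from IDS alert logs.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=14 * 24, freq="h")
# Synthetic diurnal attack pattern: a daily cycle plus Poisson noise.
baseline = 20 + 10 * np.sin(2 * np.pi * hours.hour / 24)
counts = pd.Series(rng.poisson(baseline), index=hours)

model = ARIMA(counts, order=(2, 0, 1))  # ARMA(2,1); order chosen for illustration
fitted = model.fit()

forecast = fitted.get_forecast(steps=6)
print(forecast.predicted_mean)       # expected attack counts for the next 6 hours
print(forecast.conf_int(alpha=0.1))  # 90% intervals; hours breaching a threshold can be flagged
```

Hours whose forecast interval crosses an operator-defined threshold can then be flagged for pre-emptive hardening.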
These continuous approaches often include the usage of non-technical data along with network logs to predict intrusions, for example through sentiment-analysis-based methods applicable to social media networks <cit.>. Changes in user behaviour can also be monitored to cope with the unpredictability of cyber-attacks.
§.§ Forecasting in Network Security Situation forecasting is given priority in cyber security to warn the global community against unpleasant events <cit.>. As per Leau et al. <cit.>, knowing the complete state of a system or a given network helps shift the focus away from a single attacker in the network. Perception, comprehension, and projection are the three levels of network security situation forecasting <cit.>. The majority of papers employ quantitative analysis for describing the present state of security <cit.>. The results obtained through quantitative analysis are used to generate a network vulnerability model in the next phase. No information about the precise identification of future attacks is provided by this method. It can, however, issue alerts about a general increase or decrease in the security of the network in the future. Quantitative as well as qualitative analysis necessitates the use of a metric for assessing the security of a network <cit.>. In the beginning, each host's security situation is assessed. Later, the results for each host are factored along with the host's weight and summed up to get the network's overall security. Authors use different methods for estimating host security. The host's importance is usually expressed by its weight. The quantity of attacks and the damage incurred by them on the entire network determine the overall intensity. This prediction can thus serve as a forewarning of an impending change or variation in the number of attacks as well as their types. The study suggests that, since both the inputs and the predicted values are numerical, almost all the models used to forecast network security situations are continuous models.
§.§ Prediction methods in cyber security – A taxonomy The methods adopted for attack prediction are enunciated in this subsection. Several approaches <cit.>, as mentioned in <ref>, are proposed for putting the methods into different categories, as shown below:
* Representing an attack or network security situation using theoretical and formal methods.
* Methods based on continuous models.
* Methods employing machine learning and data mining.
§.§.§ Probabilistic models This category of models explores attack prediction methods using probabilistic graph-based models such as directed attack graphs, Bayesian networks, and Markov models.
§.§.§ Attack Graph Based An attack graph was a pioneering method for predicting attacks <cit.>. It is represented as G = (States (S), transition relations (r), initial state set (S0), success state set (Ss)). Transition relations, r, are delineations of the possible actions which might be used by an attacker, with probabilistic weights statistically defined by the attacker's actions. The weights of the paths are then used to predict the path with maximum probability. An attack is considered successful if the actions of the attacker lead from the initial state to any element of the success state set. A factor graph is a variant of attack graphs <cit.>. A factor graph consists of random variables along with factor functions to determine the weights.
§.§.§ Bayesian Networks A Bayesian network is a probabilistic model that represents the variables and the relationships between them <cit.>.
The network is a Directed Acyclic Graph (DAG) where the random variables, along with their conditional probability weights, are enclosed in the nodes. Let G=(V,E) be a DAG, where X = (X_i, i ∈ V) is a set of random variables indexed by the vertices. Building the model, i.e., finding the most appropriate X for G, requires either expert knowledge about the network and its events or training using machine learning algorithms. The edge weights are computed using past records. Alert prediction using Bayesian networks uses the probabilities represented in the network (model). The probabilities predicted by the network represent the likelihood of a particular event. Low-probability alerts are filtered out by appropriate threshold setting. Only events with probability higher than the threshold are reported, and then appropriate defense mechanisms can be set. Okutan et al. <cit.> included signals unrelated to the target network for predicting attacks using a Bayesian network. Prediction accuracy ranges from 63% to 99%, thereby making the approach quite promising.
§.§.§ Markov Models Markov models are another popular tool for predicting attacks using model-checking prediction methods. Markov models, which include the well-known examples of Markov chains <cit.> and Hidden Markov Models <cit.>, are the popular categories. Being represented using graphs, various algorithms ranging from Bayesian networks to attack graphs can be employed to find insights. Markov models, in contrast to previously published methodologies, work effectively even when states and transitions cannot be observed directly, removing the need for intrusion detection and attack prediction algorithms to have comprehensive information. In the hidden Markov model (HMM), the state of the model cannot be inferred directly, as an HMM is modelled as a Markov process with unobserved (hidden) states. In an HMM, the outputs depend only on the current state. Sequential attacks involving pre-defined events may be detected by an IDS, which then raises alerts. With reference to HMMs, these alerts are considered the observable outputs of attack classes, since an IDS cannot detect all events. In order to construct a hidden Markov model from the attack sequences, the following steps are involved:
* Determine the cardinality of the state set, i.e., the total number of states in the model.
* Determine the number of different observation symbols of each state.
* Compute the transition probability distribution between the states.
* Create a distribution of initial states.
Observation probabilities and state transitions are to be extracted manually from records of events <cit.>. Farhadi et al. <cit.> proposed a sequential pattern mining approach for generating attack scenarios. These attack scenarios are then used for alert correlation and prediction instead of relying on the manual approach. The attack scenarios are represented using an HMM that is used for attack recognition. This process employs unsupervised machine learning and data mining. Abraham and Nair <cit.> have proposed a predictive framework based on Markov models for exploitability analysis.
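Before describing that framework further, the following toy sketch illustrates the four HMM construction steps listed above. Every state, alert name, and probability here is invented for demonstration; in practice they would be extracted from event records or learned from attack scenarios, as Farhadi et al. do.

```python
# Hedged sketch: a hand-specified HMM over attack stages, filtered with the
# forward algorithm to predict the next hidden attack stage from IDS alerts.
import numpy as np

states = ["recon", "exploit", "exfiltration"]             # step 1: state set
alerts = ["scan_alert", "login_alert", "transfer_alert"]  # step 2: observation symbols

A = np.array([[0.6, 0.3, 0.1],    # step 3: state transition probabilities
              [0.1, 0.6, 0.3],
              [0.1, 0.2, 0.7]])
B = np.array([[0.7, 0.2, 0.1],    # emission probabilities P(alert | state)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([0.8, 0.15, 0.05])  # step 4: initial state distribution

def filter_states(obs_seq):
    """Forward algorithm: P(current hidden state | alerts observed so far)."""
    alpha = pi * B[:, obs_seq[0]]
    alpha /= alpha.sum()
    for o in obs_seq[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

observed = [alerts.index(a) for a in ["scan_alert", "scan_alert", "login_alert"]]
belief = filter_states(observed)
next_stage = belief @ A   # one-step prediction of the attack stage
print(dict(zip(states, np.round(next_stage, 3))))
```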
In that framework, CVSS data has been processed and used to assess the life-cycle of vulnerabilities and predict their impact on the network.
§.§.§ Machine Learning and Data Mining Based Models The existing methods for applying machine learning techniques to attack prediction are summarized in <ref>. In most circumstances, machine learning and data mining methodologies are suitable for predicting network security situations. Forecasts of the quantity, volume, and constitution of attacks in the network, as well as their distribution over time, are common outcomes. Here, the work done using time-series-based methods is described, as continuous data is required to build the prediction model. Cyber-attacks can also be predicted using spatio-temporal patterns in time series <cit.>.
§.§.§ Time Series Models Time series analysis and forecasting is a tool for predictive analysis that is used in various applications. Time series are used frequently in intrusion and anomaly detection systems. A time series is used to represent regular network traffic patterns. As a result, any deviation from the expected value of network parameters (metrics/features) at a given moment is declared an anomaly. Historical records of a network phenomenon or statistic, represented numerically, are stored as the time series data <cit.>. In the case of cybersecurity and network surveillance, it can be the behavior pattern of an attacker, the requests made by the attacker, or a network security situation state. Many approaches to time series analysis use moving averages, which are calculated by creating a series of averages of subsets of the time series. Weighting and exponential smoothing allow the prediction model to better reflect the nature of the training data, which is the input time series. Abdullah et al. <cit.> proposed a GARMA model, and Pillai et al. <cit.> proposed an ARMA time-series model that was evaluated on live data collected from a honeynet. Zhan <cit.> compared long-term and short-term predictions of cyber-attacks with various parameters under observation. For this, GARCH and FARIMA time series models were used, and about 88% accuracy was observed with a prediction window of one hour, i.e., the attack was to take place after one hour.
§.§.§ Machine learning Models Methods that apply machine learning and data mining have found real-world applications, allowing the program to learn from data and make decisions without human interference, except to fine-tune hyperparameters for seeking desired outcomes, either in supervised or in unsupervised mode. Long training times and huge computation power are mandated. Machine learning models can be built on a dataset to regress, cluster, or classify. Zhang <cit.> illustrated that, at times, a simple network outperforms a complex network, as there might be data leaks or a lack of proper data pre-processing steps in place. In <cit.>, the authors designed a classifier with about 90% accuracy to show that machine learning applications can be used for attack detection and may thus be extended to attack forecasting.
§.§.§ Tree Based Models In the past couple of years, there has been a surge in the usage of XGBoost models due to their versatility and performance on complex data <cit.>. The GPU support and performance of XGBoost models is an added advantage for quick and reliable pipeline design and development. The prevalence of random forests and boosting models depends on ensembling the weak predictions of many estimators; a hedged training sketch follows.
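The following is a minimal sketch of such a boosted-tree classifier, assuming the xgboost and scikit-learn packages. The feature matrix, labels, and hyper-parameter values are placeholders standing in for a pre-processed event dataset; they are not settings reported in this paper.

```python
# Hedged sketch: training an XGBoost classifier on labelled network events.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.integers(0, 10, size=(5000, 20))  # placeholder: discretized event features
y = rng.integers(0, 5, size=5000)         # placeholder: attack-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

clf = XGBClassifier(
    n_estimators=200,   # many weak tree estimators, ensembled
    max_depth=4,        # shallow trees as the weak learners
    learning_rate=0.1,
    objective="multi:softprob",
)
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```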
These estimators are decision trees built over limited features, which allows an ensemble of a large number of them to achieve such high performance.
§.§ Challenges Following are the challenges in predicting intrusions and mitigating their after-effects:
* Processing alerts manually is a very slow and error-prone activity.
* Identifying the point of attack is critical.
* Mitigating the issue beforehand, based on the point of attack, is mandated.
* The larger the network in place, the more complex the intrusion prediction and mitigation will be.
* Contemporary analytics must be applied to implement the most suitable learning techniques for real-time intrusion prediction.
Hence, the proposed architecture addresses the above-mentioned challenges and aims to provide an extrapolated, automatic, real-time intrusion prediction system.
§ PROPOSED ARCHITECTURE The proposed architecture automates the mundane task of managing intrusion alerts to identify the point of attack and then harden the network. The larger the network, the more complex the solution would be. Hence, an approach to automatically predict intrusions, with an explainable justification about the point of attack that triggers the attack, is proposed herewith. The process is divided into 4 phases as shown in <ref>.
§.§ Data Preprocessing The dataset is analyzed to remove outliers. This, in turn, allows a better understanding and a more transparent visualization of what the data has to convey. The machine learning models are then able to process the dataset more accurately. In the pre-processing phase, the corrupt entries are removed and new features are created. The new features are derived from the existing features. The focus of the work is on knowing about the attacks instead of the attacker. Hence, the columns related to the source and destination IP are dropped. Dropping the unwanted columns eliminates the horizontals while focusing on the verticals of the dataset. The classifier created for network events should hence be able to categorize which event is most likely to occur in the network.
§.§ Model Selection After the datasets are pre-processed, the model selection phase is initiated. The performance of various algorithms is compared on these datasets. For this task, artificial neural networks, as well as tree-based models like decision trees and random forests, are used. These methods are supplemented by adaptive boosting and gradient boosting techniques. Probabilistic methods like naive Bayes and Gaussian processes, along with curve-fitting methods like support vector classifiers and logistic regression, are used. Accuracy and F1 score are noted to ensure minimum false positives. The time taken to process the data also happens to be a parameter of concern, as the system is supposed to process huge datasets from modern network infrastructure. Through the implementation results, the model with a better F1 score and minimum time requirement is selected to predict the unknown events. The most efficient model is obtained by fine-tuning the hyper-parameters of the model to make it more specific to the network it is being trained for.
§.§ Feature Extraction At this point, a model that almost accurately classifies an event into its respective class has been created. These event classes can now be considered separate classes, and their features can be extracted. These prime features are nothing but the requirements for any event to most likely be a member of that class. This helps to create boundaries for the respective classes; a hedged sketch of this step is given below.
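One hedged way to realize this extraction step, anticipating the per-class principal component route described next, is sketched below. The variables X and y stand for the pre-processed feature matrix and attack-class labels from the training sketch above, and all parameter values are illustrative.

```python
# Hedged sketch: per-class PCA used to rank the features that dominate a class.
import numpy as np
from sklearn.decomposition import PCA

def prime_features(X, y, cls, n_components=3, top_k=5):
    """Rank features by their loadings on the leading components of one class."""
    pca = PCA(n_components=n_components)
    pca.fit(X[y == cls])
    # Weight absolute loadings by each component's explained variance ratio.
    scores = np.abs(pca.components_).T @ pca.explained_variance_ratio_
    return np.argsort(scores)[::-1][:top_k]  # indices of the top features

for cls in np.unique(y):
    print(f"class {cls}: prime feature indices {prime_features(X, y, cls)}")
```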
For extracting these prime features, Principal Component Analysis along with feature reduction is used. These methods find the features that are the most important for correctly classifying the events into their respective classes. Thus, the number of features is reduced with a focus on the significant characteristics of a class, which further improves the simplicity of the model and allows generalization to unknown events. The importance of features in UNSW-NB15 is shown in <ref> and <ref>.
§.§ Hyper-parameter Tuning The classifier model with the greatest baseline score is taken up for optimization on the concerned dataset to allow better fitting, with both generalization and specificity to the network model. This step iterates over a pre-made grid of classifier hyper-parameters to allow for better performance on the event space.
§.§ Metrics for Model Evaluation This subsection explores various performance metrics used to evaluate the developed model.
§.§.§ Attack Prediction As the principal features are available for the respective classes, a space or domain can be created for all possible events which might happen in the network. Creating this domain, as shown in <ref>, helps in mapping the clusters in the higher-dimensional space created by the permutations and combinations of the extracted characteristics. This allows the creation of various regions of these events. Using the selected principal components, a Cartesian product is created. This creates a space for all possible events. The space can then be divided into regions of events, with a chance to predict the future event type. By finding the probability distribution of these events, we can tell which type/class of attack is the most likely to happen. This probability distribution is created by extracting prime features from the classes, thereby allowing us to predict what type of attack may happen next and to better prepare for it, or at least plan for appropriate mitigation.
§ IMPLEMENTATION RESULTS The proposed architecture can predict the type of attack that might happen on the network in the future. To test the theories, the proposed architecture is implemented on sample datasets of UNSW-NB15 <cit.> and CICIDS-17 <cit.>. As the experimentation continues on the given datasets, the scripts are implemented such that they can work independently on any created dataset. As the final classes are categorical, the features are ordinally encoded to a discrete integer mapping for each feature <cit.>. It is observed that several features of the dataset are continuous with a very large domain. This is not an issue for decision trees; however, it is not appropriate for many other algorithms which work on discrete values. The model might learn just the exact values of the features and make wrong predictions for unseen values, thereby leading to overfitting. To solve this issue, new features, which can be extracted from the number of zeros in the features or the decimal places of the given feature, are mapped onto a discrete logarithmic scale. Thus, creating new features and removing the continuous features allows the model to have a better judgment of significant and minor values of the features while keeping the mapping the same. The continuous signals are mapped into discrete logarithmic signals, and an estimate of the range for a particular entry can be deduced. The change in the number of permutations and combinations is shown in Table 5; a hedged sketch of this transformation follows.
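A minimal sketch of the discretization just described is given below, assuming pandas; the column name is illustrative rather than the dataset's actual schema.

```python
# Hedged sketch: mapping a continuous feature with a large domain onto a
# discrete logarithmic scale, so the model learns value ranges, not exact values.
import numpy as np
import pandas as pd

def to_log_scale(series: pd.Series) -> pd.Series:
    """Map a non-negative continuous feature to its order of magnitude."""
    return np.floor(np.log10(series.clip(lower=1))).astype(int)

df = pd.DataFrame({"bytes_sent": [13, 950, 12_404, 3_310_000]})
df["bytes_sent_log"] = to_log_scale(df["bytes_sent"])
print(df)  # 13 -> 1, 950 -> 2, 12404 -> 4, 3310000 -> 6 (discrete range buckets)
```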
Converting to the logarithmic scale allows us to reduce the number of permutations and combinations of the possible events. This reduces the dimensions of the Cartesian product exponentially. This is because an entry implies repetition of that value over all permutations and combinations, thereby increasing the dimension of the matrix. Converting values to the more generalized logarithmic scale instead of the continuous values not only reduces the total number of combinations, but also lets the algorithm learn about the ranges of the features. Hence, it introduces more generalization, resulting in better results. The difference can be seen in <ref>, <ref> and <ref>. To select the most suitable model for the proposed work, various models are deployed and verified against the test data and cross-validation data. The results are shown in <ref> and <ref>. The implementation results show that the XGBoost models, which are extreme gradient boosted decision trees, perform the best on the training as well as the test data. Also, the model requires a shorter training period, which is evident from <ref> and <ref>. The next step is event class extraction, as the model for the proposed architecture has been selected along with the processed data. While creating the event class extractor, it is realized that the dataset is highly imbalanced. To counter this imbalance, the data is processed again for training the classifier. Various options can be employed: under-sampling, over-sampling, or a mixture of both. Under-sampling is a technique to balance uneven datasets by keeping all the data in the minority class and decreasing the size of the majority class (which would be more generalized). Over-sampling means creating artificial entries for the class which has the minimum number of members, thus creating an even dataset. A combination of under-sampling and over-sampling can fine-tune the classifier without over-fitting it to the data. After processing the data for the final time and training the event class extractor, the final step is to perform attack prediction.
§.§ Attack Prediction The accuracy of the model is computed based on the prime/principal features responsible for representing the attack. It can be deduced that, if the model is accurate, this reduced feature set can be used for modeling the event class predictor. With all unique features and their individual scores, a Cartesian product can be created, and the model evaluated on the elements of the given Cartesian product. This results in a series of attack types. By creating a cumulative distribution of the attack predictions, we have a distribution of the possible attacks. After receiving the results from the Cartesian product, we can now multiply our accuracy with the results from the distribution, compensating for both the loss at the selected model and the loss incurred due to focusing on the prime features of the dataset. Thus, we can get a probability of the future attack class.
§.§ Observations This section explores the observations and discusses the computation of the metrics and their achieved values to justify the implementation results.
§.§.§ Evaluation metrics The multi-class confusion matrix is as shown in <ref>. This confusion matrix is used to compute the output metrics. Training and evaluating the model with reference to various metrics decides how the model learns, and thus forms a feedback loop with the model to increase its performance; a hedged sketch of the metric computation is given below, followed by the formulas.
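The sketch below reduces such a multi-class confusion matrix to the per-class and averaged metrics whose formulas are stated next; the matrix entries are invented for illustration.

```python
# Hedged sketch: per-class TP/FP/FN from a multi-class confusion matrix,
# macro-averaged into precision, recall, F1, and overall accuracy.
import numpy as np

cm = np.array([[50,  3,  2],   # rows: actual class, columns: predicted class
               [ 4, 40,  6],
               [ 1,  5, 39]])

tp = np.diag(cm).astype(float)
fp = cm.sum(axis=0) - tp  # predicted as the class, but actually another
fn = cm.sum(axis=1) - tp  # actually the class, but predicted as another

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print("accuracy:", tp.sum() / cm.sum())
print("macro precision:", precision.mean())
print("macro recall:", recall.mean())
print("macro F1:", f1.mean())
```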
A combination of accuracy and F1 score is used to check the performance of the system. These metrics are computed as:
Precision = True Positives / (True Positives + False Positives)
Recall = True Positives / (True Positives + False Negatives)
Accuracy = (True Positives + True Negatives) / All Samples
F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
Observing the probability distribution of the UNSW-NB15 event space as in <ref>, it can be seen that the classes are not uniform. But as we are using a tree-based algorithm, specifically XGBoost, the imbalance in the distribution does not affect the performance of the model. This allows for an easier understanding of the network. A detailed statistical analysis of the various classes of attacks is given in the Additional Files. <ref> and <ref> demonstrate the class distribution in the event space and the change in performance with feature selection in UNSW-NB15 and CICIDS-17.
§ CONCLUSIONS AND FUTURE WORKS In this paper, we have presented an architecture for analysing the event space of a network to assist in predicting future attack types. The model can also be used for detecting zero-day attacks and precisely identifying the fault in the network leading to an attack. While the process is quickened by the usage of the XGBoost algorithm, the statistical analysis of the generated event space is a compute-intensive task. Though the computing task is expensive, it allows us to analyse the vulnerability of the network without performing penetration testing experiments on the existing network. Real-time, circumstantial vulnerability assessment of the network can also be engaged if the suggested model is deployed in a network. Real-time network hardening can also be considered as an upcoming activity under future work.
§ TABLES
§ DECLARATIONS
* Funding - No external funding was received for the research conducted within the attached work.
* Competing interests - The authors certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.
The authors declare that they have no competing interests.
* Ethics approval - Not applicable.
* Consent to participate - The authors of the manuscript give consent to participate.
* Consent for publication - The authors of the manuscript give consent to publish the manuscript if found suitable.
* Availability of data and materials - The datasets analysed during the current study are available at their respective sources, whose links are as follows:
DARPA 98 https://www.ll.mit.edu/r-d/datasets/1998-darpa-intrusion-detection-evaluation-dataset
DARPA 99 https://www.ll.mit.edu/r-d/datasets/1999-darpa-intrusion-detection-evaluation-dataset
DARPA 2000 https://www.ll.mit.edu/r-d/datasets/2000-darpa-intrusion-detection-scenario-specific-datasets
KDD'99 http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
NSL KDD https://www.unb.ca/cic/datasets/nsl.html
UNSW-NB15 https://research.unsw.edu.au/projects/unsw-nb15-dataset
CICIDS 17 https://www.unb.ca/cic/datasets/ids-2017.html
CICIDS 18 https://www.unb.ca/cic/datasets/ids-2018.html
* Code availability - Not applicable.
* Authors' contributions - Aviral Srivastava: Methodology, Data Curation, Corresponding. Dhyan Thakkar: Software, Investigation, Writing Original Draft. Sharda Valiveti: Formal Analysis, Investigation, Visualization. Pooja Shah: Conceptualization, Writing – Reviewing and Editing, Supervision, Project Administration. Gaurang Raval: Writing – Reviewing and Editing, Validation.
*Acknowledgments Acknowledgements are due to the super-computing infrastructure of Nirma University, as we were using the Param Shavak supercomputer remotely.
*Abbreviations IDS: Intrusion Detection System; ML: Machine learning; RF: Random forest; DT: Decision tree; GBT: Gradient Boosting Tree; FPR: False positive rate; HIDS: Host-based IDS; NIDS: Network-based IDS; SIDS: Signature-based IDS; AIDS: Anomaly-based IDS; DDoS: Distributed Denial of Service; PIO: Pigeon Inspired Optimizer; XGBoost: Extreme Gradient Boosting; LSTM: Long-Short Term Memory; KNN: K-Nearest Neighbors; RBF: Radial Basis Function; SVM: Support Vector Machine; DNN: Deep Neural Network; AUC: Area under the receiver operating characteristic curve; ANN: Artificial Neural Network; PCA: Principal Component Analysis; DoS: Denial of Service; SGD: Stochastic Gradient Descent
http://arxiv.org/abs/2312.17270v1
{ "authors": [ "Aviral Srivastava", "Dhyan Thakkar", "Dr. Sharda Valiveti", "Dr. Pooja Shah", "Dr. Gaurang Raval" ], "categories": [ "cs.CR", "cs.LG" ], "primary_category": "cs.CR", "published": "20231227010911", "title": "Anticipated Network Surveillance -- An extrapolated study to predict cyber-attacks using Machine Learning and Data Analytics" }
[1]Litan Kumar Das [email protected]
[2]Kumar Sankar Ray [email protected]
[3]Prakash Chandra Mali [email protected]
*[1]Department of Mathematics, Jadavpur University, Kolkata, India
[2]ECSU, Indian Statistical Institute, Kolkata, India
[3]Department of Mathematics, Jadavpur University, Kolkata, India
Fitting's Heyting-valued logic and Heyting-valued modal logic have already been studied from an algebraic viewpoint. In addition to algebraic axiomatizations with the completeness of Fitting's logic and modal logic, topological duality theorems have also been developed. Bitopological methods have recently been employed to investigate duality for Heyting-valued logic. However, bitopological and bi-Vietoris-coalgebra techniques are conspicuously absent from the development of duality for Heyting-valued modal logic. In this paper, we attempt to close this gap. We develop a bitopological duality for algebras of Fitting's Heyting-valued modal logic. We construct a bi-Vietoris functor on the category PBS_ℒ of the Heyting-valued version of pairwise Boolean spaces. Finally, we obtain a dual equivalence between the categories of bi-Vietoris coalgebras and algebras of Fitting's Heyting-valued modal logic. As a result, we have that Fitting's many-valued modal logic is sound and complete with respect to coalgebras of a bi-Vietoris functor.
Bi-coalgebraic view of Fitting's Heyting-valued modal logic
Received 29 September 2023 / Accepted 18 December 2023
§ INTRODUCTION Algebraic axiomatization of a modified version of Fitting's Heyting-valued modal logic has already been addressed in <cit.>. Maruyama <cit.> proposed a Jónsson-Tarski topological duality (see <cit.>) for ℒ-ℳℒ-algebras (algebras of Fitting's Heyting-valued modal logic). Jónsson-Tarski duality for ℒ-ℳℒ-algebras is essentially an ℒ-valued version of Jónsson-Tarski duality for modal algebras. We aim to construct a bitopological duality for algebras of Fitting's Heyting-valued modal logic by setting up the notion of PRBS_ℒ as the category of the ℒ-valued version of pairwise Boolean spaces with a relation. As a result, natural duality theory for modal algebras is extended to the bitopological setting. The main results are bitopological and coalgebraic dualities for ℒ-ℳℒ-algebras, where ℒ is a semi-primal algebra having a bounded lattice reduct. Our general theory extends both the Jónsson-Tarski duality and the Abramsky-Kupke-Kurz-Venema coalgebraic duality <cit.> to the bitopological setting. Furthermore, it introduces a novel coalgebraic duality for ℒ-ℳℒ-algebras. An exemplary story in coalgebraic logic can be found in <cit.>. The Stone duality <cit.> between Boolean algebras and sets represents the syntax and semantics of propositional logic. The algebras and coalgebras of an endofunctor on a category define the syntax and semantics of modal propositional logic. As an illustration, the modal logic K and Kripke semantics derive from the Stone duality by taking an endofunctor on sets. So, under suitable circumstances, we can achieve a duality between the relevant algebras and coalgebras. In addition to demonstrating the fact that the widely recognized Stone duality could be articulated in coalgebraic terms, Abramsky <cit.> also showed that a coalgebraic formulation could be provided for the Jónsson-Tarski duality between descriptive general Kripke frames and modal algebras (see also <cit.> for further information).
In particular, the category of descriptive general Kripke frames is isomorphic to the category of coalgebras for the Vietoris functor on Boolean spaces, i.e., zero-dimensional, compact Hausdorff spaces. Esakia <cit.> also noticed this connection. Therefore, coalgebras for the Vietoris functor on the category of Boolean spaces can represent sound and complete semantics for modal logic. In <cit.>, the author showed that coalgebras of a Vietoris functor on the category of Priestley spaces, i.e., compact, totally order-disconnected spaces, provide sound and complete semantics for positive modal logic. The objective of this work is to develop the idea that the semantics of Fitting's many-valued modal logic can be understood as coalgebras for the bi-Vietoris functor on the category PBS_ℒ of ℒ-valued pairwise Boolean spaces and pairwise continuous maps. The structure of the paper is as follows. In Section <ref>, we review the fundamental notions of bitopological spaces and algebras of Fitting's Heyting-valued logic, denoted as ℒ-𝒱ℒ-algebras, and we also discuss the bitopological duality theorem for ℒ-𝒱ℒ-algebras. Section <ref> contains our first result, i.e., a bitopological duality for Fitting's Heyting-valued modal logic. In Section <ref>, we introduce the concept of a pairwise Vietoris space and construct an endofunctor V_ℒ^bi on the category PBS_ℒ. Finally, we show the coalgebraic duality for Fitting's Heyting-valued modal logic. In Section <ref>, we conclude the paper by outlining several potential future research directions.
§ PRELIMINARIES We assume that the readers are familiar with the basic concepts of topology and category theory. We refer the reader to <cit.> for information on universal algebra and lattice theory. For category theory, we refer to <cit.>.
§.§ Bitopological spaces A bitopological space is a triple (X,τ_1,τ_2) in which (X,τ_1) and (X,τ_2) are topological spaces. Let δ_1 and δ_2 represent, respectively, the collections of τ_1-closed sets and τ_2-closed sets. We set β_1=τ_1∩δ_2 and β_2=τ_2∩δ_1. <cit.>
* A bitopological space (X,τ_1,τ_2) is said to be a pairwise Hausdorff space if for every pair (x,y) of distinct points x,y∈ X there exist disjoint open sets U_x∈τ_1 and U_y∈τ_2 containing x and y, respectively.
* A bitopological space (X,τ_1,τ_2) is said to be pairwise zero-dimensional if β_1 is a basis for τ_1 and β_2 is a basis for τ_2.
* A bitopological space (X,τ_1,τ_2) is said to be pairwise compact if the topological space (X,τ), where τ=τ_1∨τ_2, is compact.
According to Alexander's Lemma (a classical result in general topology), the notion of pairwise compactness described in Definition <ref> is equivalent to the condition that every cover {U:U∈τ_1∪τ_2} of X has a finite subcover. A pairwise Boolean space is a bitopological space that is pairwise Hausdorff, pairwise zero-dimensional, and pairwise compact. A map f:(P,τ_1,τ_2)→ (P_1,τ_1^1,τ_2^1) is said to be pairwise continuous if the map f:(P,τ_i)→ (P_1,τ_i^1) is continuous for i∈{1,2}. Pairwise Boolean spaces and pairwise continuous maps form a category, denoted by PBS. <cit.> If T_1 and T_2 are subbases for the topologies τ_1 and τ_2, respectively, then T_1∪ T_2 is a subbasis for the topology τ_1∨τ_2. <cit.> Let (X,τ_1,τ_2) be a pairwise compact bitopological space. Consider a finite collection {C_i:C_i∈δ_1∪δ_2, i=1,2,⋯,n} of subsets of X.
Then ⋂_i=1^nC_i is pairwise compact.It is clear from the above proposition that any τ_1-closed or τ_2-closed subset of a pairwise compact space X is pairwise compact.§.§ Fitting’s Heyting-valued logicsFitting <cit.> proposed ℒ-valued logics and ℒ-valued modal logics for a finite distributive lattice ℒ(i.e., ℒ is a Heyting algebra) in 1991. Maruyama <cit.> introduced algebraic axiomatization of Fitting’s logics. In <cit.> the author studied Fitting’s Heyting-valued logic and Heyting-valued modal logic without regard for fuzzy truth constants other than 0 and 1, and added a new operation T_ℓ(-), ℓ∈ℒ. From a logical perspective, T_ℓ(p) infers that the truth value of a proposition p is ℓ. The operations of ℒ-valued logic, denoted by ℒ-𝒱ℒ, are ∨,∧,→,0,1 and T_ℓ(-), ℓ∈ℒ, where ∨,∧,→ are binary operations, 0 and 1 are nullary operations and T_ℓ is a unary operation. For ℓ_1,ℓ_2∈ℒ, ℓ_1→ℓ_2 means the pseudo-complement of ℓ_1 relative to ℓ_2.The following lemmas describe some term functions.Define a function f:ℒ^4→ℒ by f(ℓ_1,ℓ_2,ℓ_3,ℓ_4)={[ ℓ_3 (ℓ_1=ℓ_2); ℓ_4 (ℓ_1≠ℓ_2); ].Then, f is a term function of ℒ. For every ℓ∈ℒ, define T_ℓ:ℒ→ℒ by T_ℓ(ℓ')={[1 (ℓ'=ℓ);0 (ℓ'≠ℓ);].Then, T_ℓ is a term function of ℒ. Let ℓ∈ℒ. Then the function U_ℓ:ℒ→ℒ defined by U_ℓ(ℓ')={[1 (ℓ'≥ℓ);0 (ℓ'≱ℓ);]., is a term function of ℒ.We recall the idea of ℒ-𝒱ℒ-algebras, which provides sound and complete semantics of ℒ-valued logic ℒ-𝒱ℒ. <cit.>An algebraic structure (𝒜,∧,∨,→,T_ℓ(ℓ∈ℒ),0,1) is said to be a ℒ-𝒱ℒ-algebra iff the following conditions hold for anyℓ_1, ℓ_2∈ℒ, and a,b∈𝒜: * (𝒜,∧,∨,→,T_ℓ(ℓ∈ℒ),0,1) is a Heyting algebra;* T_ℓ_1(a)∧ T_ℓ_2(b)≤ T_ℓ_1→ℓ_2(a→ b)∧ T_ℓ_1∧ℓ_2(a∧ b)∧ T_ℓ_1∨ℓ_2(a∨ b);T_ℓ_2(a)≤ T_T_ℓ_1(ℓ_2)(T_ℓ_1(a));* T_0(0)=1; T_ℓ(0)=0 (ℓ≠ 0); T_1(1)=1; T_ℓ(1)=0, if ℓ≠ 1;* ⋁{T_ℓ(a): ℓ∈ℒ}=1; T_ℓ_1(a)∨ (T_ℓ_2(a)→ 0)=1;T_ℓ_1(a)∧ T_ℓ_2(a)=0, (ℓ_1≠ℓ_2);* T_1(T_ℓ(a))=T_ℓ(a), T_0(T_ℓ(a))=T_ℓ(a)→ 0, T_ℓ_2(T_ℓ_1(a))=0, (ℓ_2≠ 0,1);* T_1(a)≤ a, T_1(a∧ b)=T_1(a)∧ T_1(b);* ⋀_ℓ∈ℒ(T_ℓ(a)↔ T_ℓ(b))≤ (a↔ b).The class of all ℒ-𝒱ℒ-algebras forms a variety (in the sense of universal algebra). If ℒ = {0, 1}, then ℒ-𝒱ℒ-algebras becomes Boolean algebras. A function between ℒ-𝒱ℒ-algebras is said to be homomorphism if it preserves the operations ∨,∧,→,T_ℓ(ℓ∈ℒ),0,1.Let 𝒱𝒜_ℒ denote the category of ℒ-𝒱ℒ-algebras. ℒ-valued modal logic denoted by ℒ-ℳℒ, is defined by ℒ-valued Kripke semantics. The idea of ℒ-valued Kripke semantics can be found in <cit.>. The operations of ℒ-valued modal logic ℒ-ℳℒ are the operations of ℒ-𝒱ℒ and a unary operation , called modal operation. We now recall the concept of ℒ-ℳℒ-algebras, which defines a sound and complete semantics for ℒ-ℳℒ. <cit.>An algebraic structure (𝒜,∧,∨,→,T_ℓ(ℓ∈ℒ),,0,1) is said to be a ℒ-ℳℒ-algebra iff it satisfies the following conditions: * (𝒜,∧,∨,→,T_ℓ(ℓ∈ℒ),0,1) is a ℒ-𝒱ℒ-algebra;* (a∧ b)= a∧ b;* U_ℓ(a)=U_ℓ( a), ∀ℓ∈ℒ, where the unary operation U_ℓ(ℓ∈ℒ) is defined by U_ℓ(a)=⋁{T_ℓ'(a):ℓ≤ℓ',ℓ'∈ℒ}, a∈𝒜. Logically it means that truth value of a is greater than or equal to ℓ. A homomorphism of ℒ-ℳℒ-algebras is a function that preserves all the operations of ℒ-𝒱ℒ-algebras and the modal operation . Let ℳ𝒜_ℒ denote the category of ℒ-ℳℒ-algebras and homomorphisms of ℒ-ℳℒ-algebras.For a Kripke frame (P,ℛ), ℛ[x]={y∈ P:xℛy}, where x∈ P, and ℛ^-1[P']={y∈ P:∃ x∈ P', yℛx}, where P'⊆ P. We recall a modal operation _ℛ on ℒ-valued powerset algebra ℒ^P of P. <cit.>Let (P,ℛ) be a Kripke frame and f∈ℒ^P. Then _ℛf:P→ℒ is defined by (_ℛf)(x)=⋀{f(y):y∈ℛ[x]}.<cit.>Let 𝒜 be an object in ℳ𝒜_ℒ. 
A binary relation ℛ_ on HOM_𝒱𝒜_ℒ(𝒜,ℒ) is defined as follows:ψℛ_ϕ∀ℓ∈ℒ,∀ a∈𝒜, ψ( a)≥ℓ⇒ϕ(a)≥ℓ.A ℒ-valued map 𝒟:HOM_𝒱𝒜_ℒ(𝒜,ℒ)×𝒜→ℒ is defined by 𝒟(ψ,a)=ψ(a), ψ∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ). <cit.>The ℒ-valued canonical model (HOM_𝒱𝒜_ℒ(𝒜,ℒ),ℛ_,𝒟) of 𝒜 is a ℒ-valued Kripke model. Then, 𝒟(ψ, a)=ψ( a)=⋀{ϕ(a):ϕ∈ℛ_[ψ]}. §.§ Bitopological duality for Fitting’s Heyting-valued logicWe will introduce the key ideas and findings from the bitopological duality theory for Fitting’s Heyting-valued logic. We refer to <cit.> for a more thorough explanation of the bitopological duality for Fitting’s Heyting-valued logic. Let 𝒮_ℒ denote the collection of subalgebras of ℒ. For a pairwise Boolean space ℬ, Λ_ℬ denotes the collection of pairwise closed subspaces of ℬ. It is shown in <cit.> that a pairwise closed subset of a pairwise compact space is also pairwise compact. Hence, each member of Λ_ℬ is pairwise Boolean space. A finite distributive lattice ℒ endowed with unary operation T_ℓ(ℓ∈ℒ) forms a semi-primal algebra. We have expanded the theory of natural duality <cit.> by creating a bitopological duality for ℒ-𝒱ℒ-algebras <cit.>.We now recall the category PBS_ℒ from <cit.>. The category PBS_ℒ is defined as follows: * Objects An object in PBS_ℒ is a tuple (ℬ,α_ℬ) where ℬ is a pairwise Boolean space and a mapping α_ℬ:𝒮_ℒ→Λ_ℬ satisfies the following conditions: * α_ℬ(ℒ)=ℬ;* if ℒ_1=ℒ_2∧ℒ_3 (ℒ_1,ℒ_2,ℒ_3∈ℒ), then α_ℬ(ℒ_1)=α_ℬ(ℒ_2)∩α_ℬ(ℒ_3). * Arrows An arrow ψ:(ℬ_1,α_ℬ_1)→(ℬ_2,α_ℬ_2) in PBS_ℒ is a pairwise continuous map ψ:ℬ_1→ℬ_2 that satisfies the criterion that if x∈α_ℬ_1(ℒ_1)(ℒ_1∈𝒮_ℒ), then ψ(x)∈α_ℬ_2(ℒ_1) i.e., ψ is a subspace preserving map.* The bitopological space (ℒ,τ,τ), where τ is the discrete topology on ℒ, is a pairwise Boolean space. Hence, (ℒ,τ,τ,α_ℒ), where α_ℒ is a mapping from 𝒮_ℒ to Λ_ℒ that is defined by α_ℒ(ℒ')=ℒ', is an object in PBS_ℒ.* For an object 𝒜 in 𝒱𝒜_ℒ , consider a bitopological space (HOM_𝒱𝒜_ℒ(𝒜,ℒ),τ_1,τ_2), where the topologies τ_1 and τ_2 are generated by the bases B^τ_1={⟨ a⟩:a∈𝒜}, where ⟨ a⟩={h∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ):h(a)=1}, and B^τ_2={B^c:B∈ B^τ_1}, respectively. Here B^c denotes the complement of B. [<cit.>] The bitopological space (HOM_𝒱𝒜_ℒ(𝒜,ℒ),tau_1,τ_2) is a pairwise Boolean space.The duality between the categories 𝒱𝒜_ℒ and PBS_ℒ is obtained via the following functors. A contravariant functor 𝔉:PBS_ℒ→𝒱𝒜_ℒ is defned as follows: * For an object (ℬ,α_ℬ) in PBS_ℒ, define 𝔉(ℬ,α_ℬ)=(HOM_PBS_ℒ((ℬ,α_ℬ),(ℒ,α_ℒ)),∨,∧,→,T_ℓ(ℓ∈ℒ),0,1), where ∨,∧,→,T_ℓ(ℓ∈ℒ),0,1 are pointwise operations on the set HOM_PBS_ℒ((ℬ,α_ℬ),(ℒ,α_ℒ)). The operations 0 and 1 are regarded as constant functions, with 0 and 1 being their respective values.* For an arrow ϕ:(ℬ,α_ℬ)→ (ℬ',α_ℬ') in PBS_ℒ, define 𝔉(ϕ):𝔉((ℬ',α_ℬ'))→𝔉((ℬ,α_ℬ)) by 𝔉(ϕ)(ζ)=ζ∘ϕ, where ζ∈ HOM_PBS_ℒ((ℬ',α_ℬ'),(ℒ,α_ℒ)). A contravariant functor 𝔊:𝒱𝒜_ℒ→ PBS_ℒ is defined as follows: * 𝔊 acts on an object 𝒜 in 𝒱𝒜_ℒ as 𝔊(𝒜)=(HOM_𝒱𝒜_ℒ(𝒜,ℒ),τ_1,τ_2,α_𝒜), where α_𝒜 is a mapping from 𝒮_ℒ to Λ_HOM_𝒱𝒜_ℒ(𝒜,ℒ) which is defined by α_𝒜(ℒ^*)=HOM_𝒱𝒜_ℒ(𝒜,ℒ^*), ℒ^*∈𝒮_ℒ.* 𝔊 acts on an arrow ψ:𝒜→𝒜^* in 𝒱𝒜_ℒ as follows: 𝔊(ψ):𝔊(𝒜^*)→𝔊(𝒜) is defined by 𝔊(ψ)(ϕ)=ϕ∘ψ, ϕ∈𝔊(𝒜^*). In <cit.> the following duality result is proved for ℒ-𝒱ℒ-algebras. 
The categories 𝒱𝒜_ℒ and PBS_ℒ are dually equivalent.§ BITOPOLOGICAL DUALITY FOR FITTING’S MODAL LOGIC§.§ Category We define a category PRBS_ℒ as follows: * Objects: An object in PRBS_ℒ is a triple(P,α_P,ℛ) such that (P,α_P) is an object in PBS_ℒ and ℛ is a binary relation on P that satisfies the following conditions: * for each p in P, ℛ[p] is a pairwise compact subset of P; * ∀𝒞∈β_1, [ℛ]𝒞, ⟨ℛ⟩𝒞∈β_1; * for any ℒ'∈𝔖_ℒ, if m∈α_P(ℒ') then ℛ[m]⊆α_P(ℒ'). * Arrows: An arrow f:(P,α_P,ℛ)→ (P',α_P',ℛ') in PRBS_ℛ is an arrow in PBS_ℒ which additionally satisfies the following conditions: * if p_1ℛp_2 then f(p_1)ℛ'f(p_2); * if f(p)ℛ'p' then ∃ p^*∈ P such that pℛp^* and f(p^*)=p'. In the next sequel, we introduce functors ℱ and 𝒢 to establish the dual equivalence between the categories ℳ𝒜_ℒ and PRBS_ℒ.§.§ Functors We define a functor𝒢:ℳ𝒜_ℒ→ PRBS_ℒ. * 𝒢 acts on an object (𝒜,) in ℳ𝒜_ℒ as 𝒢(𝒜)=(HOM_𝒱𝒜_ℒ(𝒜,ℒ),tau_1,τ_2,α_𝒜,ℛ_), where α_𝒜 is a mapping from 𝔖_ℒ to Λ_HOM_𝒱𝒜_ℒ(𝒜,ℒ) defined by α_𝒜(ℒ_1)=HOM_𝒱𝒜_ℒ(𝒜,ℒ_1), and ℛ_ is a binary relation onHOM_𝒱𝒜_ℒ(𝒜,ℒ) that is described in Definition <ref>. * 𝒢 acts on an arrow ψ:𝒜_1→𝒜_2 in ℳ𝒜_ℒ as follows: Define 𝒢(ψ):𝒢(𝒜_2)→𝒢(𝒜_1) by 𝒢(ψ)(ϕ)=ϕ∘ψ, where ϕ∈ HOM_𝒱𝒜_ℒ(𝒜_2,ℒ). Lemma <ref> and Lemma <ref> demonstrate the well-definedness of 𝒢. For an object(𝒜,) in ℳ𝒜_ℒ, 𝒢(𝒜) is an object in PRBSℒ. Definition <ref> shows that (HOM_𝒱𝒜_ℒ(𝒜,ℒ),τ_1,τ_2,α_𝒜) is an object in PBS_ℒ. So it is enough to show that ℛ_ meets the conditions specified in the object part of Definition <ref>. We first show that for 𝒲∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ), ℛ_[𝒲]∈δ_1∪δ_2. Let 𝒰∉ℛ_[𝒲]. Then by Definition <ref>, there is an element a∈𝒜 such that there is L_1∈ℒ, for which 𝒲( a)≥ L_1 but 𝒰(a)≱L_1. It follows that 𝒰∈⟨ U_L_1(a)⟩∈τ_2 and ℛ_[𝒲]∩⟨ U_L_1(a)⟩=∅ i.e., ⟨ U_L_1(a)⟩⊆ (ℛ_[𝒲])^c. Hence, ℛ_[𝒲]⊄ℛ̅_̅[̅𝒲̅]̅^τ_2. Equivalently, ℛ̅_̅[̅𝒲̅]̅^τ_2⊂ℛ_[𝒲]. Therefore, ℛ_[𝒲] is τ_2-closed. Since, (HOM_𝒱𝒜_ℒ(𝒜,ℒ),τ_1,τ_2) is pairwise compact, by Proposition <ref>, we have ℛ_[𝒲] is pairwise compact. Now we verify the condition (ii) in the object part of Definition <ref>. Since {⟨ a⟩:a∈𝒜}∈β_1 and {⟨ T_1(a)→ 0⟩ :a∈𝒜}∈β_2 are the basis for the topologies τ_1 and τ_2, respectively, so we show that for each a∈𝒜, ⟨ℛ_⟩(⟨ a⟩)∈β_1 and [ℛ_]⟨ a⟩∈β_1. We see that ⟨ℛ_⟩⟨ a⟩ ={𝒲∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ):ℛ_[𝒲]∩⟨ a⟩≠∅}=([ℛ_]⟨ T_1(a)→ 0⟩)^c={𝒲∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ):ℛ_[𝒲]⊄⟨ T_1(a)→ 0⟩} We show that ([ℛ_]⟨ T_1(a)→ 0⟩)^c is τ_1-open and τ_2-closed. Let 𝒰∈ ([ℛ_]⟨ T_1(a)→ 0⟩)^c. Then ℛ_[𝒰]⊄⟨ T_1(a)→ 0⟩. It is easy to see that ∃ τ_1-open set ⟨ T_1(a)⟩ such that 𝒰∈⟨ T_1(a)⟩. Let ℰ∈⟨ T_1(a)⟩. Then ℰ( T_1(a))=1. Using Kripke condition we have 1=ℰ( T_1(a))=⋀{𝒰(T_1(a)):ℰℛ_𝒰}. According to Lemma <ref>, 𝒰(T_1(a)) is either 0 or 1. Henceforth, for all 𝒰∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ) with ℰℛ_𝒰 we have 𝒰(T_1(a))=1. As a result, ℛ_[ℰ]⊄⟨ T_1(a)→ 0⟩ i.e., ℰ∈ ([ℛ_]⟨ T_1(a)→ 0⟩)^c. Henceforth, 𝒰∈⟨ T_1(a)⟩⊂ ([ℛ_]⟨ T_1(a)→ 0⟩)^c. Therefore, [ℛ_]⟨ T_1(a)→ 0⟩)^c is τ_1-open i.e., ⟨ℛ_⟩⟨ a⟩ is τ_1-open. Let 𝒲∈ (⟨ℛ_⟩⟨ a⟩)^c. Then ℛ_[𝒲]∩⟨ a⟩=∅. It is easy to see that there is τ_1-open set ⟨(T_1(a)→0)⟩ such that 𝒲∈⟨(T_1(a)→ 0)⟩. Also, by applying Kripke condition, we have ⟨(T_1(a)→ 0)⟩⊂ (⟨ℛ_⟩⟨ a⟩)^c. Therefore, 𝒲∈⟨(T_1(a)→ 0)⟩⊂ (⟨ℛ_⟩⟨ a⟩)^c. It shows that (⟨ℛ_⟩⟨ a⟩)^c is τ_1-open i.e., ⟨ℛ_⟩⟨ a⟩ is τ_1-closed. It follows from Lemma <ref> that ⟨ℛ_⟩⟨ a ⟩ is pairwise compact. Since, the topological space (HOM_𝒱𝒜_ℒ(𝒜,ℒ),τ_2) with basis {⟨ T_1(a)→ 0⟩: a∈𝒜} is a Hausdorff space, so ⟨ℛ_⟩⟨ a⟩ is τ_2-closed. Hence, ⟨ℛ_⟩⟨ a⟩∈β_1. Next we show that [ℛ_]⟨ a⟩∈β_1. 
We see that [ℛ_]⟨ a⟩ ={𝒲∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ):ℛ_[𝒲]⊆⟨ a⟩}=(⟨ℛ_⟩⟨ T_1(a)→ 0⟩)^c We claim that ⟨ℛ_⟩⟨ T_1(a)→ 0⟩=⟨ T_1(a)→ 0⟩. Let 𝒲∈⟨ T_1(a)→ 0⟩. Then 𝒲( T_1(a)→ 0)=1. Hence, 𝒲( T_1(a))=0. Using Kripke condition we have, 0=𝒲( T_1(a))=⋀{𝒰(T_1(a)):𝒲ℛ_𝒰}. Since 𝒰(T_1(a))=0or1, hence ∃ 𝒰∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ) with 𝒲ℛ_𝒰 such that 𝒰(T_1(a))=0. Then 𝒰∈⟨ T_1(a)→ 0⟩. Therefore, ℛ_[𝒲]∩⟨ T_1(a)→ 0⟩≠∅. Thus, 𝒲∈⟨ℛ_⟩⟨ T_1(a)→ 0⟩. Similarly, by employing Kripke condition we can show that if 𝒲∈⟨ℛ_⟩⟨ T_1(a)→ 0⟩ then 𝒲∈⟨ T_1(a)→ 0⟩. since ⟨ T_1(a)→ 0⟩∈β_2, we have ⟨ℛ_⟩⟨ T_1(a)→ 0⟩∈β_2. As a result,[ℛ_]⟨ a⟩∈β_1. Finally, we demonstrate that 𝒢(𝒜) meets condition (iii) in the object part of Definition <ref>. Let u∈α_𝒜(ℒ')=HOM_𝒱𝒜_ℒ(𝒜,ℒ'). Suppose ℛ_[u]⊄α_𝒜(ℒ'. Then ∃ v∈ℛ_[u] such that v∉α_𝒜(ℒ'). Hence, ∃ a^*∈𝒜 such that v(a^*)∉ℒ'. Let v(a^*)=ℓ^*. Now for any element ψ∈α_𝒜(ℒ'),ψ(T_ℓ^*(a^*→ a^*)={[ a^* if ψ(a^*)=ℓ^*; 1ifψ(a^*)≠ℓ^*; ].Using Kripke condition, we have u((T_ℓ^*(a^*)→ a^*))=⋀{ψ(T_ℓ^*(a^*)→ a^*):ψ∈ℛ_[u]}. This shows that u((T_ℓ^*(a^*)→ a^*))=ℓ^*∉ℒ'. But this contradicts the fact that u∈α_𝒜(ℒ'). As a result, 𝒢(𝒜) satisfies condition (iii). Let (𝒜_1,_1), (𝒜_2,_2) bethe objects in ℳ𝒜_ℒ and ψ:𝒜_1→𝒜_2 be an arrow in ℳ𝒜_ℒ. Then, 𝒢(ψ) is an arrow in PRBS_ℒ.Here 𝒢(ψ):𝒢(𝒜_2)→𝒢(𝒜_1) is defined by 𝒢(ψ)(ϕ)=ϕ∘ψ, ϕ∈ HOM_𝒱𝒜_ℒ(𝒜_1,ℒ). It follows from Definition <ref> that 𝒢(ψ) is an arrow in PBS_ℒ. Therefore, it is still necessary to demonstrate that 𝒢(ψ) satisfies conditions (i) and (ii) listed in the arrow portion of Definition <ref>. We first check condition (i). Let v_1ℛ__2v_2, where v_1, v_2∈𝒢(𝒜_2). We are to show that 𝒢(ψ)(v_1)ℛ__1𝒢(ψ)(v_2). Now, if v_1∘ψ(_1 a_1)≥ℓ for a_1∈𝒜_1 and ℓ∈ℒ, then we have v_1(_2ψ(a_1))≥ℓ. As v_1ℛ__2v_2, so we get v_2(ψ(a_1))≥ℓ. Hence, 𝒢(ψ)(v_1)ℛ__1𝒢(ψ)(v_2). We then check condition (ii), which is mentioned in the arrow part of Definition <ref>. This is equivalent to verifying ℛ__1[𝒢(ψ)(v_1)]=𝒢(ψ)(ℛ__2[v_1]). Let 𝒲∈ℛ__1[v_1∘ψ], where 𝒲∈ HOM_𝒱𝒜_ℒ(𝒜_1,ℒ). Then (v_1∘ψ)ℛ__1𝒲. Suppose 𝒲∉𝒢(ψ)(ℛ__2[v_1]). Then 𝒲≠𝒢(ψ)(v^*), ∀ v^*∈ HOM_𝒱𝒜_ℒ(𝒜_2,ℒ) such that v_1ℛ__2v^*. As (HOM_𝒱𝒜_ℒ(𝒜_1,ℒ),τ_1^𝒜_1,τ_2^𝒜_1) is a pairwise Hausdorff space, so we can consider 𝒲∈⟨ a_1⟩ and 𝒢(ψ)(v^*)=v^*∘ψ∈⟨ T_1(a_1)→ 0⟩. Since 𝒲∈ℛ__1[𝒢(ψ)(v_1)] and 𝒲(a_1)=1, we have 𝒢(ψ)(v_1)(_1 a_1)=1 i.e., (v_1∘ψ)(_1 a_1)=1. Since v_1ℛ__2v^*, we have 𝒢(ψ)(v_1)ℛ__1𝒢(ψ)(v^*) using the condition (i) specified in the arrow part of Definition <ref>. As 𝒢(ψ)(v_1)(_1 a_1)=1, Lemma <ref> shows that 𝒢(ψ)(v^*)(a_1)=1, i.e., v^*∘ψ∈⟨ a_1⟩. This contradicts the fact that 𝒢(ψ)(v^*)∈⟨ T_1(a_1)→ 0⟩. Therefore, ℛ__1[𝒢(ψ)(v_1)]⊆𝒢(ψ)(ℛ__2[v_1]). Similarly, we can show the reverse direction.We define a functor ℱ:PRBS_ℒ→ℳ𝒜_ℒ. * Define ℱ(P,α_P,ℛ)=(HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)),∧,∨,→,T_ℓ(ℓ∈ℒ),0,1,_ℛ) for an object (P,α_P,ℛ) in PRBS_ℒ. Definition <ref> describes the modal operation _ℛ. Here ∧,∨,→, T_ℓ are pointwise operations defined on the set HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)). * Let ψ:(P_1,α_P_1,ℛ_1)→ (P_2,α_P_2,ℛ_2) be an arrow in PRBS_ℒ. Define ℱ(ψ):ℱ(P_2,α_P_2,ℛ_2)→ℱ(P_1,α_P_1,ℛ_1) by ℱ(ψ)(ϕ)=ϕ∘ψ for ϕ∈ℱ(P_2,α_P_2,ℛ_2). If ψ,ϕ: (P,τ_1^P,τ_2^P,α_P)→(ℒ,τ,τ,α_ℒ) are pairwise continuous maps then ψ∧ϕ,ψ∨ϕ,ψ→ϕ, T_ℓ(ψ) are pairwise continuous maps. (HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)),∧,∨,→,T_ℓ(ℓ∈ℒ),0,1) is a ℒ-𝒱ℒ-algebra.The following lemmas (Lemma <ref> and Lemma <ref>) show that the functor ℱ is well-defined.Let (P,α_P,ℛ) be an object in PRBS_ℒ. Then, ℱ(P,α_P,ℛ) is an object in ℳ𝒜_ℒ. It is clear from Definition <ref> that ℱ(P,α_P) is an object in 𝒱𝒜_ℒ. 
We need to show that _ℛ on ℱ(P,α_P,ℛ) is well-defined. Let η∈ℱ(P,α_P,ℛ).We then verify _ℛη∈ℱ(P,α_P,ℛ). For any ℓ∈ℒ, (_ℛη)^-1({ℓ}) ={p∈ P:⋀{η(p'):p'∈ℛ[p]=ℓ}=⟨ℛ⟩((T_ℓ(η))^-1({1}))∩(⟨ℛ⟩((U_ℓ(η))^-1({0})))^cAs both T_ℓ(η) and U_ℓ(η) are pairwise continuous maps, henceforth (T_ℓ(η))^-1({1})∈β_1∩β_2 and (U_ℓ(η))^-1({0})∈β_1∩β_2. (_ℛη)^-1{ℓ}∈τ_2^P. As a result, _ℛη is a pairwise continuous map from P to ℒ. Furthermore, by applying condition (iii) that is stated in the object part of Definition <ref>, we see that for any subalgebra ℳ∈𝔖_ℒ, and if m∈α_ℒ(ℳ) then (_ℛη)(m)=⋀{η(m'):m'∈ℛ[m]}∈α_ℒ(ℳ). Thus _ℛη is a subspace preserving map. Hence,_ℛη∈ℱ(P,α_P,ℛ). Let ψ:(P_1,α_P_1,ℛ_1)→ (P_2,α_P_2,ℛ_2) be an arrow in PRBS_ℒ. Then, ℱ(ψ) is an arrow in ℳ𝒜_ℒ. According to Definition <ref>, ℱ(ψ) is an arrow in 𝒱𝒜_ℒ. Therefore, it is sufficient to demonstrate that ℱ(ψ)(_ℛ_2ϕ_2)=_ℛ_1(ℱ(ψ)ϕ_2), where ϕ_2∈ HOM_PBS_ℒ((P_2,α_P_2),(ℒ,α_ℒ)). For any p_1∈ P_1, we have ℱ(ψ)(_ℛ_2ϕ_2)(p_1)=_ℛ_2ϕ_2∘ψ(p_1)=⋀{ϕ_2(p_2):p_2∈ℛ_2[ψ(p_1)]}, and _ℛ_1(ℱ(ψ)ϕ_2)(p_1)=_ℛ_1(ϕ_2∘ψ)(p_1)=⋀{ϕ_2∘ψ(p): p∈ℛ_1[p_1]}. As ψ satisfies conditions (i) and (ii) listed in item 2 of Definition <ref>, it is easy to show that (ℱ(ψ)(_ℛ_2ϕ_2))(p_1)≤_ℛ_1(ℱ(ψ)ϕ_2)(p_1) and _ℛ_1(ℱ(ψ)ϕ_2)(p_1)≤ (ℱ(ψ)(_ℛ_2ϕ_2))(p_1). As a result, ℱ(ψ)(_ℛ_2ϕ_2)=_ℛ_1(ℱ(ψ)ϕ_2).§.§ Bitopological Duality for ℒ-ℳℒ AlgebrasIn this subsection, we develop bitopological duality for algebras Fitting’s Heyting-valued modal logic. Let 𝒜 be a ℒ-ℳℒ algebra. Then 𝒜 is isomorphic to ℱ∘𝒢(𝒜) in ℳ𝒜_ℒ.We define β^𝒜:𝒜→ℱ∘𝒢(𝒜) by β^𝒜(a)(g)=g(a), where a∈𝒜 and g∈ HOM_𝒱𝒜_ℒ(𝒜,ℒ). It is known from Theorem <ref> that β^𝒜 is an isomorphism in the category 𝒱𝒜_ℒ. The only thing left to prove is that β^𝒜 preserves the modal operator, i.e., β^𝒜( a)=_ℛ_β^𝒜(a), a∈𝒜. Let g∈𝒢(𝒜). Then(_ℛ_β^𝒜(a))(g) =⋀{β^𝒜(g^*):g^*∈ℛ_[g]}=⋀{g^*(a):g^*∈ℛ_[g]}=g( a)(by Lemma <ref>)=β^𝒜( a)(g)Hence the result follows. Consider an object (P,α_P,ℛ) in PRBS_ℒ. Then, (P,α_P,ℛ) is isomorphic to 𝒢∘ℱ(P,α_P,ℛ) in the category PRBS_ℒ.Define ζ_(P,α_P,ℛ):(P,α_P,ℛ)→𝒢∘ℱ(P,α_P,ℛ) by ζ_(P,α_P,ℛ)(p)(ψ)=ψ(p), where p∈ P and ψ∈ HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)). Theorem <ref> shows that ζ_(P,α_P,ℛ) is a bi-homeomorphism in the category PBS_ℒ. We show that ζ_(P,α_P,ℛ) and ζ^-1_(P,α_P,ℛ) satisfy the conditions given in item 2 of Definition <ref>. We claim that for any p,p'∈ P, p'∈ℛ[p]ζ_(P,α_P,ℛ)(p')∈ℛ__ℛ[ζ_(P,α_P,ℛ)(p)]. Let p'∈ℛ[p]. Suppose ζ_(P,α_P,ℛ)(p)(_ℛψ)≥ℓ, where ℓ∈ℒ and ψ∈ HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)). Then ζ_(P,α_P,ℛ)(p)(_ℛψ)=(_ℛψ)(p)=⋀{ψ(p^*):p*∈ℛ[p]}. Since p'∈ℛ[p] and ζ_(P,α_P,ℛ)(p)(_ℛψ)≥ℓ, we have ζ_(P,α_P,ℛ)(p')(ψ)≥ℓ. Hence, ζ_(P,α_P,ℛ)(p)ℛ__ℛζ_(P,α_P,ℛ)(p'), i.e., ζ_(P,α_P,ℛ)(p')∈ℛ__ℛ[ζ_(P,α_P,ℛ)(p)]. Now we verify if ζ_(P,α_P,ℛ)(p')∈ℛ__ℛ[ζ_(P,α_P,ℛ)(p)] then p'∈ℛ[p]. We verify its contrapositive statement. Suppose p'∉ℛ[p]. By Definition <ref>, ℛ[p] is a pairwise compact subset of pairwise Boolean space P. Then it is easy to show that ℛ[p] is pairwise closed. Therefore we can get a τ_1^P-basis open set 𝒪∈β_1^P such that p'∈𝒪 and 𝒪⊆ P-ℛ[p], i.e., 𝒪∩ℛ[p]=∅. Define a mapping f:P→ℒ by f(p)={[0 if p∈𝒪;1 if p∈𝒪^c;].Then f is a pairwise continuous map from (P,τ_1^P,τ_2^P) to (ℒ,τ,τ). As a result, it can be shown that f∈ HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)). Now, _ℛf(p)=⋀{f(z):z∈ℛ[p]}=1 and f(p')=0. Hence, ζ_(P,α_P,ℛ)(p)(_ℛf)=1 but ζ_(P,α_P,ℛ)(p')(f)≠ 1. Therefore, ζ_(P,α_P,ℛ)(p')∉ℛ__ℛ[ζ_(P,α_P,ℛ)(p)]. Hence, we have for any p,p'∈ P, p'∈ℛ[p]ζ_(P,α_P,ℛ)(p')∈ℛ__ℛ[ζ_(P,α_P,ℛ)(p)]. 
As a result, ζ_(P,α_P,ℛ) and ζ^-1_(P,α_P,ℛ) satisfy conditions (i) and (ii) mentioned in item 2 of Definition <ref>. Finally, we obtain the bitopological duality for Fitting’s Heyting-valued modal logic. The categories ℳ𝒜_ℒ and PRBS_ℒ are dually equivalent. Let ID_1 and ID_2 be the identity functors on ℳ𝒜_ℒand PRBS_ℒ, respectively. This theorem will be proved by defining two natural isomorphisms, β:ID_1→ℱ∘𝒢 and ζ:ID_2→𝒢∘ℱ. For an object 𝒜 in ℳ𝒜_ℒ define β^𝒜:𝒜→ℱ∘𝒢(𝒜) by β^𝒜(a)(g)=g(a), where a∈𝒜 and g∈𝒢(𝒜). For an object (P,α_P,ℛ) in PRBS_ℒ define ζ_(P,α_P,ℛ):(P,α_P,ℛ)→𝒢∘ℱ(P,α_P,ℛ) by ζ_(P,α_P,ℛ)(p)(ψ)=ψ(p), where p∈ P and ψ∈ HOM_PBS_ℒ((P,α_P),(ℒ,α_ℒ)). Then it can be shown that β and ζ are natural transformations. According to Theorems <ref> and <ref>, β and ζ are natural isomorphisms. In the next section, we develop a coalgebraic duality theorem for algebras of Fitting’s Heyting valued modal logic. § COALGEBRAIC DUALITYThe primary objective of this section is to define an endofunctor V_ℒ^bi:PBS_ℒ→ PBS_ℒ, called ℒ-biVietoris functor. Then we demonstrate that the category COALG(V_ℒ^bi) of coalgebras for the endofunctor V_ℒ^bi is isomorphic to the category PRBS_ℒ.We now recall the definitions of coalgebra and coalgebra morphisms. We refer the reader to <cit.> for an overview of coalgebras.§.§ The structure of the endofunctor V_ℒ^biWe define the pairwise Vietoris space as follows: Let (S,τ_1^S,τ_2^S) be a pairwise topological space and 𝒦(S) the set of all pairwise closed subsets of S. We define U={C∈𝒦(S):C⊆ U} and U={C∈𝒦(S):C∩ U≠∅}, U ⊆ S. Let β_1^S and β_2^S be the basis for the topologies τ_1^S and τ_2^S, respectively. The pairwise Vietoris space V_P(S) of the pairwise topological space (S,τ_1^S,τ_2^S) is defined as a pairwise topological space (𝒦(S),τ_1^V,τ_2^V), where τ_1^V is the topology on 𝒦(S) generated by subbasis { U, U: U∈β_1^S} and the topology τ_2^V on 𝒦(S) is generated by subbasis { U, U: U∈β_2^S}.We then show that V_P(S) is a pairwise Boolean space whenever S is a pairwise Boolean space. If (S,τ_1^S,τ_2^S) is a pairwise Boolean space then V_P(S)=(𝒦(S),τ_1^V,τ_2^V) is pairwise zero-dimensional.We shall show that β_1^V=τ_1^V∩δ_2^V is a basis for τ_!^V, where δ_2^V is the set of τ_2^V-closed sets. Let 𝒪∈τ_1^V. Then 𝒪 can be expressed as 𝒪=⋃_λ∈Λ(⋂_j=1^n_λ U_j∩⋂_k=1^m_λ U_k), U_j, U_k∈β_1^S=τ_1^S∩δ_2^S. In order to show that β_1^V is a basis for τ_1^V, , it is necessary to show that ⋂_j=1^n_λ U_j∩⋂_k=1^m_λ U_k∈β_1^V. Because the finite intersection of the members of β_1^V is again in β_1^V, it is sufficient to establish that for U∈β_1^S, U, U∈β_1^V. As τ_1^V is the topology generated by the subbasis { U, U:U∈β_1^S}, hence U, U∈τ_1^V.Now we see that ( U)^c= U^c and ( U)^c= U^c. Since U∈β_1^S, so U^c∈β_2^S. As a result, U, U∈δ_2^V. Henceforth, U, U∈β_1^V. Similarly, it can be shown that β_2^V=τ_2^v∩δ_1^V, δ_1^V is the set of τ_1^V-closed sets, is a basis for τ_2^V.If (S,τ_1^S,τ_2^S) is a pairwise Boolean space then V_P(S)=(𝒦(S),τ_1^V,τ_2^V) is pairwise Hausdorff.Let C,C'∈𝒦(S) and C≠ C'. Let c∈ C such that c≠ c', ∀ c'∈ C'. For each point c'∈ C', we choose disjoint open sets U_c'^cβ_2^S and U_c'∈β_1^S (using the condition that (S,τ_1^S,τ_2^S) is pairwise Hausdorff space.) containing points c' and c, respectively. So the collection {U_c'^c:c'∈ C'} is τ_2^S-open covering of C'. As C' is pairwise compact, so there is a finite collection {U_c'_i^c:i=1,2,⋯,n} such that C'⊆⋃_i=1^nU_c'_i^c. V'=⋃_i=1^nU_c'_i^c and U=⋂_i=1^nU_c'_i. As c∈ C∩ U, hence C∩ U≠∅. Also, C'∩ U=∅ because C'⊆ U^c. 
It follows that C∈ U∈τ_1^V and C'∉ U i.e., C'∈ ( U)^c= U^c∈τ_2^V. So we have two disjoint open sets U∈τ_1^V and U^c∈τ_2^V containing C and C', respectively.If (S,τ_1^S,τ_2^S) is a pairwise Boolean space then V_P(S)=(𝒦(S),τ_1^V,τ_2^V) is pairwise compact.It is known from Proposition <ref> that { U, U:U∈β_1^S∪β_2^S} is a subbasis for the topology τ_1^S∨τ_2^S. We shall show that every cover of 𝒦(S) by subbasis-open sets has a finite subcover. Let 𝒦(S)=⋃_λ∈Λ U_λ∪⋃_i∈ I V_i. Consider S_1=S-⋃_i∈ IV_i. Then S_1 is a pairwise closed subset of S. Hence, S_1∈𝒦(S). Since, S_1∉ V_i for each i∈ I, so that S_1∈⋃_λ∈Λ U_λ. Then for some λ'∈Λ, S_1∈ U_λ'. As a result, S_1⊆ U_λ' and hence S-U_λ'⊆ S-S_1=⋃_i∈ IV_i. Then, S=U_λ'∪⋃_i∈ IV_i. As S is pairwise compact, we have S=U_λ'∪⋃_i=1^nV_i. Let A be an arbitrary element of 𝒦(S). If A⊆ U_λ' then A∈ U_λ' otherwise A⊆⋃_i∈ IV_i i.e., A∩ V_i≠∅ for some i∈{1,2,⋯, n}. As a result, A∈ U_λ'∪⋃_i∈ I V_i. Therefore, V_P(S)=(𝒦(S),τ_1^V,τ_2^V) is pairwise compact. Lemmas <ref>, <ref> and <ref> establish the following result: If (S,τ_1^S,τ_2^S) is a pairwise Boolean space then V_P(S)=(𝒦(S),τ_1^V,τ_2^V) is also a pairwise Boolean space. We now construct the ℒ-biVietoris functor V_ℒ^bi.We define a ℒ-biVietoris functor V_ℒ^bi:PBS_ℒ→ PBS_ℒ as follows: * For an object (S,τ_1^S,τ_2^S,α_S) in PBS_ℒ, we define V_ℒ^bi(S,τ_1^S,τ_2^S,α_S)=(V_P(S),V_P∘α_S) where α_S is a mapping from 𝔖_ℒ to Λ_S, then for any ℒ_1∈𝔖_ℒ, V_P∘α_S(ℒ_1) is the pairwise Vietoris space of a pairwise closed subspace (i.e., pairwise Boolean subspace) α_S(ℒ_1) of S; * For an arrow f:(S_1,τ_1^S_1,τ_2^S_1,α_S_1)→ (S_2,τ_1^S_2,τ_2^S_2,α_S_2) in PBS_ℒ, V_ℒ^bi(f):(V_P(S_1),V_P∘α_S_1)→ (V_P(S_2),V_P∘α_S_2) is defined by V_ℒ^bi(f)(K)=f[K], where K∈ V_P(S_1). We now show the well-definedness of the functor V_ℒ^bi. Let (S,τ_1^S,τ_2^S,α_S) be an object in PBS_ℒ. Then V_ℒ^bi(S,τ_1^S,τ_2^S,α_S) is an object in PBS_ℒ.Theorem <ref> shows that V_P(S) is a pairwise Boolean space. Now we shall show that V_P∘α_S is a pairwise closed subspace of V_P(S). For ℒ_1∈𝔖_ℒ, an element of V_P(S)∘α_S(ℒ_1) is a pairwise compact subset of α_S(ℒ_1). As α_S(ℒ_1) is also pairwise compact subspace of S, so that an element of V_P∘α_S(ℒ_1) is a pairwise compact subset of S. As a result, V_P∘α_S(ℒ_1) is a subset of V_P(S). For U∈β_1^S, we get U∩ V_P∘α_S(ℒ_1)={C∈ V_P∘α_S(ℒ_1):C⊂ U}= (U∩α_S(ℒ_1)) and U∩ V_P∘α_S(ℒ_1)={C∈ V_P∘α_S(ℒ_1):C∩ U≠∅}= (U∩α_S(ℒ_1)). Similarly for U∈β_2^S. Hence, V_P∘α_S(ℒ_1) is a pairwise subspace of V_P(S). Since α_S(ℒ_1) is a pairwise Boolean subspace of S, by Theorem <ref> we have V_P∘α_S(ℒ_1) is a pairwise Boolean space. Henceforth, V_P∘α_S(ℒ_1) is a pairwise closed subspace of V_P(S).Now we show that V_P∘α_S satisfies the conditions given in the object part of Definition <ref>. If α_S(ℒ)=S then V_P∘α_S(ℒ)=V_P(S).Let ℒ_1,ℒ_2,ℒ_3∈𝔖_ℒ. If ℒ_1=ℒ_2∩ℒ_3 then we show that V_P(α_S(ℒ_1))=V_P(α_S(ℒ_2))∩ V_P(α_S(ℒ_3)). Now V_P(α_S(ℒ_1))=V_P(α_S(ℒ_2∩ℒ_3))=V_P(α_S(ℒ_2)∩α_S(ℒ_3)). The element structure of V_P(α_S(ℒ_2)∩α_S(ℒ_3)) is of the form P∩ (α_S(ℒ_2)∩α_S(ℒ_3)) and Q∩ (α_S(ℒ_2)∩α_S(ℒ_3)), where P and Q are τ_1^S-closed set and τ_2^S-closed set, respectively. The elements of V_P(α_S(ℒ_2))∩ V_P(α_S(ℒ_3)) are of the form (P_1∩α_S(ℒ_2))∩ (P_2∩α_S(ℒ_3)) and (Q_1∩α_S(ℒ_2))∩ (Q_2∩α_S(ℒ_3)), where P_1, P_2 are τ_1^S-closed and Q_1, Q_2 are τ_2^S-closed. Then it is easy to show that V_P(α_S(ℒ_2)∩α_S(ℒ_3))⊆ V_P(α_S(ℒ_2))∩ V_P(α_S(ℒ_3)) and V_P(α_S(ℒ_2))∩ V_P(α_S(ℒ_3))⊆ V_P(α_S(ℒ_2)∩α_S(ℒ_3)). 
Let f:(S_1,τ_1^S_1,τ_2^S_1,α_S_1)→ (S_2,τ_1^S_2,τ_2^S_2,α_S_2) be an arrow in PBS_ℒ. Then V_ℒ^bi(f) is an arrow in PBS_ℒ.Given that f is a pairwise continuous map from a pairwise Boolean space S_1 to a pairwise Boolean space S_2. Let K∈ V_P(S_1). Then K is a pairwise closed subset of S_1 and hence K is pairwise compact. Now V_ℒ^bi(f)(K)=f[K] is a pairwise compact subset of S_2. Since S_2 is a pairwise Boolean space, f[K] is a pairwise closed subset of S_2. As a result, V_ℒ^bi(f)(K)∈ V_P(S_2). To show that V_ℒ^bi(f) is pairwise continuous, let U∈β_1^S_2 and V∈β_2^S_2. Then V_ℒ^bi(f)^-1( U) ={K∈ V_P(S_1):V_ℒ^bi(f)(K)∈ U}={K∈𝒦(S_1):f(K)⊆ U}={K∈𝒦(S_1):K⊆ f^-1(U)}= f^-1(U)and V_ℒ^bi(f)^-1( U) ={K∈ V_P(S_1):V_ℒ^bi(f)(K)∈ U}={K∈𝒦(S_1):f(K)∩ U≠∅}={K∈𝒦(S_1):K∩ f^-1(U)≠∅}= f^-1(U)Similarly, V_ℒ^bi(f)^-1( V)= f^-1(V) and V_ℒ^bi(f)^-1( V)= f^-1(V). Therefore, V_ℒ^bi(f) is pairwise continuous. It is still necessary to demonstrate thatV_ℒ^bi(f) is subspace preserving. Let M∈ V_P∘α_S_1(ℒ_1), ℒ_1𝔖_ℒ. Then M⊆α_S_1(ℒ_1). As f is an arrow in PBS_ℒ, hence f is a subspace preserving. Thus, f(M)⊆α_S_2(ℒ_1). It shows that V_ℒ^bi(f)(M)⊆α_S_2(ℒ_1). Thus we have V_ℒ^bi(f)(M)∈ V_P∘α_S_2(ℒ_1). We introduce two functors 𝔅 and ℭ between the categories PRBS_ℒ and COALG(V_ℒ^bi) to show that these two categories are isomorphic. We define a functor 𝔅:PRBS_ℒ→ COALG(V_ℒ^bi) as follows: * For an object (S,α_S,ℛ) in PRBS_ℒ, define 𝔅(S,α_S,ℛ)=(S,α_S,ℛ[-]), where ℛ[-]:(S,α_S)→ V_ℒ^bi(S,α_S) is an arrow in PBS_ℒ defined by ℛ[s]={p∈ S:sℛp}, s∈ S;* For an arrow f:(S_1,α_S_1,ℛ_1)→ (S_2,α_S_2,ℛ_2) in PRBS_ℒ, define 𝔅(f):(S_1,α_S_1,ℛ_1[-])→ (S_2,α_S_2,ℛ_2[-]) by 𝔅(f)=f. The well-definedness of the functor 𝔅 is shown by the following two lemmas: Let (S,α_S,ℛ) be an object in PRBS_ℒ. Then 𝔅(S,α_S,ℛ) is an object in COALG(V_ℒ^bi) .We shall show that ℛ[-]:(S,α_S)→ V_ℒ^bi(S,α_S) is an arrow in PBS_ℒ. By the conditions given in the object part of Definition <ref>, we know that for each s∈ S, ℛ[s] is pairwise compact subset of S. As S is pairwise Boolean space, hence ℛ[s] is a pairwise closed subset of S. Thus ℛ[s]∈ V_P(S). Let U∈β_1^S. Then ℛ[-]^-1( U)={s∈ S:ℛ[s]∈ U}={s∈ S:ℛ[s]⊆ U}=[ℛ]U∈β_1^S [by Definition <ref>]and ℛ[-]^-1( U)={s∈ S:ℛ[s]∈ U}={s∈ S:ℛ[s]∩ U≠∅}=⟨ℛ⟩ U∈β_1^S[by Definition <ref>]Similarly, for U∈β_2^S, ℛ[-]^-1( U)=[ℛ]U∈β_2^S and ℛ[-]^-1( U)=⟨ℛ⟩ U∈β_2^S. Henceforth, ℛ[-] is a pairwise continuous map. Now we show that ℛ[-] is subspace preserving. Let s∈α_S(ℒ'), ℒ'∈𝔖_ℒ. As we know from Definition <ref> that ℛ[s] is apairwise compact subset of α_S(ℒ'). Since α_S(ℒ') is itself a pairwise Boolean space, thus we have ℛ[s]∈ V_P∘α_S(ℒ'). Therefore, β(S,α_S,ℛ) is a V_ℒ^bi-coalgebra. Let f:(S_1,α_S_1,ℛ_1)→ (S_2,α_S_2,ℛ_2) be an arrow in PRBS_ℒ. Then 𝔅(f) is an arrow in COALG(V_ℒ^bi).As f is an arrow in PBS_ℒ, 𝔅(f)=f:(S_1,α_S_1,ℛ_1[-])→ (S_2,α_S_2,ℛ_2[-]) is apairwise continuous map. Now using the conditions mentioned in the arrow part of Definition <ref>, it is straightforward to verify that ℛ_2[-]∘ f=V_ℒ^bi∘ℛ_1[-]. Thus 𝔅(f) is an arrow in COALG(V_ℒ^bi). We define a functor ℭ:COALG(V_ℒ^bi)→ PRBS_ℒ as follows: * For an object ((C,α_C),ζ) in COALG(V_ℒ^bi), define ℭ((C,α_C),ζ)=(C,α_C,ℛ_ζ), where ℛ_ζ is a binary relation on C defined by d∈ℛ_ζ[c] d∈ζ(c), c,d∈ C;* For an arrow f:((C_1,α_C_1),ζ_1)→ ((C_2,α_C_2),ζ_2) in COALG(V_ℒ^bi), define ℭ(f):(C_1,α_C_1,ℛ_ζ_1)→ (C_2,α_C_2,ℛ_ζ_2) by ℭ(f)=f. The well-definedness of the functor ℭ is shown by Lemma <ref> and Lemma<ref>. 
For an object ((C,α_C),ζ) in COALG(V_ℒ^bi), ℭ((C,α_C),ζ)=(C,α_C,ℛ_ζ) is an object in PRBS_ℒ.In order to show that ℭ((C,α_C),ζ) is an object in PRBS_ℒ, we must verify that ℭ((C,α_C),ζ) satisfies the conditions given in the object part of Definition <ref>. For each c∈ C, ℛ_ζ[c]=ζ(c)∈ V_P(C). Hence, ℛ_ζ[c] is a pairwise closed subset of C. Thus ℛ_ζ[c] is pairwise compact. Let U∈β_1^C. Then [ℛ_ζ](U)={c∈ C:ℛ_ζ⊆ U}={c∈ C:ζ(c)⊆ U}={c∈ C:ζ(c)∈ U}=ζ^-1( U)∈β_1^Cand ⟨ℛ_ζ⟩ U={c∈ C:ℛ_ζ∩ U≠∅}={c∈ C:ζ(c)∩ U≠∅}={c∈ C:ζ(c)∈ U}=ζ^-1( U)∈β_1^CFinally, let m∈α_C(ℒ') for ℒ'∈𝔖_ℒ. As ζ is a subspace preserving map from (C,α_C) to V_ℒ^bi(C,α_C), we have ℛ_ζ[m]=ζ(m)∈ V_P∘α_C(ℒ'). Henceforth, ℛ_ζ[m]⊂α_C(ℒ'). For an arrow f:((C_1,α_C_1),ζ_1)→((C_2,α_C_2),ζ_2) in COALG(V_ℒ^bi), ℭ(f):(C_1,α_C_1,ℛ_ζ_1)→ (C_2,α_C_2,ℛ_ζ_2) is an arrow in PRBS_ℒ.It is straightforward to prove that ℭ is an arrow in PRBS_ℒ. Now we obtain the following result: The categories PRBS_ℒ and COALG(V_ℒ^bi) are isomorphic.We shall show that the categories PRBS_ℒ and COALG(V_ℒ^bi) are isomorphic via the functors 𝔅 and ℭ. Let (S,α_S,ℛ) be an object in PRBS_ℒ. Then ℭ∘𝔅(S,α_S,ℛ)=ℭ(S,α_S,ℛ[-])=(S,α_S,ℛ_ℛ[-]). Now t∈ℛ_ℛ[-](s) t∈ℛ[s]. Thus (S,α_S,ℛ)=ℭ∘𝔅(S,α_S,ℛ). Let ((C,α_C),ζ) be an object in COALG(V_ℒ^bi). 𝔅∘ℭ((C,α_C),ζ)=𝔅(C,α_C,ℛ_ζ)=((C,α_C),ℛ_ζ[-]). We have c_2∈ℛ_ζ[c_1] c_2∈ζ(c_1). As a result, ((C,α_C),ζ)=𝔅∘ℭ((C,α_C),ζ). It is clear that for an arrow f in COALG(V_ℒ^bi), 𝔅∘ℭ(f)=f and for an arrow f in PRBS_ℒ, ℭ∘𝔅(f)=f. Using Theorems <ref> and <ref>, we arrive at the following duality theorem:The categories ℳ𝒜_ℒ and COALG(V_ℒ^bi) are dually equivalent.Thus the modal semi-primal duality for algebras of Fitting’s Heyting-valuedmodal logic (for more information, see [<cit.>]) can potentially be represented in terms of the coalgebras of ℒ-biVietoris functor V_ℒ^bi.Finally, based on the preceding theorems, we can conclude:Fitting’s Heyting-valued modal logic is sound and complete with respect to coalgebras of the bi-Vietoris functor V_ℒ^bi.§ CONCLUSION We have introduced the category PRBS_ℒ and found a duality for the class of all algebras of a version of Fitting's Heyting-valued modal logic in bitopological language. This has led to an extension of the natural duality theory for modal algebras. We have demonstrated how the theory of coalgebras can be used to characterize the category PRBS_ℒ and thus obtained a coalgebraic description of the bitopological duality for Fitting's Heyting-valued modal logic. In the current work, we have explicitly constructed the Vietoris functor on the category PBS_ℒ and we have finally concluded that coalgebras for this functor provide sound and complete semantics for Fitting's Heyting-valued modal logic.In light of our present work, we can suggest some future research directions.As an application of this coalgebraic duality, we may establish the existence of a final coalgebra and cofree coalgebras in the category COALG(V_ℒ^bi), and we can also develop the coalgebraic duality theorem for many-valued modal logics in a bitopological scenario. Another interesting line of research would be to show that coalgebras of an endofunctor V on the category BES of bi-topological Esakia spaces( the idea of bitopological Esakia spaces can be found in <cit.>) can characterise lattice-valued intuitionistic modal logic. However, it is unclear to us how to characterise the relation ℛ on bitopological Esakia spaces in terms of coalgebras of the functor V, and this appears to be an open problem at the moment. 
§ DECLARATIONS 99 abramsky2011cook Abramsky, Samson. A Cook's tour of the finitary non well founded sets. arXiv preprint arXiv;1111.7148 (2011). burris1981course S.Burris, H.P. Sankappanavar. A course in universal algebra, Springer (1981). davey2002introduction Davey, Brian A., Priestley, Hilary A. Introduction to lattices and order, CUP (2002). adamek1990abstract J. Adámek, H. Herrlich, G. E. Strecker,. Abstract and concrete categories, Wiley-Interscience (1990). adamek2005introduction Adámek, J. Introduction to coalgebra. Theory and Applications of Categories, 14(8), 157-199(2005) salbany1974bitopological Salbany, Sergio. Bitopological spaces, compactifications and completions. Mathematical Monographs of the University of Cape Town No. 1, Department of Mathematics, University of Cape Town, Cape Town (1974). lauridsen2015bitopological Lauridsen, Frederik M'́ollerstr'́om. Bitopological Vietoris spaces and positive modal logic. Master’s thesis, University of Amsterdam, ILLC Master of Logic Thesis (2015). bezhanishvili2010bitopological G.Bezhanishvili, N. Bezhanishvili, D. Gabelaia, A. Kurz,. Bitopological duality for distributive lattices and Heyting algebras. Mathematical Structures in Computer Science, 20(3), 359-393, CUP (2010). fitting1991many Fitting, Melvin C. Many-valued modal logics. Fund. Inform. 15, 235-254 (1991). maruyama2009algebraic Maruyama, Yoshihiro. Algebraic study of lattice-valued logic and lattice-valued modal logic. Lecture Notes in Computer science, 5378, 172-186 (2009). maruyama2011dualities Maruyama, Yoshihiro. Dualities for algebras of Fitting's many-valued modal logics. Fundamenta Informaticae, 106(2-4), 273-294 (2011). das2021bitopological Das, Litan Kumar., Ray, Kumar Sankar. Bitopological duality for algebras of Fitting’s logic and natural duality extension. Acta Informatica, 58(5), 571-584(2021). adnadzhevich1987bicompactness Adnadzhevich, D. Bicompactness of bitopological spaces. J Math Sci 37, 1059-1063 (1987). clark1998natural Clark, David M., Davey, Brian A. Natural dualities for the working algebraist. CUP (1998). blackburn2001modal Blackburn, Patrick., De Rijke, Maarten., Venema, Yde. Modal Logic, CUP (2001).hansoul1983duality Hansoul, Georges. A duality for Boolean algebras with operators. Algebra Universalis, 17, 34-49 (1983). jonsson1951boolean Jónsson, Bjarni., Tarski, Alfred. Boolean algebras with operators. Part I, 73(4), 891-939 (1951). kupke2004stone Kupke, Clemens., Kurz, Alexander., Venema, Yde. Stone coalgebras. Theoretical computer science, 327(1-2), 109-134 (2004). kurz2006coalgebras Kurz, Alexander. Coalgebras and their logics. ACM SIGACT News, 37(2), 57-77 (2006). stone1938representation Stone, Marshall H. The representation of Boolean algebras. Bull. Amer. Math.Soc. 44, 807-816 (1938). esakia1974topological Esakia, Leo Leonidovich. Topological kripke models. Soviet Math. Dokl., 214(2), 298-301 (1974). palmigiano2004coalgebraic Palmigiano, Alessandra. A coalgebraic view on positive modal logic. Theoretical Computer Science, Elsevier, 327(1-2), 175-195 (2004).
http://arxiv.org/abs/2312.16276v1
{ "authors": [ "Litan Kumar Das", "Kumar Sankar Ray", "Prakash Chandra Mali" ], "categories": [ "cs.LO" ], "primary_category": "cs.LO", "published": "20231226171639", "title": "Bi-coalgebraic view of Fitting's Heyting-valued modal logic" }
William Q. EricksonDepartment of MathematicsBaylor UniversityOne Bear Place #97328Waco, TX [email protected] HunzikerDepartment of MathematicsBaylor UniversityOne Bear Place #97328Waco, TX [email protected] For a complex reductive group H with finite-dimensional representations W and U, the module of covariants for W of type U is the space of all H-equivariant polynomial functions W ⟶ U. In this paper, we take H to be one of the classical groups (V), (V), or Ø(V) arising inHowe's dual pair setting, where W is a direct sum of copies of V and V^*. Our main result is a uniform combinatorial model for Stanley decompositions of the modules of covariants, using visualizations we call jellyfish. Our decompositions allow us to interpret the Hilbert series as a positive combination of rational expressions which have concrete combinatorial interpretations in terms of lattice paths; significantly, this interpretation does not depend on the Cohen–Macaulay property. As a corollary, we recover a major result of Nishiyama–Ochiai–Taniguchi (2001) regarding the Bernstein degree of unitary highest weight (,K)-modules. We also extend our methods to compute the Hilbert series of the invariant rings for the groups (V) and (V), as well as the Wallach representations of type ADE. [2020]Primary 05E10; Secondary 13A50, 22E47, 17B10Stanley decompositions of modules of covariants Markus Hunziker January 14, 2024 =============================================== § INTRODUCTION§.§ Modules of covariantsLet H be a complex reductive group, and let W and U be finite-dimensional representations of H. Then ([W] ⊗ U)^H can be viewed as the space of all H-equivariant polynomial functions W ⟶ U. This space is called the module of covariants for W of type U. It is a module over the ring of invariants [W]^H via multiplication of functions.(In fact, [W]^H is itself the module of covariants of trivial type.) In this paper, for the most part, we consider the three classical groups arising in Howe's reductive dual pair setting, acting on an arbitrary number of vectors and covectors. (This is the setting for Weyl's fundamental theorems of classical invariant theory.) Thus H will be either the general linear group (V) acting on W = V^*p⊕ V^q, or the symplectic group (V) or orthogonal group Ø(V) acting on W = V^n.The main results in this paper yield an explicit combinatorial description of linear bases for the modules of covariants in this dual pair setting. For the groups (V) and (V) we can do this in full generality, where U is any rational representation; for the group Ø(V), our methods apply as long as U is an exterior power of V. The cases we study include (and widely generalize) the examples given in a recent expository article <cit.> about equivariant machine learning, which has grown rapidly as a research area during the last two years. For machine learning problems exhibiting symmetries (such as those arising naturally in physical sciences), restriction to equivariant functions leads to dramatic improvement over ordinary approaches; for a detailed summary, see <cit.> and the references therein. Therefore, a fundamental problem in equivariant machine learning is finding suitable parametrizations of equivariant functions between group representations. 
Our main result in this paper gives a very concrete parametrization of such functions, and we are hopeful that it may be of interest to experts in machine learning.Modules of covariants were studied during the peak of 19th-century classical invariant theory, and they received a flurry of renewed interest a century later in the 1990s. That relatively recent work, contained in papers by Brion <cit.>, Broer <cit.>, and Van den Bergh Vandenberg91,VandenBergh, largely focuses on the question of when modules of covariants possess the Cohen–Macaulay property. Roughly speaking, the Cohen–Macaulay property is generally tantamount to having a structure that is combinatorially “nice.” For example, suppose that M is a finitely generated graded module over a finitely generated graded -algebra R. If M is Cohen–Macaulay, then there exists a homogeneous system of parameters θ_1, …, θ_ℓ∈ R (being algebraically independent, by definition) such that M is a graded free module over [θ_1, …, θ_ℓ], and admits a Hironaka decompositionM = ⊕_i ∈ I[θ_1, …, θ_ℓ]η_i,where I is a finite set indexing certain homogeneous elements η_i ∈ M. A Hironaka decomposition (<ref>) immediately yields the Hilbert–Poincaré series in the formP(M; t) = ∑_i ∈ I t^η_i/∏_j=1^ℓ (1-t^θ_j).§.§ Stanley decompositions Our original goal in this paper was to write down Hironaka decompositions for modules of covariants which are Cohen–Macaulay. In addition to our theoretical interest in the problem, we learned that the recent success of equivariant machine learning has exploited Hironaka decompositions of certain modules of covariants; in fact, in <cit.>*p. 88 it is reported that in a Bayesian analysis of various networks,“beyond a minimum size, Hironaka decomposition networks always outperform minimal algebra generator networks and Weyl networks.”The Cohen–Macaulay property is extremely rare among modules of covariants. Gradually, we realized that we could obtain equally nice decompositions, and in complete generality, by allowing the θ's in (<ref>) to vary with each component. This led us to look instead for Stanley decompositions, which take the formM = ⊕_i ∈ I[θ_i,1, …, θ_i,ℓ_i]η_i,where, for each i ∈ I, the homogeneous elements θ_i,1, …, θ_i,ℓ_i∈ R are algebraically independent, and η_i ∈ M is homogeneous. Each component of (<ref>) is called a Stanley space. With respect to writing down and interpreting the Hilbert–Poincaré series, a Stanley decomposition is effectively just as elegant as a Hironaka decomposition, since (<ref>) yields the positive combinationP(M;t) = ∑_i ∈ It^η_i/∏_j=1^ℓ_i (1-t^θ_i,j). In the special case of the invariant rings [W]^H, Stanley decompositions are already known, and the Stanley spaces are parametrized by families of nonintersecting lattice paths. These were obtained in the 1990s by Sturmfels <cit.>, Bruns–Herzog <cit.>, and Conca <cit.>, in the context of determinantal rings (which are isomorphic to the invariant rings, by Weyl's fundamental theorems; see Section <ref>). Since modules of covariants generalize invariant rings, our task was to find a generalization of lattice path families which could parametrize Stanley decompositions for all modules of covariants in a uniform manner. §.§ Main result: a nontechnical overviewThe main result in this paper is a combinatorial description of Stanley decompositions (and Hilbert–Poincaré series) for modules of covariants in the dual pair setting, without any regard to the Cohen–Macaulay property or lack thereof. 
When the rank of H is greater than or equal to the number of copies of V and V^* in W, the modules of covariants are free, and so for the purposes of this paper these cases are trivial. (In this case, it is typically said in the literature that the parameters fall in the stable range, but see Remark <ref>.) In this paper, all of our results hold in the following range:V ≤min{p,q}, H = (V)withW = V^*p⊕ V^q,n/2, H = (V)withW = V^n,n, H = Ø(V)withW = V^n. On previewing the decompositions in our main results (Theorems <ref>, <ref>, and <ref>), the reader will see that we visualize the Stanley spaces using combinatorial objects which we call jellyfish; these are the desired generalizations of lattice path families mentioned above. As an example below, we show a typical jellyfish for H = (V), where k 1/2 V = 3, and n=8 so that W = V^8. On the left-hand side, we show the jellyfish, and on the right-hand side we give the Stanley space it represents inside the module of covariants ([V^8] ⊗ U_τ)^(V), where U_τ is the irreducible representation of (V) with highest weight τ = (5,4,2):corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 6pt, inner sep=2pt,text=white, font=] shadow=[rounded corners,line width=.5em,red!50!black,opacity=.2,cap=round] dot=[circle,fill=lightgray, minimum size = 4.5pt, inner sep=0pt] endpt=[circle,fill=black, minimum size = 5pt, inner sep=0pt][scale=.3,baseline=(current bounding box.center),every node/.style=scale=.7][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3);in 2,...,8 in ,...,8[dot] at (10-,) ;[shadow] (4,8) – ++(-1,-1); [shadow] (6,8) – ++(-2,-2); [shadow] (8,7) – ++(-1,-1);at (0,8) 1; at (0,7) 2; at (0,6) 3; at (0,5) 4; at (0,4) 5; at (0,3) 6; at (0,2) 7; at (0,1) 8; at (1,9) 1; at (2,9) 2; at (3,9) 3; at (4,9) 4; at (5,9) 5; at (6,9) 6; at (7,9) 7; at (8,9) 8;[ultra thick] (2,8) – ++(1,0) – ++(0,-1) – ++(1,0) – ++(0,-1) – ++(1,0) – ++(0,-1) node[corner] – ++(2,0) – ++(0,-1) node[corner]– ++(1,0) /in 1/0,1/-1 – ++(,) node[endpt] ;[ultra thick] (4,8) – ++(1,0) – ++(0,-1) – ++(2,0) – ++(0,-1) – ++(1,0)/in 1/0,1/-1,1/-1,1/-3 – ++(,) node[endpt] ;[ultra thick] (6,8) – ++(2,0) – ++(0,-1) /in 1/0,1/0,1/-1,1/0,1/-1 – ++(,) node[endpt] ; [ 7.5cmf_12, f_13, f_23, f_24, f_34, f_35, f_45, f_46, f_47, f_57, f_58, f_14, f_15, f_25, f_26, f_27, f_37, f_38, f_16, f_17, f_18, f_28]_points inlying on the three lattice paths f_45 f_57_cornersϕ_T Combinatorially, the main features of the jellyfish above are the following: * The gray(n-1) × (n-1) staircase pattern is the Hasse diagram of a certain poset , whose elements are ordered pairs (i,j) where 1 ≤ i < j ≤ n. For H = (V) and (V), it turns out thatis isomorphic to the poset of positive noncompact roots of the Lie algebracorresponding to H via Howe duality.* We write each jellyfish in the form ⇉ T. Thedenotes (the poset elements contained in) a family of k lattice paths in , starting in the top row ofin columns 2,4,…, 2k. (Each lattice path is a saturated chain in .) * The red squares indicate the corners of , meaning the L-shaped turns. Note, however, that each starting point and endpoint “shadows” the sequence of L-turns (if any) to its southwest; the shadowed L-turns are not included as corners.* The T denotes the tentacles attached to these lattice paths, dangling from the right edge of . We say that this particular jellyfish has shape τ, since the tentacle lengths are given by τ = (5,4,2). 
The tentacles can be viewed as the rows of a semistandard Young tableau T of shape τ, with entries from the alphabet {1, …, 8}. (See Example <ref>.)To interpret the jellyfish as a Stanley space, we give the following dictionary between these combinatorial features and their algebraic meaning: * The elements (i,j) ∈ represent the quadratic contractions f_ij which (by Weyl's first fundamental theorems; see Section <ref>) generate the invariant ring [W]^H.* The points (i,j) ∈ represent the f_ij's which play the role of the θ's in (<ref>).* The role of η_i in (<ref>) is played by the product of the “corner” f_ij's (the red squares) and a natural covariant map ϕ_T: W ⟶ U_τ, to be defined in Sections <ref>–<ref>. In Section <ref>, we use these jellyfish to describe linear bases for each Stanley space. By flattening a jellyfish into an arc diagram, we can read off the weight of each basis element (under the action of the Lie algebraassociated to H via Howe duality) from the degree sequence of the arc diagram (see Proposition <ref>, along with Examples <ref> and <ref>).The starting point for our proofs (Section <ref>) is the standard monomial theory for reductive dual pairs developed in <cit.>. Standard monomial theory was developed by Seshadri <cit.>, as a way of determining explicit linear bases for sections of line bundles over generalized flag varieties; see also LakshimaiSeshadri,LakshmibaiRaghavan. In related work, De Concini–Procesi <cit.> wrote down canonical bases of standard monomials, in terms of (bi)tableaux, for the classical rings of invariants [W]^H; we also refer the reader to the treatments in <cit.>*Ch. 13 and <cit.>*Ch. 10–12. In <cit.>, the author generalizes this standard monomial theory to the ring [W]^N where N is a maximal unipotent subgroup of H. We begin our proofs by viewing [W]^N as the multiplicity-free direct sum of the modules of covariants ([W] ⊗ U_τ)^H as τ ranges over the H-spectrum of [W]. The jellyfish we define in this paper are unrelated to the jellyfish diagrams in <cit.> or the jellyfish tableaux in <cit.>, both of which have been introduced within the last few years. These previous instances of jellyfish terminology arose in the representation theory of the symmetric and alternating groups.§.§ Applications and extensions As mentioned above, our jellyfish generalize the Stanley decompositions of the invariant rings [W]^H for the three classical groups H arising in the dual pair setting. We give the explicit specialization in Corollary <ref>. We also extend our lattice path methods to treat the rings of invariants for the special linear group (V) and special orthogonal group (V) in Corollaries <ref> and <ref>.Our Stanley decompositions not only recover, but actually refine the main result of a major paper <cit.> by Nishiyama–Ochiai–Taniguchi. Recall (or see Theorem <ref> below) that for each reductive dual pair (H,), there is an injection τ↦λ such that ([W] ⊗ U_τ)^H is isomorphic to L_, the simple (,K)-module with highest weight λ. The main result of <cit.>*Thm. B expresses the Bernstein degree of L_ as the product of U_τ and the degree of a certain determinantal variety. This fact is a direct consequence of the form (<ref>) of the Hilbert–Poincaré series given by our Stanley decompositions; see Corollary <ref>.As (,K)-modules, the rings of invariants [W]^H are instances of the Wallach representations, which are the coordinate rings of the closures of the K-orbits inside ^+ ⊂. 
These representations are defined for all Hermitian symmetric pairs (,)̨, not only those arising in the dual pair setting. In the cases whereis simply laced, we have found that Stanley decompositions can be obtained using the same lattice path method we used for the classical groups: namely, for the kth Wallach representation, we parametrize the Stanley spaces by families of k maximal nonintersecting lattice paths in the Hasse diagram of the poset of positive noncompact roots of . We carry this out in Section <ref>, where we reinterpret the Hilbert–Poincaré series of the Wallach representations as recorded in <cit.> and <cit.>, for all Hermitian symmetric pairs of type ADE. §.§ Complications with the orthogonal groupAs mentioned earlier, when H = (V) or (V), our results encompass all modules of covariants ([W] ⊗ U)^H, where U is any rational representation of H. When H = Ø(V), however, it seems that a fully general Stanley decomposition via jellyfish may be out of reach, and so we restrict our attention to modules of covariants where U is an exterior power of V.There are several complications that arise in the Ø(V) case; these difficulties can already be glimpsed from the standard monomial theory in <cit.>, discussed above. Because Ø(V) is disconnected and certain irreducible representations contain two independent highest weight vectors, it is not quite true in general that [W]^N is the multiplicity-free direct sum of modules of covariants. For this reason, the author of <cit.> imposes the condition V > 2n in the orthogonal case. (The same condition is imposed in the subsequent paper <cit.>.) This condition automatically excludes all non-free modules of covariants, which are the only ones of interest for us in the present paper. Therefore we have adapted the arguments of <cit.> (particularly the notion of “split” monomial generators) to the orthogonal group, as far as possible (see Definition <ref> below).An even more serious difficulty is the fact that the irreducible representations U_τ of Ø(V) have an especially complicated structure as quotients of (V)-representations, due to the orthogonal trace relations which arise whenever the partition τ has more than one column in its Young diagram; see <cit.>. These trace relations seem to be incompatible with our approach to defining jellyfish in terms of lattice paths. Another difference between Ø(V) and the other two groups (see Section <ref>) is that its associated Lie algebra (via Howe duality) is = _2n, which is not simply laced, and whose poset of positive noncompact roots is not isomorphic to the poset .§ PRELIMINARIES§.§ Fundamental theorems of classical invariant theoryLet H be one of the complex classical groups (V), (V), or Ø(V), to be defined below, and letWV^*p⊕ V^q, H = (V),V^n, H = (V)or Ø(V).There is a natural action by H on W, acting diagonally on all of the (co)vectors. In this setting, Hermann Weyl <cit.> gave a uniform treatment of the first and second fundamental theorems (FFT and SFT) of invariant theory, which describe generators and relations (respectively) for the ring of invariants[W]^H {f ∈[W] : f(hw) = f(w)for allh ∈ H, w ∈ W}.In each case, the generators are certain contractions f_ij of degree 2, and the relations among them are given by the vanishing of determinants or Pfaffians. We present the details below for each classical group. 
(We will use the name π^* for a certain surjective homomorphism of algebras in the following discussions; the reason for this notation will become clear in Section <ref>, where we introduce a map π : W ⟶^+ in the context of Howe duality.)Throughout the paper, we write _p,q for the space of p × q complex matrices, where _n _n,n. We write _n and _n for the spaces of symmetric and alternating matrices, respectively.§.§.§ The general linear group Let V be a complex vector space of dimension k. The general linear group (V) is the group of all invertible linear operators on V. We also write _k to emphasize the dimension of V, since (V) can be identified with the group of invertible matrices in _k once we fix a basis for V. Let (v^*_1, …, v^*_p, v_1, …, v_q) ∈ V^*p⊕ V^q. The FFT states that [V^*p⊕ V^q]^(V) is generated by the contractionsf_ij : (v^*_1, …, v^*_p, v_1, …, v_q) ⟼ v^*_i(v_j),1 ≤ i ≤ p, 1 ≤ j ≤ q.Now let [z_ij] [z_ij : 1 ≤ i ≤ p, 1 ≤ j ≤ q], and define the algebra homomorphismπ^*: [z_ij]⟶[V^*p⊕ V^q]^(V),z_ij ⟼ f_ij.The SFT states that π^* is the ideal generated by the determinants of the (k+1)-minors of the matrix of indeterminates [z_ij]. It follows that [V^*p⊕ V^q]^(V)≅[z_ij] / π^* ≅[_p,q^⩽ k], where the right-hand side denotes the coordinate ring of the determinantal variety _p,q^⩽ k{A ∈_p,q : rank A ≤ k}.Note that if k ≥min{p,q}, then ^⩽ k_p,q = _p,q.§.§.§ The symplectic group Let V be a complex vector space of dimension 2k, equipped with a nondegenerate skew-symmetric bilinear form ω( , ).The symplectic group (V) = (V,ω), also written as _2k when working with coordinates, is the group of all invertible linear operators on V that preserve ω. The FFT states that [V^n]^(V) is generated by the contractionsf_ij : (v_1, …, v_n) ⟼ω(v_i, v_j),1 ≤ i < j ≤ n.Now let [z_ij] = [z_ij : 1 ≤ i < j ≤ n], and define the algebra homomorphismπ^* : [z_ij]⟶[V^n]^(V),z_ij ⟼ f_ij.The SFT states that π^* is the ideal generated by the 2(k+1)-Pfaffians of the matrix of indeterminates [z_ij]. It follows that [V^n]^(V)≅[z_ij] / π^* ≅[_n^⩽ 2k], where the right-hand side denotes the coordinate ring of the alternating determinantal variety _n^⩽ 2k{ A ∈_n : rank A ≤ 2k}. §.§.§ The orthogonal group Let V be a complex vector space of dimension k, equipped with a nondegenerate symmetric bilinear form b(,). The orthogonal group Ø(V) = Ø(V,b), also written as Ø_k when working with coordinates, is the group of all invertible linear operators on V that preserve b. The FFT states that [V^n]^Ø(V) is generated by the contractionsf_ij : (v_1, …, v_n) ⟼ b(v_i, v_j),1 ≤ i ≤ j ≤ n.Now let [z_ij] = [z_ij : 1 ≤ i ≤ j ≤ n], and define the algebra homomorphismπ^*: [z_ij]⟶[V^n]^Ø(V),z_ij ⟼ f_ij.The SFT states that π^* is the ideal generated by the determinants of the (k+1)-minors of the matrix of indeterminates [z_ij]. It follows that [V^n]^Ø(V)≅[z_ij] / π^* ≅[_n^⩽ k],where the right-hand side denotes the coordinate ring of the symmetric determinantal variety _n^⩽ k{ A ∈_n : rank A ≤ k}. §.§ Semistandard tableaux and representations of the classical groups The highlights of this section are the maps ϕ_T, defined for each classical group in (<ref>), (<ref>), and (<ref>) below. These maps will be fundamental ingredients in our main result.A partition is a finite, weakly decreasing sequence of positive integers. Given a partition π = (π_1, …, π_m), we write |π| ∑_i π_i for its size, and we write ℓ(π)m for its length. We write (a^m)(a, …, a), and we write 0 for the empty partition. 
We also adopt the notation π (π_m, …, π_1) to reverse a partition, and sometimes for contrast we will write ππ. We use the shorthand π - k to denote the tuple obtained by subtracting k from every coordinate of π.The Young diagram associated to a partition π is a collection of |π| many boxes, arranged in left-justified rows such that the ith row from the top contains π_i boxes. A Young tableau refers to a Young diagram in which the boxes are filled with entries from a certain alphabet, most often [n] {1, …, n}. Assuming that the alphabet is totally ordered, a tableau is called semistandard if the entries are weakly increasing within each row, and strictly increasing within each column. The shape of a tableau is the partition corresponding to its underlying Young diagram. We write(π, n) {semistandard Young tableaux with shape π, in the alphabet [n]}.We show an example below of a partition π along with a semistandard tableau of shape π:smalltableaux,centertableauxπ = (4,3,3,1,1) 1225,234,446,5,6∈(π,n),where n ≥ 6.We write ∅ for the empty diagram or the empty tableau. Since π' is commonly written for the conjugate of a partition π (i.e., the partition whose Young diagram is obtained from that of π by reflecting about the main diagonal), we write π'_i to denote the length of the ith column of the Young diagram of π. (This notation appears in Table <ref> below.)§.§.§ The general linear groupIn this paper, when working in coordinates, we set V = ^k with the standard basis {e_1, …, e_k}. This induces the dual basis {e_1^*, …, e^*_k} for the contragredient representation V^*. Then (V) can be viewed as the group _k of invertible k × k complex matrices, and we let N be the maximal unipotent subgroup consisting of upper-triangular matrices with 1's along the diagonal. Then the maximal torus consists of the diagonal matricest =diag(t_1, …, t_k),t_i ∈^×.Given a representation of _k on a vector space U, we say that a vector u ∈ U is a weight vector for _k, with weight τ = (τ_1, …, τ_k), if u is a simultaneous eigenvector under the action of the maximal torus as follows:t.u = t_1^τ_1⋯ t_k^τ_k· u. The finite-dimensional irreducible polynomial representations U_π of _k are labeled by their highest weights, which range over all partitions π such that ℓ(π) ≤ k. It is well known that U_π has a natural basis parametrized by (π, k). In this paper, however, we treat the broader class of rational representations U_τ of _k, where τ = (τ_1, …, τ_k) ranges over not only partitions, but all weakly decreasing k-tuples of integers. Let τ^+ denote the partition whose parts are the positive entries of τ; likewise, let τ^- denote the partition whose parts are the negative entries of τ (which are reversed and made positive in order to obtain a partition). For example, if τ = (5,3,3,2,0,0,-1,-4,-6), then τ^+ = (5,3,3,2) and τ^- = (6,4,1).Stembridge <cit.> generalizes the bases of tableaux from polynomial representations to rational representations as follows. (See also an equivalent construction by King <cit.>.) If T is a tableau, then let T_i,j denote the entry in row i and column j. Following <cit.>*Def. 2.2, a rational tableau of shape τ is a pair (R^+, R^-) ∈(τ^+, k) ×(τ^-, k) such that#{i : R^-_i,1≤ℓ} + #{j : R^+_j,1≤ℓ}≤ℓ, for all 1 ≤ℓ≤ k.In other words, for each ℓ, the first column of R^+ and the first column of R^- must (together) contain no more than ℓ entries which are less than or equal to ℓ. 
§.§.§ The general linear group

In this paper, when working in coordinates, we set V = ℂ^k with the standard basis {e_1, …, e_k}. This induces the dual basis {e^*_1, …, e^*_k} for the contragredient representation V^*. Then GL(V) can be viewed as the group GL_k of invertible k × k complex matrices, and we let N be the maximal unipotent subgroup consisting of upper-triangular matrices with 1's along the diagonal. Then the maximal torus consists of the diagonal matrices

t = diag(t_1, …, t_k),  t_i ∈ ℂ^×.

Given a representation of GL_k on a vector space U, we say that a vector u ∈ U is a weight vector for GL_k, with weight τ = (τ_1, …, τ_k), if u is a simultaneous eigenvector under the action of the maximal torus as follows:

t.u = t_1^τ_1 ⋯ t_k^τ_k · u.

The finite-dimensional irreducible polynomial representations U_π of GL_k are labeled by their highest weights, which range over all partitions π such that ℓ(π) ≤ k. It is well known that U_π has a natural basis parametrized by SSYT(π, k). In this paper, however, we treat the broader class of rational representations U_τ of GL_k, where τ = (τ_1, …, τ_k) ranges over not only partitions, but all weakly decreasing k-tuples of integers. Let τ^+ denote the partition whose parts are the positive entries of τ; likewise, let τ^- denote the partition whose parts are the negative entries of τ (which are reversed and made positive in order to obtain a partition). For example, if τ = (5,3,3,2,0,0,-1,-4,-6), then τ^+ = (5,3,3,2) and τ^- = (6,4,1).

Stembridge <cit.> generalizes the bases of tableaux from polynomial representations to rational representations as follows. (See also an equivalent construction by King <cit.>.) If T is a tableau, then let T_i,j denote the entry in row i and column j. Following <cit.>*Def. 2.2, a rational tableau of shape τ is a pair (R^+, R^-) ∈ SSYT(τ^+, k) × SSYT(τ^-, k) such that

#{i : R^-_i,1 ≤ ℓ} + #{j : R^+_j,1 ≤ ℓ} ≤ ℓ, for all 1 ≤ ℓ ≤ k.

In other words, for each ℓ, the first column of R^+ and the first column of R^- must (together) contain no more than ℓ entries which are less than or equal to ℓ.

Note that if τ^- = ∅, then τ is a partition, and U_τ is a polynomial representation of GL_k; accordingly, the condition (<ref>) is fulfilled vacuously by every tableau in SSYT(τ, k). The same is true if τ^+ = ∅, in which case U_τ is the dual of a polynomial representation. For 1 ≤ i ≤ k, let ϵ_i denote the k-tuple with 1 as the ith coordinate and 0's elsewhere. Let ω_i ≔ ∑_j=1^i ϵ_j denote the ith fundamental weight. Let c^+_i and c^-_i be the coefficients of τ^+ and τ^- in the expansions in terms of fundamental weights:

τ^+ = ∑_i=1^ℓ(τ^+) c^+_i ω_i,  τ^- = ∑_i=1^ℓ(τ^-) c^-_i ω_i.

The rational representation U_τ is a certain subspace (to be specified below) of

⊗_i=1^ℓ(τ^+) S^c^+_i(∧^i V) ⊗ ⊗_i=1^ℓ(τ^-) S^c^-_i(∧^i V^*).

The following is a main result of <cit.>*Prop. 2.4:

dim U_τ = #{rational tableaux of shape τ with alphabet [k]}.

In fact, the rational tableaux furnish a natural basis of weight vectors for U_τ, as follows. Let (R^+, R^-) be a rational tableau of shape τ. Note that for each 1 ≤ i ≤ ℓ(τ^±), the tableau R^± has c^±_i columns of length i. Therefore, by viewing each length-i column {a_1, …, a_i} in R^+ as the vector 𝐞_a_1, …, a_i ≔ e_a_1 ∧ ⋯ ∧ e_a_i, then taking their product inside S^c^+_i(∧^i V), and then tensoring together these products for all 1 ≤ i ≤ ℓ(τ^+), we can identify R^+ with an element of the first of the two main tensor factors in (<ref>). In the same way (just replacing each e_j by e^*_j), we can identify R^- with an element in the second tensor factor in (<ref>). For example, let k = 8 and τ = (6,5,2,2,0,-1,-3,-5), and take the rational tableau (R^+, R^-) where R^+ has rows 233578, 34468, 56, 88 and R^- has rows 12367, 458, 6. Its corresponding basis element in the subspace U_τ of (<ref>) is

(𝐞_2358 𝐞_3468) ⊗ (𝐞_34 𝐞_56 𝐞_78) ⊗ 𝐞_8 ⊗ 𝐞^*_146 ⊗ (𝐞^*_25 𝐞^*_38) ⊗ 𝐞^*_6 𝐞^*_7 ∈ S^2(∧^4 V) ⊗ S^3(∧^2 V) ⊗ V ⊗ ∧^3 V^* ⊗ S^2(∧^2 V^*) ⊗ S^2(V^*).

We observe that under the action of the torus, the weight of the basis vector corresponding to (R^+, R^-) is the difference cont(R^+) - cont(R^-) between the contents, where the content cont(T) is the vector whose ith component is the number of occurrences of the entry i in a tableau T. In the example displayed above, the weight is

cont(R^+) - cont(R^-) = (0,1,3,2,2,2,1,4) - (1,1,1,1,1,2,1,1) = (-1,0,2,1,1,0,0,3).
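Stembridge's count above can be tested by brute force. In the following sketch (our own illustration), we take k = 2 and τ = (1,-1), so that R^+ and R^- are single boxes; condition (<ref>) admits exactly 3 of the 4 possible pairs, in agreement with dim U_(1,-1) = 3, the dimension of the adjoint representation 𝔰𝔩_2 sitting inside 𝔤𝔩_2.

```python
from itertools import product

def is_rational_pair(col_plus, col_minus, k):
    """Stembridge's condition: for each l, the first columns of R^+ and R^-
    together contain at most l entries that are <= l."""
    return all(sum(1 for e in col_minus if e <= l)
               + sum(1 for e in col_plus if e <= l) <= l
               for l in range(1, k + 1))

k = 2
# tau = (1, -1): R^+ and R^- are single boxes, i.e. entries in {1, ..., k}.
pairs = [(rp, rm) for rp, rm in product(range(1, k + 1), repeat=2)
         if is_rational_pair([rp], [rm], k)]
print(len(pairs), pairs)  # 3 [(1, 2), (2, 1), (2, 2)] -> dim U_(1,-1) = 3
```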
We now use the same construction to define certain GL_k-equivariant polynomial functions V^*p ⊕ V^q ⟶ U_τ, indexed by pairs of (not necessarily rational) tableaux of shape τ. Let τ be as given in the top row of Table <ref> below, and let (T^+, T^-) ∈ SSYT(τ^+, q) × SSYT(τ^-, p). Then the map

ϕ_(T^+, T^-) : V^*p ⊕ V^q ⟶ U_τ

is defined in exactly the same way as the map illustrated in (<ref>), except with 𝐞_a_1, …, a_ℓ replaced by v_a_1 ∧ ⋯ ∧ v_a_ℓ, and 𝐞^*_a_1, …, a_ℓ replaced by v^*_a_1 ∧ ⋯ ∧ v^*_a_ℓ. In particular, this gives a map from V^*p ⊕ V^q into the space (<ref>), and the map ϕ_(T^+, T^-) is obtained by then projecting onto the subspace U_τ. Note that ϕ_(T^+, T^-) is a polynomial function of degree |τ^+| + |τ^-|.

§.§.§ The symplectic group

When working in coordinates, we will view Sp(V) ≅ Sp_2k ⊂ GL_2k as the subgroup preserving the skew-symmetric bilinear form ω on V = ℂ^2k given by the block matrix [ 0 I_k; -I_k 0 ], where I_k is the k × k identity matrix. We take N to be the maximal unipotent subgroup of Sp_2k consisting of matrices of the form

[ U A; 0 L ]

for U, L, A ∈ M_k, where U (resp., L) is upper-triangular (resp., lower-triangular) with 1's on the diagonal. Then the maximal torus of Sp_2k consists of diagonal matrices of the form

diag(t_1, …, t_k, t_1^-1, …, t_k^-1).

We mention here a different coordinate convention, which will be useful in proving Corollary <ref>. Following King <cit.>*p. 496, one can instead give V the ordered basis {e_1, e_1̅, e_2, e_2̅, …, e_k, e_k̅}, and define the form ω by the matrix consisting of k diagonal blocks [ 0 1; -1 0 ], with 0's elsewhere. Then King defines a semistandard symplectic tableau to be a semistandard tableau with entries in the alphabet 1 < 1̅ < ⋯ < k < k̅, such that the entries in the ith row are greater than or equal to i, for each 1 ≤ i ≤ k. The semistandard symplectic tableaux of shape τ furnish a basis of weight vectors for U_τ, such that the weight is again given by the content of the tableau, where the content is the vector whose ith component is the difference between the number of i's and the number of i̅'s. From this we have the following analogue of (<ref>):

dim U_τ = #{semistandard symplectic tableaux of shape τ with alphabet {1, 1̅, …, k, k̅}}.
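This count, too, is easy to check by brute force in small cases. In the sketch below (our own; we encode the alphabet 1 < 1̅ < ⋯ < k < k̅ by the ranks 1, …, 2k, so that the letter i has rank 2i-1 and King's row-i condition reads "rank ≥ 2i-1"), taking k = 2 and τ = (2,1) yields 16, the dimension of the corresponding irreducible representation of Sp_4; this value reappears below as the size of the maximal bin in our running symplectic example.

```python
def king_count(shape, k):
    """Count King's semistandard symplectic tableaux of the given shape,
    with the barred alphabet encoded by ranks 1..2k (letter i has rank 2i-1)."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    count = 0

    def fill(idx, tab):
        nonlocal count
        if idx == len(cells):
            count += 1
            return
        i, j = cells[idx]
        lo = 2 * i + 1                           # row-(i+1) entries >= letter i+1
        if j > 0:
            lo = max(lo, tab[i][j - 1])          # rows weakly increase
        if i > 0:
            lo = max(lo, tab[i - 1][j] + 1)      # columns strictly increase
        for v in range(lo, 2 * k + 1):
            tab[i].append(v)
            fill(idx + 1, tab)
            tab[i].pop()

    fill(0, [[] for _ in shape])
    return count

print(king_count((2, 1), 2))  # 16 = dim of the Sp_4 representation U_(2,1)
```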
The irreducible finite-dimensional representations U_τ of Sp_2k are labeled by their highest weights, which range over all partitions τ such that ℓ(τ) ≤ k. There is a canonical way to realize the representation U_τ of Sp_2k as a quotient of the irreducible representation of GL_2k with highest weight τ, which we constructed in the previous subsection. (Note that to avoid cluttered notation in this paper, we are using the same symbol U_τ in the context of each group H. This will not cause confusion since we always treat the groups separately, but in this paragraph one could write something like "U^Sp_2k_τ is a quotient of U^GL_2k_τ.") Therefore, in the setting where H = Sp_2k, given a tableau T ∈ SSYT(τ, n), it is natural to define the H-equivariant map

ϕ_T : V^n ⟶ U_τ

by taking the map ϕ_(T, ∅) in the H = GL_2k setting (where now p = 0 and q = n), and composing with the canonical projection U^GL_2k_τ ⟶ U^Sp_2k_τ. Note that ϕ_T is a polynomial function of degree |τ|.

§.§.§ The orthogonal group

When working with coordinates, we will view O(V) ≅ O_k ⊂ GL_k as the subgroup preserving the standard dot product on V = ℂ^k. In this paper, we consider only the irreducible representations U_τ of O_k afforded by the exterior powers ∧^m V, which are labeled by the one-column shapes τ = (1^m). A tableau T ∈ SSYT((1^m), n) is just a subset of [n] of size m, say T = {a_1, …, a_m}, which defines an O_k-equivariant map in the obvious way:

ϕ_T : V^n ⟶ ∧^m V,  (v_1, …, v_n) ⟼ v_a_1 ∧ ⋯ ∧ v_a_m.

Note that ϕ_T is a polynomial function of degree m.

§.§ Hermitian symmetric pairs

Let G_ℝ be a noncompact real reductive Lie group with complexified Lie algebra 𝔤, and let K_ℝ be a maximal compact subgroup of G_ℝ with complexified Lie algebra 𝔨. Furthermore, let K denote the complexification of K_ℝ. A (𝔤, K)-module is a complex vector space carrying representations of both 𝔤 and K, such that K acts locally finitely and the actions of 𝔤 and K are compatible. Harish-Chandra showed <cit.> that G_ℝ admits irreducible (𝔤, K)-modules having a highest weight vector if and only if G_ℝ/K_ℝ is a Hermitian symmetric space. From now on, suppose G_ℝ/K_ℝ is an irreducible Hermitian symmetric space. From the general theory, there exists a distinguished element h_0 ∈ 𝔷(𝔨) such that ad h_0 acts on 𝔤 with eigenvalues 0 and ±1, which yields a triangular decomposition

𝔤 = 𝔭^- ⊕ 𝔨 ⊕ 𝔭^+, where 𝔭^± = {x ∈ 𝔤 : [h_0, x] = ±x}.

The subalgebra 𝔮 ≔ 𝔨 ⊕ 𝔭^+ is a maximal parabolic subalgebra of 𝔤, with Levi subalgebra 𝔨 and abelian nilradical 𝔭^+. Parabolic subalgebras of complex simple Lie algebras that arise in this way are called parabolic subalgebras of Hermitian type, and (𝔤, 𝔨) is called a (complexified) Hermitian symmetric pair.

The group K acts on 𝔭^+, and the closures of the K-orbits in 𝔭^+ form a chain of algebraic varieties

{0} = 𝒪̄_0 ⊂ 𝒪̄_1 ⊂ ⋯ ⊂ 𝒪̄_r = 𝔭^+.

If (𝔤, 𝔨) is one of the three types in Table <ref> below, then ℂ[𝒪̄_k] is isomorphic to the coordinate ring of the determinantal variety M^⩽ k_p,q, AM^⩽ 2k_n, or SM^⩽ k_n, respectively. In each case, it follows from (<ref>), (<ref>), or (<ref>) that ℂ[W]^H ≅ ℂ[𝒪̄_k].

Suppose (𝔤, 𝔨) is a Hermitian symmetric pair, and let 𝔥 be a Cartan subalgebra of both 𝔤 and 𝔨. Let Φ be the root system of the pair (𝔤, 𝔥), and 𝔤_α the root space corresponding to α ∈ Φ. Then we write Φ(𝔭^+) ≔ {α ∈ Φ : 𝔤_α ⊆ 𝔭^+}, which is a poset equipped with the standard partial order on roots. For the pairs (𝔤, 𝔨) in Table <ref>, we adopt the conventions (regarding the indexing of roots and weights) spelled out explicitly in <cit.>*3.3. In particular, for (𝔤, 𝔨) = (𝔰𝔬_2n, 𝔤𝔩_n) or (𝔰𝔭_2n, 𝔤𝔩_n), weights for 𝔥 are written as n-tuples, while for (𝔤, 𝔨) = (𝔤𝔩_p+q, 𝔤𝔩_p ⊕ 𝔤𝔩_q), we write a weight as a (p+q)-tuple in which the first p coordinates are separated from the last q coordinates by a vertical line. We write L_λ to denote the simple 𝔤-module with highest weight λ. We write Λ to denote the set of 𝔨-dominant integral weights.

§.§ Howe duality

For complete details behind the facts presented in this section, we refer the reader to Howe's paper <cit.>, as well as the analytic perspective taken by Kashiwara–Vergne <cit.>. In each of the three Howe duality settings we will describe, we begin with a real Lie group G_ℝ, defined below:

U(p,q) ≔ {g ∈ GL(p+q, ℂ) : g [ I_p 0; 0 -I_q ] g^* = [ I_p 0; 0 -I_q ]},
O^*(2n) ≔ {g ∈ GL(2n, ℂ) : g [ 0 I_n; I_n 0 ] g^t = [ 0 I_n; I_n 0 ]} ∩ U(n,n),
Sp(2n, ℝ) ≔ {g ∈ GL(2n, ℂ) : g [ 0 I_n; -I_n 0 ] g^t = [ 0 I_n; -I_n 0 ]} ∩ U(n,n).

Note that we have U(p,q) ∩ U(p+q) ≅ U(p) × U(q), and O^*(2n) ∩ U(2n) ≅ U(n), and Sp(2n, ℝ) ∩ U(2n) ≅ U(n), where in the last two cases U(n) is embedded block-diagonally as follows:

{[ a 0; 0 (a^-1)^t ] : a ∈ U(n)} ≅ U(n).

It follows that O^*(2n) ⊂ SO(2n, ℂ) and Sp(2n, ℝ) ⊂ Sp(2n, ℂ). For this reason, many authors write SO^*(2n) to denote O^*(2n). We also point out that our definition of Sp(2n, ℝ) above differs from the more standard definition by a Cayley transform, as follows. Let 𝐜 ∈ Sp(2n, ℂ) be the matrix given by

𝐜 ≔ (1/√2) [ I_n √(-1) I_n; √(-1) I_n I_n ].

Then conjugation by 𝐜 carries the standard real symplectic group into SU(n,n), with equality if and only if n = 1.

Let 𝔤_ℝ be the Lie algebra of one of the three Lie groups G_ℝ above, with complexification 𝔤. In this context, Howe duality refers to a phenomenon whereby 𝔤 is naturally paired with one of the classical groups H from the previous sections, in the following manner. (See Table <ref> for a summary of the data for each case of Howe duality.) Let H be one of the three classical groups in Table <ref>, acting naturally on the space W in (<ref>). Let 𝒟(W) be the Weyl algebra of polynomial-coefficient differential operators on W, and denote the subalgebra of H-invariant operators by 𝒟(W)^H. For each group H, there is a Lie algebra 𝔤_ℝ whose complexification 𝔤 is isomorphic to a generating set for the algebra 𝒟(W)^H. Let ω : 𝔤 ⟶ 𝒟(W)^H denote this Lie algebra homomorphism whose image generates 𝒟(W)^H. Adopting the language of Howe, we call (H, 𝔤) a reductive dual pair. Let r denote the rank of (𝔤, 𝔨): specifically, we have

r = min{p,q} (H = GL_k),  ⌊n/2⌋ (H = Sp_2k),  n (H = O_k).

Let 𝔤 = 𝔭^- ⊕ 𝔨 ⊕ 𝔭^+ be the triangular decomposition described in (<ref>). Define an H-invariant polynomial map π : W ⟶ 𝔭^+ as follows:

(H = GL_k)  π : M_p,k ⊕ M_k,q ⟶ M_p,q,  (Y, X) ⟼ YX,
(H = Sp_2k)  π : M_2k,n ⟶ AM_n,  X ⟼ X^t [ 0 I_k; -I_k 0 ] X,
(H = O_k)  π : M_k,n ⟶ SM_n,  X ⟼ X^t X.

Then π(W) ⊆ 𝔭^+ is either all of 𝔭^+ (if k ≥ r), or else is the determinantal variety M^⩽ k_p,q, AM^⩽ 2k_n, or SM^⩽ k_n, respectively.
This induces the comorphism π^* : ℂ[𝔭^+] ⟶ ℂ[W]^H given by z_ij ↦ f_ij, where the z_ij are the standard coordinates on 𝔭^+. As in Section <ref>, Weyl's SFT states that the kernel of π^* is generated by the determinants of sufficiently large minors or Pfaffians.

It follows from all this that 𝔤 acts on ℂ[W] via its image ω(𝔤) ⊂ 𝒟(W)^H. Similarly, if U is any representation of H, then 𝒟(W)^H, and hence 𝔤, acts on (ℂ[W] ⊗ U)^H. Explicitly, the action of an invariant differential operator D ∈ 𝒟(W)^H on (ℂ[W] ⊗ U)^H is given by D · (∑_i f_i ⊗ u_i) = ∑_i (D f_i) ⊗ u_i. Furthermore, the 𝔨-action integrates to a K-action, and hence (ℂ[W] ⊗ U)^H can be viewed as a (𝔤, K)-module.

Assume one of the three settings in Table <ref>. We have the following multiplicity-free decomposition of ℂ[W] as a (𝔤, K) × H-module:

ℂ[W] ≅ ⊕_τ ∈ Σ (ℂ[W] ⊗ U_τ)^H ⊗ U_τ^*,

where Σ ≔ {τ ∈ Ĥ : (ℂ[W] ⊗ U_τ)^H ≠ 0} and Ĥ denotes the set of irreducible rational representations of H. Furthermore, there is an injective map Σ ⟶ Λ, τ ↦ λ, such that as a (𝔤, K)-module

(ℂ[W] ⊗ U_τ)^H ≅ L_λ, the simple 𝔤-module with highest weight λ,

and L_λ is unitarizable with respect to 𝔤_ℝ.

See the appendix for further details, where we record explicitly the group actions on ℂ[W], along with the coordinate functions on W which will be convenient later in the paper. In the appendix we also write down the Lie algebra homomorphism ω : 𝔤 → 𝒟(W)^H, using a block matrix form that we find especially helpful in understanding the differential operators by which 𝔤 acts on ℂ[W].

Our main results will be valid for all k ≤ r, where r is given in (<ref>). In <cit.>, this is called the "stable range"; we should point out, however, that the term is more often used in the literature to refer to (nearly) the opposite condition, namely k ≥ p+q, k ≥ n, and k ≥ 2n for the groups H = GL_k, Sp_2k, and O_k, respectively. We used this latter sense of the term "stable range" in Section <ref>.

As described in Section <ref>, the (𝔤, K)-modules (ℂ[W] ⊗ U_τ)^H in (<ref>) relate to classical invariant theory, via the following identification:

(ℂ[W] ⊗ U_τ)^H ≅ {H-equivariant polynomial functions W ⟶ U_τ}, the module of covariants of W, of type U_τ,  f ⊗ u ↦ (w ↦ f(w)u).

Note that (ℂ[W] ⊗ U_τ)^H is a module over the invariant ring ℂ[W]^H via multiplication of functions.
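As a toy illustration of this identification (our own sketch, not part of the development): for H = GL_1 acting on W = V^* ⊕ V, so p = q = k = 1, the covariants of type U_τ = V are spanned over the invariant ring ℂ[f_11] by the functions ψ_m : (v^*, v) ↦ (v^* v)^m v, and the equivariance ψ_m(h·w) = h·ψ_m(w) is a one-line symbolic check:

```python
import sympy as sp

a, b, t = sp.symbols('a b t', positive=True)  # coordinates on V^* + V; t in GL_1
m = sp.symbols('m', positive=True)

# GL_1 acts by a -> a/t and b -> t*b on W, and by v -> t*v on U = V.
psi = (a * b) ** m * b                                   # a covariant of type V
lhs = psi.subs({a: a / t, b: t * b}, simultaneous=True)  # psi(h.w)
rhs = t * psi                                            # h.psi(w)
print(sp.simplify(lhs / rhs) == 1)  # True: psi is GL_1-equivariant
```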
§ STANLEY DECOMPOSITIONS OF MODULES OF COVARIANTS

§.§ Preview

We start with a more general preview of our main results than the example given in the introduction. Let M be a finitely generated graded S-module, where S is a polynomial ring over ℂ. A Stanley decomposition of M is a finite family (S_i, η_i)_i ∈ I, where η_i ∈ M is homogeneous, and S_i is a graded algebra retract of S such that S_i ∩ Ann η_i = 0, and

M = ⊕_i ∈ I S_i η_i

as a graded vector space. (This type of decomposition was originally introduced by Stanley <cit.>*Thm. 5.2 for commutative monoids.) Each component in (<ref>) is called a Stanley space. Now assume one of the three reductive dual pair settings in Table <ref>, where k ≤ r as defined in (<ref>). For each group H, and for every τ ∈ Σ, our main results give Stanley decompositions for the modules of covariants (ℂ[W] ⊗ U_τ)^H, over the ring S = S(𝔭^-). These decompositions lead to a combinatorial interpretation of the Hilbert–Poincaré series, as outlined below.

The natural habitat for our method is a certain two-dimensional lattice 𝒫 ≅ Φ(𝔭^+), whose elements are ordered pairs (i,j). (See Remark <ref> and Section <ref>, where we view this lattice as the Hasse diagram of a certain poset, thus motivating the "𝒫" notation. In fact, for H = GL_k and Sp_2k, we have 𝒫 ≅ Φ(𝔭^+) as posets.) By a lattice path in 𝒫 we mean a monotonic path consisting of horizontal and vertical steps. We will always write ℱ ⊆ 𝒫 to denote a family of k nonintersecting lattice paths in 𝒫. We designate certain L-shaped turns in a path as its corners, and we write 𝒞(ℱ) for the union of the corners of all the lattice paths in ℱ. We let ℰ denote the collection of all possible sets of endpoints E for the various families ℱ. We write ℱ ⇉ E if the family ℱ has endpoints given by E. Letting d_E denote the number of points in any family ℱ ⇉ E, we define the rational expression

P_E(t) ≔ ∑_ℱ ⇉ E (t^2)^#𝒞(ℱ) / (1-t^2)^d_E.

The reason for the t^2 is the fact that the generators f_ij have degree 2.

We will allow a lattice path to extend beyond its endpoint in 𝒫, where it becomes a (non-rectilinear) path lying outside 𝒫. Our primary objects — those which will index our Stanley spaces — are families ℱ ⇉ T of k nonintersecting paths, where ℱ is the intersection with 𝒫 as described above, and T is the complement; since the paths in T resemble tentacles dangling from the edge of 𝒫, we give the name jellyfish to these families ℱ ⇉ T. Moreover, the system of tentacles in T can be naturally viewed as a semistandard Young tableau (or a pair of tableaux, for GL_k). If the tableau T has shape τ, we say that the jellyfish ℱ ⇉ T has shape τ as well. We partition the set of tableaux into bins τ_E, such that T ∈ τ_E if and only if the tentacles in T align with the endpoints E. Our main result is a Stanley decomposition of the form

(ℂ[W] ⊗ U_τ)^H ≅ ⊕_ℱ ⇉ T ℂ[f_ij : (i,j) ∈ ℱ] f_𝒞(ℱ) · ϕ_T, where f_𝒞(ℱ) ≔ ∏_(i,j) ∈ 𝒞(ℱ) f_ij.

This decomposition then yields the following Hilbert–Poincaré series:

P((ℂ[W] ⊗ U_τ)^H; t) = t^|τ| ∑_E ∈ ℰ #τ_E P_E(t).

In the next three subsections, we fill in this preview by providing complete details for all three groups H.

§.§ The general linear group

Let V be a complex k-dimensional vector space, and let H = GL(V). Let W = V^*p ⊕ V^q, so that (𝔤, 𝔨) = (𝔤𝔩_p+q, 𝔤𝔩_p ⊕ 𝔤𝔩_q), and assume that k ≤ min{p,q}. Then 𝒫 ≅ Φ(𝔭^+) is depicted as a p × q grid of elements (i^*, j), where the rows are indexed from top to bottom with the starred numbers 1^*, …, p^*, and the columns are indexed from left to right with the numbers 1, …, q. We will write ℱ to denote a family of k nonintersecting lattice paths in 𝒫, beginning at the points (1^*, 1), …, (k^*, 1) and ending anywhere along the bottom or right edges. More precisely, ℱ is the set of all points in 𝒫 which lie in the union of the paths. The possible endpoints along the right edge of 𝒫 are given by their row indices in [p]^* ≔ {1^*, …, p^*}; likewise, the possible endpoints along the bottom edge of 𝒫 are given by their column indices in [q] ≔ {1, …, q}. Hence the endpoints of each family ℱ are encoded by a subset E ⊂ [p]^* ∪ [q] such that #E = k. We write the endpoint relationship as ℱ ⇉ E. (See the diagram in Example <ref> below.) Let ℰ ≔ {E : ℱ ⇉ E for some family ℱ}.

A subset E ⊂ [p]^* ∪ [q], with #E = k, belongs to ℰ if and only if #E_ℓ ≤ ℓ for all 1 ≤ ℓ ≤ k, where

E_ℓ ≔ {i^* ∈ E : i > p - ℓ} ∪ {j ∈ E : j > q - ℓ}.

Visually, the lemma says that a set of k points along the bottom and right edges of 𝒫 is actually the set of endpoints for some family ℱ, if and only if at most ℓ of those points lie less than ℓ units away from the lower-right corner of 𝒫, for all 1 ≤ ℓ ≤ k.

Suppose that #E_ℓ ≤ ℓ for all 1 ≤ ℓ ≤ k. We will construct a family ℱ such that ℱ ⇉ E. Let A_ℓ ⊂ 𝒫 denote the ℓth antidiagonal, counting from the lower-right corner of 𝒫. We will construct a chain of subfamilies ℱ_1 ⊂ ℱ_2 ⊂ ⋯ ⊂ ℱ_k ⊂ ℱ, such that each ℱ_ℓ consists of #E_ℓ lattice paths, each starting on A_ℓ, and ℱ_ℓ ⇉ E_ℓ. By definition (where ℓ = 1), E can contain at most one of the elements p^* and q; if E contains one of them, then define ℱ_1 to be the corresponding singleton, and set ℱ_1 = ∅ otherwise. To construct ℱ_ℓ from ℱ_ℓ-1, add the (0, 1, or 2) new endpoints in E_ℓ ∖ E_ℓ-1, and also extend each path in ℱ_ℓ-1 by one element from A_ℓ-1 to A_ℓ.
Finally, extend ℱ_k to a full family ℱ by adding any new endpoints in E ∖ E_k, and extending the resulting k lattice paths (from northeast to southwest) to the starting points (1^*, 1), …, (k^*, 1). Note that the construction of each ℱ_ℓ is possible without intersecting the paths, if and only if #E_ℓ ≤ ℓ: this is because #E_ℓ is the number of paths in ℱ_ℓ, and #A_ℓ = ℓ is the number of available starting points for ℱ_ℓ. Therefore the conditions #E_ℓ ≤ ℓ are equivalent to E being the set of endpoints for some ℱ.

Next we define the corners of a family ℱ. We say that a lattice path in 𝒫 has an L-turn at (i^*, j) if the path contains both ((i-1)^*, j) and (i^*, j+1). Not all of these L-turns will count as corners, however; we introduce the notion of a "shadow" in order to visualize which L-turns are excluded. Given ℱ ⇉ E, we define the shadow of an endpoint in E to be the set of L-turns which lie in an unbroken line directly southwest of that endpoint. That is, if the endpoint has coordinates (i^*, j) ∈ 𝒫, then its shadow is the largest collection of L-turns which has the form

{((i+1)^*, j-1), ((i+2)^*, j-2), …, ((i+ℓ)^*, j-ℓ)}

for some positive integer ℓ. We also say that the L-turns in this collection are shadowed by the endpoint. Note that any endpoint on the bottom edge of 𝒫 automatically has an empty shadow. For the sake of clarity in our diagrams, we will depict (nonempty) shadows emanating toward the southwest from their endpoint, even though the shadow technically does not include the endpoint itself. We then define the corners of ℱ to be the elements of the set

𝒞(ℱ) ≔ {L-turns in ℱ which are not shadowed}.

Let k = 5, p = 8, and q = 10. [Figure: a family ℱ of 5 nonintersecting lattice paths in the 8 × 10 grid 𝒫, with the nine corners of ℱ marked by red squares and the shadow cast by the endpoint 5^* shaded.] Here we have ℱ ⇉ E, where the endpoint set is E = {2^*, 5^*, 7^*, 3, 6}. Note that 5^* is the only endpoint with a nonempty shadow, since there is an L-turn immediately southwest of it. The elements of 𝒞(ℱ), as defined in (<ref>), are marked with red squares. By (<ref>), this family ℱ contributes (t^2)^9 = t^18 to the numerator of P_E(t), since there are 9 corners; as for the denominator, we have d_E = 54 since the paths in ℱ contain 54 points in 𝒫.
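The criterion in the lemma is immediate to implement. The sketch below (our own helper; the name is_endpoint_set is hypothetical) confirms the endpoint set of the example above, and rejects a set that crowds the lower-right corner:

```python
def is_endpoint_set(starred, bottom, p, q, k):
    """Lemma's criterion: E, given as starred row indices (plain ints i for i^*)
    together with bottom column indices, is the endpoint set of some family of
    k nonintersecting lattice paths iff #E_l <= l for l = 1, ..., k."""
    assert len(starred) + len(bottom) == k
    return all(sum(1 for i in starred if i > p - l)
               + sum(1 for j in bottom if j > q - l) <= l
               for l in range(1, k + 1))

# The example above: k = 5, p = 8, q = 10, E = {2*, 5*, 7*, 3, 6}.
print(is_endpoint_set({2, 5, 7}, {3, 6}, p=8, q=10, k=5))    # True
# Two endpoints on the innermost antidiagonal is impossible:
print(is_endpoint_set({8}, {10, 1, 2, 3}, p=8, q=10, k=5))   # False
```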
En route to defining jellyfish as families which extend as tentacles outside of 𝒫, we next show how to depict a pair of semistandard Young tableaux as a family of these tentacles. Recall that in the Howe duality setting for H = GL_k, the set Σ consists of the pairs of partitions τ = (τ^+, τ^-) such that ℓ(τ^+) + ℓ(τ^-) ≤ k, with ℓ(τ^+) ≤ q and ℓ(τ^-) ≤ p. Given a weakly decreasing k-tuple τ, consider all pairs of tableaux

(T^+, T^-) ∈ SSYT(τ^+, q) × SSYT(τ^-, p).

Each row of T^+ is depicted as a tentacle beneath 𝒫, by viewing the tableau entries as column indices in 𝒫, then drawing dots in these columns in a downward sequence, and then connecting the dots. Likewise, each row of T^- is depicted as a tentacle to the right of 𝒫, by viewing the tableau entries as row indices in 𝒫, then drawing dots in these rows in a rightward sequence, and then connecting the dots. The result is ℓ(τ^+) many tentacles dangling beneath 𝒫, and ℓ(τ^-) many tentacles dangling to the right of 𝒫. The following definition will allow us to associate each tableau pair (i.e., each possible configuration of tentacles) to a unique set of endpoints E on the border of 𝒫; then a "jellyfish" will be obtained by connecting the tentacles to the ends of the paths in any family ℱ such that ℱ ⇉ E.

[Jellyfish for H = GL(V)] Let (T^+, T^-) ∈ SSYT(τ^+, q) × SSYT(τ^-, p). We write E ⇉ (T^+, T^-) to express that E is the element of ℰ constructed as follows:

*Set E ∩ [p]^* equal to the first column of T^-.
*Let E' ∈ ℰ be such that E' ∩ [p]^* = E ∩ [p]^*, and E' ∩ [q] contains the largest possible elements such that each #E'_ℓ ≤ ℓ, as defined in Lemma <ref>. (Visually, one pushes the bottom endpoints in E' as far right as possible, such that no antidiagonal contains more than one endpoint.)
*As an increasing sequence, set E ∩ [q] equal to the pointwise minimum of E' ∩ [q] and the first column of T^+.

If ℱ ⇉ E ⇉ (T^+, T^-), then we use the shorthand ℱ ⇉ (T^+, T^-), and we say that the pair ℱ ⇉ (T^+, T^-) is a jellyfish of shape τ.

Roughly speaking, E ⇉ (T^+, T^-) means that E is the union of the first column of T^- (with starred entries) and the maximal column of length k - ℓ(τ^-) which forms a semistandard tableau when joined to the left of T^+. Following this tableau perspective, the visual depiction of the jellyfish is formed by connecting each tentacle to one of the paths in ℱ: each tentacle to the right of 𝒫 is connected to the endpoint lying immediately to its left, and the tentacles beneath 𝒫 are connected (one by one, from left to right) to the leftmost endpoints on the bottom edge of 𝒫. As a result, a jellyfish can be viewed as a family of k nonintersecting lattice paths inside 𝒫, some of which continue outside 𝒫 as tentacles.

Let k = 6, p = q = 8, and τ = (3,3,0,0,-3,-4), and take the pair (T^+, T^-) where T^+ has rows 335 and 788, and T^- has rows 2367 and 778. [Figure: on the left, the pair (T^+, T^-); in the middle, its depiction as tentacles, with the corresponding endpoints E = {2^*, 7^*, 3, 5, 6, 8} circled, where E ⇉ (T^+, T^-); on the right, an example of a jellyfish ℱ ⇉ (T^+, T^-).] Following Definition <ref>, we obtained E by first taking E' ∩ [q] = {4,5,6,8}, then inspecting the first column of T^+ to obtain E ∩ [q] = {min{4,3}, min{5,7}, 6, 8}.

[Bins for H = GL(V)] Let τ be a weakly decreasing k-tuple. For each E ∈ ℰ we define the bin

τ_E ≔ {(T^+, T^-) ∈ SSYT(τ^+, q) × SSYT(τ^-, p) : E ⇉ (T^+, T^-)}.

It follows from Definition <ref> that bin membership depends only on the first column of T^+ and of T^-. Note that the bins τ_E partition the set SSYT(τ^+, q) × SSYT(τ^-, p), although some of the bins are empty if ℓ(τ^+) + ℓ(τ^-) < k. For instance, taking τ as in Example <ref>, if E = {2^*, 7^*, 1, 2, 3, 4}, then we have τ_E = ∅. This is because E' ∩ [q] = {4,5,6,8}, so for any T^+ with first column {a, b}, we will have E ∩ [q] = {min{4,a}, min{5,b}, 6, 8}, which can never be the set {1,2,3,4}.

Let #ℱ denote the size of a family ℱ, meaning the total number of points in 𝒫 lying on the paths in ℱ. Then #ℱ depends only on the endpoints of ℱ; hence, as in (<ref>), we write d_E ≔ #ℱ for any ℱ such that ℱ ⇉ E. The theorem below states that the jellyfish of shape τ model the Stanley spaces in the decomposition of the module of covariants of type U_τ. In the theorem, we write f_𝒞(ℱ) ≔ ∏_(i^*, j) ∈ 𝒞(ℱ) f_ij.
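The three steps of Definition <ref> can be carried out greedily. The sketch below (our own illustration; the helper name and the greedy construction of E' are ours) recovers the endpoint set E = {2^*, 7^*, 3, 5, 6, 8} from the tableau pair in the example above:

```python
def endpoints_from_tableaux(Tplus, Tminus, p, q, k):
    """Compute E with E => (T^+, T^-): starred part = first column of T^-;
    bottom part = pointwise min of the first column of T^+ with the largest
    admissible bottom endpoints E' (built greedily from column q down)."""
    starred = [row[0] for row in Tminus]          # first column of T^-
    n_bottom = k - len(starred)

    def ok(bottom):                               # the #E_l <= l criterion
        return all(sum(1 for i in starred if i > p - l)
                   + sum(1 for j in bottom if j > q - l) <= l
                   for l in range(1, k + 1))

    # Step 2: push the bottom endpoints of E' as far right as possible.
    Eprime = []
    for j in range(q, 0, -1):
        if len(Eprime) == n_bottom:
            break
        if ok(Eprime + [j]):
            Eprime.append(j)
    Eprime.sort()

    # Step 3: pointwise minimum with the first column of T^+.
    col_plus = [row[0] for row in Tplus]
    bottom = ([min(e, c) for e, c in zip(Eprime, col_plus)]
              + Eprime[len(col_plus):])
    return sorted(starred), sorted(bottom)

# The example above: k = 6, p = q = 8, tau = (3,3,0,0,-3,-4).
Tplus = [[3, 3, 5], [7, 8, 8]]
Tminus = [[2, 3, 6, 7], [7, 7, 8]]
print(endpoints_from_tableaux(Tplus, Tminus, 8, 8, 6))
# -> ([2, 7], [3, 5, 6, 8]), i.e. E = {2*, 7*, 3, 5, 6, 8}
```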
Let k = dim V ≤ min{p,q}, and let τ be a weakly decreasing k-tuple. Then we have a Stanley decomposition

(ℂ[V^*p ⊕ V^q] ⊗ U_τ)^GL(V) = ⊕_ℱ ⇉ (T^+, T^-) ℂ[f_ij : (i^*, j) ∈ ℱ] f_𝒞(ℱ) ϕ_(T^+, T^-),

where

*the direct sum ranges over all jellyfish of shape τ, as in Definition <ref>;
*the contractions f_ij are defined in (<ref>);
*the set 𝒞(ℱ) is defined in (<ref>);
*the map ϕ_(T^+, T^-) is defined in (<ref>).

Furthermore, we have the Hilbert–Poincaré series

P((ℂ[V^*p ⊕ V^q] ⊗ U_τ)^GL(V); t) = t^|τ^+| t^|τ^-| ∑_E ∈ ℰ #τ_E P_E(t),

with τ_E and P_E(t) as in Definition <ref> and (<ref>), respectively.

Consider the adjoint representation 𝔤𝔩_k of GL_k. Then we have 𝔤𝔩_k ≅ ℂ ⊕ 𝔰𝔩_k ≅ U_0 ⊕ U_τ, where U_0 is the trivial representation of GL_k, and τ = (1, 0, …, 0, -1). It follows that

(ℂ[V^*p ⊕ V^q] ⊗ 𝔤𝔩_k)^GL_k = (ℂ[V^*p ⊕ V^q] ⊗ U_0)^GL_k ⊕ (ℂ[V^*p ⊕ V^q] ⊗ U_τ)^GL_k.

The first summand on the right-hand side is equivalent to the ring of invariants ℂ[V^*p ⊕ V^q]^GL_k. To obtain its Stanley decomposition (anticipating Corollary <ref>), we observe that any jellyfish of shape 0 is just a family ℱ whose endpoints are the k easternmost points along the southern edge of 𝒫. For the second summand on the right-hand side above, since τ^+ = τ^- = (1), any jellyfish of shape τ has exactly two tentacles, both of length 1, where one is attached to the topmost path in ℱ (somewhere along the eastern edge of 𝒫), and the other is attached to the bottommost path in ℱ (somewhere along the southern edge of 𝒫). As always, the remaining k - 2 endpoints lie as far to the east as possible along the southern edge of 𝒫, following Definition <ref>.

As a small concrete example, we take k = 3, p = 3, and q = 4. For U_0, we have only one jellyfish (a consequence of the fact that k = min{p,q} in this example, so that ℱ = 𝒫), which means that there is only one Stanley space in the decomposition (<ref>) of the invariant ring. [Figure: this unique jellyfish, namely the family ℱ filling the entire 3 × 4 grid 𝒫.] The corresponding Stanley space, and the resulting Hilbert–Poincaré series (<ref>) of the invariant ring, are

ℂ[V^*3 ⊕ V^4]^GL_3 = ℂ[f_11, …, f_34],  P(ℂ[V^*3 ⊕ V^4]^GL_3; t) = 1/(1-t^2)^12.

For U_τ, we have 19 jellyfish, which fall into three groups according to the size of the family ℱ: eight of size 12, eight of size 11, and three of size 10. [Figure: the 19 jellyfish, displayed in three rows by family size, each drawn with its corners and shadows.] Their respective contributions to the Hilbert–Poincaré series (<ref>) of (ℂ[V^*3 ⊕ V^4] ⊗ U_τ)^GL_3 are

8/(1-t^2)^12,  (3 + 3t^2 + 2t^4)/(1-t^2)^11,  (1 + 2t^2)/(1-t^2)^10.

To illustrate Theorem <ref>, we choose one of these 19 jellyfish and write down its corresponding Stanley space. [Figure: the chosen jellyfish ℱ ⇉ (T^+, T^-), where T^+ and T^- are the one-box tableaux 1 and 2, with corners at (2^*, 3) and (3^*, 2).] Its Stanley space is

ℂ[f_11, f_12, f_13, f_21, f_22, f_23, f_24, f_31, f_32, f_33, f_34] f_23 f_32 ϕ_(1, 2),

where the polynomial ring is ℂ[f_ij : (i^*, j) ∈ ℱ], and f_𝒞(ℱ) = f_23 f_32.

Combining the results above for U_0 and U_τ, the Stanley decomposition of (ℂ[V^*3 ⊕ V^4] ⊗ 𝔤𝔩_3)^GL_3 is given by (<ref>), where the direct sum ranges over the 20 jellyfish described above. The Hilbert–Poincaré series is the sum of the four rational expressions displayed above:

P((ℂ[V^*3 ⊕ V^4] ⊗ 𝔤𝔩_3)^GL_3; t) = 1/(1-t^2)^12 + 8/(1-t^2)^12 + (3 + 3t^2 + 2t^4)/(1-t^2)^11 + (1 + 2t^2)/(1-t^2)^10 = (13 - 4t^4)/(1-t^2)^12.

We conclude the example by observing (from the negative sign in the numerator of the reduced form of the Hilbert–Poincaré series above) that this module of covariants is not Cohen–Macaulay.
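The reduction of the four contributions to a single rational expression can be confirmed symbolically (our own sanity check, using sympy):

```python
import sympy as sp

t = sp.symbols('t')
total = (1 / (1 - t**2)**12 + 8 / (1 - t**2)**12
         + (3 + 3*t**2 + 2*t**4) / (1 - t**2)**11
         + (1 + 2*t**2) / (1 - t**2)**10)
# The four contributions combine to (13 - 4t^4)/(1 - t^2)^12:
print(sp.cancel(total - (13 - 4*t**4) / (1 - t**2)**12))  # 0
```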
The maximum size of a family ℱ is

d_max = pq - (p-k)(q-k),

which is the size of any family with endpoints given by E = {q-k+1, q-k+2, …, q}, for example. It is possible, however, that two families ℱ and ℱ', composed of different lattice paths, may actually be the same subset of 𝒫; therefore there are several distinct endpoint sets E which give families of size d_max.

Let τ be a weakly decreasing k-tuple, and let τ_max denote the union of all bins τ_E such that d_E = d_max. Then we have #τ_max = dim U_τ.

We claim that subtracting p-k from every entry of T^-, and subtracting q-k from every entry of T^+, yields a bijection between τ_max and the set of all rational tableaux of shape τ in the alphabet [k], as defined in (<ref>). Upon proving this claim, the result will follow from (<ref>). Let (T^+, T^-) ∈ τ_max, with E such that E ⇉ (T^+, T^-). As in the proof of Lemma <ref>, let A_ℓ denote the ℓth antidiagonal in 𝒫, counting from the lower-right corner. If there is some ℱ such that ℱ ⇉ E, then the maximum size of ℱ implies that ℱ contains every point in the union of antidiagonals A_1 ⊔ ⋯ ⊔ A_k. Hence for each 1 ≤ ℓ ≤ k, we must have #E_ℓ = ℓ (with notation as in Lemma <ref>). For each 1 ≤ ℓ ≤ k, then, E contains exactly one of the two elements (p-ℓ+1)^* and q-ℓ+1. Equivalently, upon reversing the index via ℓ ↦ k - ℓ + 1, the set E contains exactly one of the two elements (p-k+ℓ)^* and q-k+ℓ. Since the first column of T^- equals E ∩ [p]^*, and since the first column of T^+ is (pointwise) greater than or equal to E ∩ [q], it follows that

#{i : T^-_i,1 ≤ p-k+ℓ} + #{j : T^+_j,1 ≤ q-k+ℓ} ≤ ℓ for all 1 ≤ ℓ ≤ k.

But (<ref>) is just the rational tableau condition (<ref>) shifted by p-k and by q-k. Therefore, upon subtracting p-k (resp., q-k) from every entry of T^- (resp., T^+), we obtain a pair (R^+, R^-) ∈ SSYT(τ^+, k) × SSYT(τ^-, k) satisfying (<ref>), so this pair is a rational tableau of shape τ. This process is clearly invertible, so that every rational tableau (R^+, R^-) can be shifted (by adding p-k and q-k to R^- and R^+, respectively) to obtain a pair (T^+, T^-) ∈ τ_max. This proves the bijection which we initially claimed, and the result follows.

If U_τ is a polynomial representation of GL_k (i.e., if τ is a true partition, so that τ^- = 0), then all jellyfish take the form ℱ ⇉ E ⇉ (T^+, ∅), where E ∩ [p]^* = ∅. As a result, all endpoints of ℱ lie along the bottom edge of 𝒫, so there are no nontrivial shadows cast by E, and every L-turn belongs to 𝒞(ℱ). In this case, we observe a connection to work by Bruns, Herzog, and Trung <cit.>: when U_τ is polynomial, our rational expressions P_E(t) can be viewed as the Hilbert series of certain determinantal rings, where the defining ideal is generated by the minors of an order ideal of the set of bitableaux. In particular, in the language of the papers cited above, our P_E(t) is the Hilbert series of the order ideal generated by the bitableau (E, {1, …, k}).

§.§ The symplectic group

Let dim V = 2k, and let H = Sp(V). Let W = V^n, so that 𝔤 = 𝔰𝔬_2n and 𝔨 = 𝔤𝔩_n. Assume that k ≤ n/2. Then 𝒫 is depicted as an (n-1) × (n-1) staircase consisting of elements (i,j) for 1 ≤ i < j ≤ n. We place (1,2) in the upper-left, with (n-1,n) in the lower-right; see the diagrams in Example <ref>. We write ℱ to denote a family of k nonintersecting lattice paths in 𝒫, beginning at the points (1,2), (1,4), …, (1,2k) and ending anywhere along the right edge of 𝒫. The possible endpoints are given by their row indices in 𝒫. Let ℰ denote the set of all subsets E ⊂ [n-1] such that #E = k and there exists some family ℱ with ℱ ⇉ E.
A much simpler version of the argument in Lemma <ref> shows that E ∈ ℰ if and only if E precedes {n-2k+1, n-2k+3, n-2k+5, …, n-1} when viewed as columns of a semistandard tableau (which we call the tableau order). The corners of a family ℱ are defined just as for GL_k in (<ref>), except that now the shadows are cast by the starting points as well. Given a partition τ with ℓ(τ) ≤ k, each tableau T ∈ SSYT(τ, n) can be depicted as a family of ℓ(τ) many tentacles attached to the right edge of 𝒫, by viewing the entries in each row of T as the sequence of row indices (in 𝒫) of the corresponding tentacle.

[Jellyfish and bins for H = Sp(V)] Let T ∈ SSYT(τ, n). We write E ⇉ T if E is the maximal element in ℰ such that E forms a semistandard tableau when adjoined to the left side of T. For each E ∈ ℰ we define the bin

τ_E ≔ {T ∈ SSYT(τ, n) : E ⇉ T}.

If ℱ ⇉ E ⇉ T, then we say that ℱ ⇉ T is a jellyfish of shape τ.

To complete the depiction of a jellyfish ℱ ⇉ E ⇉ T, one connects the tentacles to the topmost ℓ(τ) many endpoints in E.

Let k = 3 and n = 8. [Figure: two jellyfish in the 7 × 7 staircase 𝒫. Left: shape τ = (5,4,2), with tableau T given by the rows 22334, 3458, 56. Right: shape τ = (4,2), with T given by the rows 2334, 78.] By (<ref>), the left-hand family ℱ contributes (t^2)^3 to the numerator of P_E(t), where E = {2,3,5}. The right-hand family ℱ contributes (t^2)^2 to the numerator of P_E(t), where E = {2,5,7}.
Let dim V = 2k ≤ n. Then we have a Stanley decomposition

(ℂ[V^n] ⊗ U_τ)^Sp(V) = ⊕_ℱ ⇉ T ℂ[f_ij : (i,j) ∈ ℱ] f_𝒞(ℱ) ϕ_T,

where

*the direct sum ranges over all jellyfish of shape τ, as in Definition <ref>;
*the contractions f_ij are defined in (<ref>);
*the set 𝒞(ℱ) is defined in (<ref>), where shadows are cast by both the starting points and the endpoints of ℱ;
*the map ϕ_T is defined in (<ref>).

Furthermore, we have the Hilbert–Poincaré series

P((ℂ[V^n] ⊗ U_τ)^Sp(V); t) = t^|τ| ∑_E ∈ ℰ #τ_E P_E(t),

with τ_E and P_E(t) as in Definition <ref> and (<ref>), respectively.

Let k = 2 and n = 6, with τ = (2,1). We will compute the Hilbert–Poincaré series of (ℂ[V^6] ⊗ U_τ)^Sp_4 using the interpretation (<ref>) in Theorem <ref>. First we need the bin sizes #τ_E for each E ∈ ℰ. Writing each tableau by listing its rows, separated by a slash, the bins are

τ_{1,2} = {11/2, 12/2, 13/2, 14/2, 15/2, 16/2},
τ_{1,3} = {11/3, 12/3, 13/3, 14/3, 15/3, 16/3},
τ_{1,4} = {11/4, 12/4, 13/4, 14/4, 15/4, 16/4},
τ_{2,3} = {22/3, 23/3, 24/3, 25/3, 26/3},
τ_{2,4} = {22/4, 23/4, 24/4, 25/4, 26/4},
τ_{3,4} = {33/4, 34/4, 35/4, 36/4},
τ_{1,5} = {11/5, 12/5, 13/5, 14/5, 15/5, 16/5, 11/6, 12/6, 13/6, 14/6, 15/6, 16/6},
τ_{2,5} = {22/5, 23/5, 24/5, 25/5, 26/5, 22/6, 23/6, 24/6, 25/6, 26/6},
τ_{3,5} = {33/5, 34/5, 35/5, 36/5, 44/5, 45/5, 46/5, 33/6, 34/6, 35/6, 36/6, 44/6, 45/6, 46/6, 55/6, 56/6}.

Next we compute P_E(t) for each E ∈ ℰ, by counting corners in families ℱ ⇉ E. We show this for three of the E's, which suffices to demonstrate the general process. [Figure: the families ℱ ⇉ E for E = {1,5}, E = {2,5}, and E = {3,5}, with their corners and shadows marked; there are 5, 6, and 3 such families, respectively.] This yields

P_{1,5}(t) = (1 + 3t^2 + t^4)/(1-t^2)^12,
P_{2,5}(t) = (1 + 2t^2 + 2t^4 + t^6)/(1-t^2)^13,
P_{3,5}(t) = (1 + t^2 + t^4)/(1-t^2)^14.

Continuing in this way for all E ∈ ℰ, and then computing t^|τ| · ∑_E #τ_E P_E(t) as in (<ref>), one obtains the Hilbert–Poincaré series

P((ℂ[V^6] ⊗ U_(2,1))^Sp_4; t) = (70t^3 - 14t^5 - 14t^7 + 6t^9)/(1-t^2)^14.

To conclude the example, as a preview of Corollary <ref>, we point out a general phenomenon resulting from our construction above. Let τ ↦ λ be the injective map described in Theorem <ref>; from Table <ref> we see that in this example λ = (-2,-2,-2,-2,-3,-4). The Bernstein degree of the 𝔰𝔬_12-module L_λ is given by evaluating the numerator of its Hilbert–Poincaré series at t = 1, which yields 48. But this can be seen just from the endpoint set E = {3,5}, which is maximal in the sense that the families ℱ ⇉ E have strictly larger size (namely, 14) than the families with any other set of endpoints. In particular, for this maximal E, the Bernstein degree is simply the product of the bin size #τ_E and the numerator (evaluated at t = 1) of P_E(t), which gives 16 · 3 = 48. (See also Lemma <ref> below.)

The maximum size of a family ℱ is

d_max = k(2n - 2k - 1),

and the families ℱ with this size are precisely those such that ℱ ⇉ {n-2k+1, n-2k+3, …, n-1}.

Let τ_max ≔ τ_{n-2k+1, n-2k+3, …, n-1}. We have #τ_max = dim U_τ.

By (<ref>), it suffices to exhibit a bijection from τ_max to the set of semistandard symplectic tableaux with entries in {1, 1̅, …, k, k̅}. We claim that this bijection is given by subtracting n-2k from each entry in T ∈ τ_max. To see this, let T ∈ τ_max, so that {n-2k+1, n-2k+3, …, n-1} ⇉ T. Then by Definition <ref>, we must have T_i,1 ≥ n-2k+(2i-1) for all 1 ≤ i ≤ ℓ(τ). In particular, we must have T_1,1 ≥ n-2k+1, which means that every entry in T is at least n-2k+1. Hence, since T ∈ SSYT(τ, n), subtracting n-2k from each entry in T produces a tableau T̃ ∈ SSYT(τ, 2k), such that T̃_i,1 ≥ 2i-1 for all 1 ≤ i ≤ ℓ(τ). Finally, upon substituting the (ordered) alphabet {1, 1̅, …, k, k̅} for {1, 2, …, 2k}, we obtain a tableau which satisfies the condition for a semistandard symplectic tableau, namely that the entries in the ith row are greater than or equal to i.
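The bins in the example above, and hence the bin sizes feeding into (<ref>), can be recomputed mechanically. The following sketch is our own, and it assumes a greedy reading of the word "maximal" in Definition <ref>: working from the bottom row up, each entry of E is taken as large as the tableau order, the first column of T, and strict increase down the column allow. It reproduces all nine bin sizes listed above:

```python
from itertools import combinations

def ssyt(shape, n):
    """Brute-force list of semistandard tableaux of the given shape on [n]."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    out = []
    def fill(idx, tab):
        if idx == len(cells):
            out.append(tuple(tuple(r) for r in tab)); return
        i, j = cells[idx]
        lo = 1
        if j > 0: lo = max(lo, tab[i][j - 1])        # rows weakly increase
        if i > 0: lo = max(lo, tab[i - 1][j] + 1)    # columns strictly increase
        for v in range(lo, n + 1):
            tab[i].append(v); fill(idx + 1, tab); tab[i].pop()
    fill(0, [[] for _ in shape])
    return out

k, n, shape = 2, 6, (2, 1)
anchor = [n - 2*k + 2*i - 1 for i in range(1, k + 1)]   # [3, 5]: the maximal E
bins = {E: 0 for E in combinations(range(1, n), k)
        if all(e <= a for e, a in zip(E, anchor))}      # the nine sets in E

for T in ssyt(shape, n):
    col = [row[0] for row in T]                         # first column of T
    E, hi = [], n                                       # build E bottom-up
    for i in range(k - 1, -1, -1):
        e = min(anchor[i], hi - 1)
        if i < len(col):
            e = min(e, col[i])
        E.append(e); hi = e
    bins[tuple(reversed(E))] += 1

print(bins)
# {1,2}:6, {1,3}:6, {1,4}:6, {1,5}:12, {2,3}:5, {2,4}:5, {2,5}:10,
# {3,4}:4, {3,5}:16 -- matching the lists above (total 70)
```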
§.§ The orthogonal group

Let dim V = k, and let H = O(V). Let W = V^n, so that (𝔤, 𝔨) = (𝔰𝔭_2n, 𝔤𝔩_n). Assume that k ≤ n. Then 𝒫 is depicted as an n × n staircase consisting of elements (i,j) for 1 ≤ i ≤ j ≤ n. We index the rows 1, …, n from top to bottom, and the columns 1, …, n from left to right; see the diagrams in Example <ref> below. We again write ℱ to denote a family of k nonintersecting lattice paths in 𝒫, but with two substantial differences from the other two groups above:

*lattice paths start from the diagonal of 𝒫, and consist of steps to the north and to the east;
*not only do we allow the endpoints to vary along the right edge of 𝒫, but we also allow the starting points to vary along the entire diagonal of 𝒫.

There are now two types of corners in ℱ. First, writing "⅃" for a reflected letter L, we say that a ⅃-turn in a path is a point (i,j) such that both (i,j-1) and (i-1,j) also lie in the path. Second, we define a vertical starting point to be the starting point of a path whose first step is vertical; specifically, this is a starting point (i,i) such that (i-1,i) is also in the path. (See the diagrams in Example <ref> below; note that if one duplicates 𝒫 by reflecting it across the diagonal, then the vertical starting points become ⅃-turns.) There is no need for shadows, so we simply have

𝒞(ℱ) ≔ {⅃-turns and vertical starting points in ℱ}.

Since any choice of k points along the right edge of 𝒫 can be the endpoints for some family ℱ, we let ℰ be the set of all subsets E ⊆ [n] such that #E = k, and we again identify endpoints with their row indices in 𝒫. As mentioned above, in this paper we consider only those τ of the form (1^m), for 0 ≤ m ≤ k; hence each tableau T ∈ SSYT((1^m), n) is just a single column {a_1, …, a_m}. We therefore depict T by attaching a dot to the right of each of the points (a_i, n), for 1 ≤ i ≤ m. These can be viewed as m (very short) tentacles.

[Jellyfish for H = O(V)] Let τ = (1^m), and let T ∈ SSYT((1^m), n). We write E ⇉ T if E is the minimal element in ℰ (with respect to the lexicographical order) which contains T. For each E ∈ ℰ we define the bin

τ_E ≔ {T ∈ SSYT(τ, n) : E ⇉ T}.

If ℱ ⇉ E ⇉ T, then we say that ℱ ⇉ T is a jellyfish of shape (1^m).

Let dim V = k ≤ n, and let 0 ≤ m ≤ k. Then we have a Stanley decomposition

(ℂ[V^n] ⊗ ∧^m V)^O(V) = ⊕_ℱ ⇉ T ℂ[f_ij : (i,j) ∈ ℱ] f_𝒞(ℱ) ϕ_T,

where

*the direct sum ranges over all jellyfish of shape (1^m), as in Definition <ref>;
*the contractions f_ij are defined in (<ref>);
*the set 𝒞(ℱ) is defined in (<ref>);
*the map ϕ_T is defined in (<ref>).

Furthermore, we have the Hilbert–Poincaré series

P((ℂ[V^n] ⊗ ∧^m V)^O(V); t) = t^m ∑_E ∈ ℰ #τ_E P_E(t),

with τ_E and P_E(t) as in Definition <ref> and (<ref>), respectively.

The maximum size of a family ℱ is

d_max = k(2n - k + 1)/2,

and the families with this size are precisely those ℱ ⇉ {1, …, k}.

Let τ = (1^m), and let τ_max ≔ τ_{1, …, k}. We have #τ_max = dim U_τ = \binom{k}{m}.

Let T ∈ SSYT((1^m), n). Since {1, …, k} is the minimal element of ℰ, it follows from Definition <ref> that {1, …, k} ⇉ T if and only if T ⊆ {1, …, k}. Therefore τ_max = SSYT((1^m), k), which has cardinality \binom{k}{m}, the dimension of U_τ = ∧^m V.
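For completeness, the count in the lemma can be confirmed by enumeration (our own one-liner, with illustrative values of k and m):

```python
from itertools import combinations
from math import comb

k, m = 3, 2  # illustrative values
tau_max = list(combinations(range(1, k + 1), m))  # one-column tableaux in [k]
print(len(tau_max) == comb(k, m))  # True: #tau_max = C(k, m) = dim wedge^m V
```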
First we show an example of a family ⇉{1,2,3}, along with the three jellyfish ⇉ T:corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 4pt, inner sep=2pt] endpt=[circle,fill=black, minimum size = 5pt, inner sep=0pt] dot=[circle,fill=lightgray, minimum size = 3pt, inner sep=0pt] [scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (9,8) node[endpt] – ++(-4,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (8,6) – ++ (-1,0) – ++ (0,-2) node[corner] – ++ (-1,0);[scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (9,7) node[endpt] – ++(-3,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (8,6) – ++ (-1,0) – ++ (0,-2) node[corner] – ++ (-1,0);[scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (9,6) node[endpt] – ++ (-2,0) – ++ (0,-2) node[corner] – ++ (-1,0);Next, for each of the four remaining tentacles T = 4, …, 7, we show an example of a jellyfish ⇉ T:corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 4pt, inner sep=2pt] endpt=[circle,fill=black, minimum size = 5pt, inner sep=0pt] dot=[circle,fill=lightgray, minimum size = 3pt, inner sep=0pt] [scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (9,5) node[endpt] – ++ (-1,0) – ++ (0,-1) node[corner] – ++(-1,0) – ++ (0,-1) node[corner] ;[scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (9,4) node[endpt] – ++ (-2,0) – ++ (0,-1) node[corner] ;[scale=.25,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (9,3) node[endpt] – ++ (-1,0) – ++ (0,-1) node[corner] ;[scale=.25,baseline=(current 
bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[ultra thick] (8,8) – ++(-3,0) – ++(0,-1) node[corner] – ++ (-2,0);[ultra thick] (8,7) – ++(-2,0) – ++(0,-1) node[corner] – ++ (-1,0) – ++ (0,-1) node[corner] ;[ultra thick] (9,2) node[endpt] – ++ (-1,0);Every jellyfish ⇉ T of shape τ = (1) is such that the endpoints ofand the tentacle T form one of the seven configurations shown above. (Recall, however, that the starting points ofare allowed to vary along the diagonal of .) To compute the Hilbert–Poincaré series (<ref>), we wrote a simple code in Mathematica to count the corners of all lattice paths with given endpoints, which instantly returns the following expressions:P_{1,2,3}(t)= ( 1 + 10t^2 + 55t^4 + 220t^6 + 225t^8 + 126t^10 + 35t^12)/(1-t^2)^18P_{1,2,4}(t)= (1 + 11t^2 + 66t^4 + 251t^6 + 350t^8 + 251t^10 + 66t^12 + 11t^14 + t^16)/(1-t^2)^17P_{1,2,5}(t)= (1 + 12t^2 + 78t^4 + 245t^6 + 343t^8 + 234 t^10 + 36t^12 + 3t^14)/(1-t^2)^16P_{1,2,6}(t)= (1+ 13t^2 + 91t^4 + 210t^6 + 238t^8 + 130t^10 + 16t^12 + t^14)/(1-t^2)^15P_{1,2,7}(t)= (1+ 14t^2 + 105t^4 + 175t^6 + 119t^8 + 15t^10 + t^12)/(1-t^2)^14.Combining these as in (<ref>), we obtainP ([V^7] ⊗ V)^Ø_3; t ) = 7 t (1 + 10t^2 + 55t^4 + 108t^6 + 81t^8 + 30t^10 + 3t^12)/(1-t^2)^18.We conclude this example just as we did Example <ref>, by previewing Corollary <ref>. Since τ = (1), we see from Table <ref> thatλ = -(1,0,0,0,0,0,0) - 3/2 = (-3/2,-3/2,-3/2,-3/2,-3/2, -3/2, -5/2).The Bernstein degree of the simple _14-module L_ can be found by evaluating the numerator of (<ref>) at t=1, which yields 2016. But because the exponent in the denominator is strictly the largest (namely, 18) in P_{1,2,3}(t), the Bernstein degree can be obtained just by multiplying #τ_{1,2,3} = 3 by the numerator of P_{1,2,3} at t=1, which yields 3 · 672 = 2016. § LINEAR BASES AND ARC DIAGRAMS In the previous section, we gave Stanley decompositions for modules of covariants([W] ⊗U_τ)^H, where the Stanley spaces are indexed by the jellyfish of shapeτ. As a result, each module of covariants has an (infinite) basis in which each element is the product of someϕ_Twith some monomial in thef_ij's, and each basis element lies in the Stanley space indexed by a unique jellyfish. In this section, we will obtain each basis element from its unique jellyfish⇉Tby populating the points inwith additionalf_ij's, which amounts to multiplication by functions in the ring[f_ij : (i,j) ∈]. Then we will flatten this picture into an arc diagram, from which we can read off the weight (under the action of the Lie algebra) via the degree sequence.To adapt the following definition to the caseH = _k, simply replaceTby(T^+, T^-), and replace(i,j)by(i^*, j)everywhere:[Jellyfish diagram of a monomial] Let ⇉ T be a jellyfish. Let 𝐦 = ∏_i,j f_ij^d_ij be a monomial in [f_ij : (i, j) ∈], and consider the covariant ψ = 𝐦· f_()ϕ_T. The jellyfish diagram of ψ is obtained from the jellyfish ⇉ T by placing a circle labeled d_ij at each point (i, j) ∈ such that d_ij≠ 0. We make these circles sufficiently small so that one can still see the squares behind them, which still represent the elements of (). [Arc diagram of a monomial: H = _k]Let ψ be as in Definition <ref>. 
The arc diagram of ψ is obtained from its jellyfish diagram by reflecting the right edge of (along with its tentacles) across the 135-degree axis, and then joining it to the left side of the bottom edge of . This yields vertices 1^*, …, p^*, 1, …, q arranged from left to right. For each (i^*, j) ∈ (), one draws an arc from vertex i^* to vertex j. Then for all i and j, one draws d_ij many arcs from vertex i^* to vertex j. On each half (starred and unstarred) of the arc diagram, each horizontal cross section of the tentacles is viewed as a hyperedge connecting the vertices directly above its dots.

[Arc diagram of a monomial: H = _2k or Ø_k] Let ψ be as in Definition <ref>. The arc diagram of ψ is obtained from its jellyfish diagram by reflecting the right edge of (along with its attached tentacles) across the 135-degree axis. This yields vertices 1, …, n arranged from left to right. For each (i,j) ∈ (), one draws an arc from vertex i to vertex j. Then for all i and j, one draws d_ij many arcs from vertex i to vertex j. Each horizontal cross section of the tentacles is viewed as a hyperedge connecting the vertices directly above its dots.

As usual, the degree sequence of an arc diagram is the sequence (deg 1^*, …, deg p^* | deg 1, …, deg q) for H = _k, or (deg 1, …, deg n) for H = _2k or Ø_k, where deg i is the number of edges (including hyperedges) which are incident to vertex i. The following proposition, whose proof we defer until Section <ref>, allows us to read off the weight (under the action of ) from the degree sequence of an arc diagram.

Let (H, ) be a reductive dual pair in Table <ref>. Let ψ be a monomial covariant as in Definition <ref>. Then ψ is a weight vector under the action of , whose weight wt_(ψ) is obtained from the degree sequence of the arc diagram of ψ as follows:

wt_(ψ) = (-deg 1^*, …, -deg p^* | deg 1, …, deg q) - (k^p | 0^q), (H, ) = (_k, _p+q),
(-deg 1, …, -deg n) - (k^n), (H, ) = (_2k, _2n),
(-deg 1, …, -deg n) - ((k/2)^n), (H, ) = (Ø_k, _2n).

Let H = _k, where k = 3, p=5, q=6, and τ = (3, -3, -4). On the left-hand side below, we show the jellyfish diagram of a basis element ψ, lying in the Stanley space indexed by the jellyfish ⇉ (T^+, T^-). The numbers along the paths are the exponents d_ij in Definition <ref>, and the squares (as usual) depict the corners of .
On the right-hand side, we show the arc diagram of ψ, with the hyperedges highlighted below the vertices, and with wt_(ψ) computed using Proposition <ref>: corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 8pt, inner sep=2pt,text=white, font=] newcircle=[circle,draw=black,thin,fill=red!70!gray, minimum size = 6pt, inner sep=1pt,text=white, font=] endpt=[circle,fill=black, text=white, font=, minimum size = 5pt, inner sep=0pt] highlight=[rounded corners,line width=.8em,green!50!blue,nearly transparent,cap=round] dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] [scale=.35,baseline=(current bounding box.center)][thick, lightgray] (1,1) grid (6,5); in 1,2,...,6 in 1,2,...,5 [dot] at (,) ; at (0,5) 1^*;at (0,4) 2^*;at (0,3) 3^*;at (0,2) 4^*;at (0,1) 5^*;at (1,6) 1;at (2,6) 2;at (3,6) 3;at (4,6) 4;at (5,6) 5;at (6,6) 6;[ultra thick] (1,5) – ++(5,0) / in 1/0,1/-1,1/0,1/-1– ++(,) node[endpt];[ultra thick] (1,4) – ++(4,0) – ++(0,-1) node[corner]– ++(1,0) – ++(0,-1) / in 1/0,1/-1,1/0– ++(,) node[endpt];[ultra thick] (1,3) – ++(2,0) – ++(0,-1) node[corner]1– ++(1,0) – ++(0,-1) / in 0/-1,1/-1,0/-1– ++(,) node[endpt]; at (3,5) [newcircle] 1;at (1,4) [newcircle] 2;at (2,4) [newcircle] 1;at (5,4) [newcircle] 1;at (6,2) [newcircle] 3;at (3,3) [newcircle] 2;[below=5pt of current bounding box.south,anchor=north, align=center ]ψ = f_13 f_21^2 f_22 f_25 f_33^2 f_43 f_46^3_𝐦∈[f_ij : (i^*,j) ∈]f_35f_43_f_()ϕ__(T^+,T^-) (T^+,T^-) = (455,, 1223,455);[-,auto, thick,plain node/.style=minimum size=12pt,circle,draw,font=, fill = gray!30, inner sep=0pt,painted hypernode/.style=minimum size=4pt,inner sep=0pt,circle,draw,fill=black,bend left = 60, scale=.5,baseline=(current bounding box.center)][highlight] (-4,-1) – (-1,-1); [highlight] (-3,-2) – (0,-2); [highlight] (-3,-3) – (0,-3); [highlight] (-2,-4) – (-2,-4); [highlight] (5,-1) – (5,-1); [highlight] (6,-2) – (6,-2); [highlight] (6,-3) – (6,-3);[ultra thick] (-4,0) / in 0/-1,1/-1,0/-1,1/-1– ++(,) node[painted hypernode]; [ultra thick] (-1,0) / in 0/-1,1/-1,0/-1– ++(,) node[painted hypernode]; [ultra thick] (5,0) / in 0/-1,1/-1,0/-1– ++(,) node[painted hypernode];[plain node] (5*) at (0,0) 5*; [plain node] (4*) at (-1,0) 4*; [plain node] (3*) at (-2,0) 3*; [plain node] (2*) at (-3,0) 2*; [plain node] (1*) at (-4,0) 1*;(divider) at (1,0) |; [plain node] (1) at (2,0) 1; [plain node] (2) at (3,0) 2; [plain node] (3) at (4,0) 3; [plain node] (4) at (5,0) 4; [plain node] (5) at (6,0) 5; [plain node] (6) at (7,0) 6; [bend left=70] (1*) to (3);[bend left = 50] (2*) to (1);[bend left=40] (2*) to (1);[bend left = 55] (2*) to (2);(2*) to (5);[bend left = 70] (3*) to (5);(3*) to (3);[bend left=65] (3*) to (3);[bend left = 50] (4*) to (3);[bend left = 45] (4*) to (3);(4*) to (6);[bend left=65] (4*) to (6);[bend left=70] (4*) to (6);[below=-10pt of current bounding box.south,anchor=north,align=center ]wt_(ψ) = -(-2,-6,-4,-6,-2 | 2,1,5,1,4,3) -(-3,-3,-3,-3,-3 | 0,0,0,0,0,0) -(-5,-9,-7,-9,-5 | 2,1,5,1,4,3);We give another example of a jellyfish diagram and arc diagram, this time where H = _2k, with k = 3 and n = 8, and τ = (3,3,2):corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 7pt, inner sep=2pt,text=white, font=] newcircle=[circle,draw=black,thin,fill=red!70!gray, minimum size = 6pt, inner sep=1pt,text=white, font=] endpt=[circle,fill=black, text=white, font=, minimum size = 5pt, inner sep=0pt] shadow=[rounded corners,line width=.6em,red!50!black,opacity=.2,cap=round] highlight=[rounded corners,line width=.8em,green!50!blue,nearly 
transparent,cap=round] dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] [scale=.35,baseline=(current bounding box.center)][very thick, lightgray] (2,8) – (8,8) – (8,2) (3,8) – (3,7) – (8,7) (4,8) – (4,6) – (8,6) (5,8) – (5,5) – (8,5) (6,8) – (6,4) – (8,4) (7,8) – (7,3) – (8,3); in 2,...,8 in ,...,8 [dot] at (10-,) ;[shadow] (4,8) – ++(-1,-1); [shadow] (6,8) – ++(-2,-2); at (0,8) 1;at (0,7) 2;at (0,6) 3;at (0,5) 4;at (0,4) 5;at (0,3) 6;at (0,2) 7;at (0,1) 8;at (1,9.5) 1;at (2,9.5) 2;at (3,9.5) 3;at (4,9.5) 4;at (5,9.5) 5;at (6,9.5) 6;at (7,9.5) 7;at (8,9.5) 8;[ultra thick] (2,8) – ++(1,0) – ++(0,-1) – ++(1,0) – ++(0,-1) – ++(1,0) – ++(0,-1) node[corner]– ++(1,0) – ++(0,-1) node[corner]– ++(2,0) / in 1/0,1/-2– ++(,) node[endpt];[ultra thick] (4,8) – ++(1,0) – ++(0,-1) – ++(1,0) – ++(0,-1) node[corner]– ++(2,0) – ++(0,-1)/ in 1/0,1/0,1/-1– ++(,) node[endpt];[ultra thick] (6,8) – ++(1,0) – ++(0,-1) node[corner] – ++(1,0) / in 1/0,1/-1,1/0– ++(,) node[endpt]; at (2,8) [newcircle] 1;at (4,8) [newcircle] 2;at (7,7) [corner] 2;at (4,6) [newcircle] 1;at (6,6) [corner] ;at (5,5) [corner] ;at (6,4) [corner] ;at (8,4) [newcircle] 2;[below=5pt of current bounding box.south,anchor=north, align=center ]ψ = f_12 f_14^2 f_27^2 f_34 f_58^2_𝐦∈[f_ij : (i,j) ∈]f_27 f_36 f_45 f_56_f_()φ__T T = 233,445,57; [-,auto, thick,plain node/.style=minimum size=12pt,circle,draw,font=, fill = gray!30, inner sep=0pt,painted hypernode/.style=minimum size=4pt,inner sep=0pt,circle,draw,fill=black,bend right = 70, scale=.55,baseline=(current bounding box.center)][highlight] (-3,-.75) – ++(-3,0); [highlight] (-1,-1.5) – ++(-4,0); [highlight] (-3,-2.25) – ++(-2,0);[ultra thick] (-6,0) / in 0/-.75,1/-.75,0/-.75– ++(,) node[painted hypernode]; [ultra thick] (-4,0) / in 0/-.75,0/-.75,1/-.75– ++(,) node[painted hypernode]; [ultra thick] (-3,0) / in 0/-.75,2/-.75– ++(,) node[painted hypernode];[plain node] (8) at (0,0) 8; [plain node] (7) at (-1,0) 7; [plain node] (6) at (-2,0) 6; [plain node] (5) at (-3,0) 5; [plain node] (4) at (-4,0) 4; [plain node] (3) at (-5,0) 3; [plain node] (2) at (-6,0) 2; [plain node] (1) at (-7,0) 1; [bend right = 50] (2) to (1);[bend right = 60] (4) to (1);[bend right = 70] (4) to (1);[bend right = 60] (7) to (2);[bend right = 70] (7) to (2);[bend right = 80] (7) to (2);[bend right = 50] (4) to (3);[bend right = 60] (6) to (3);[bend right = 50] (5) to (4);[bend right = 50] (6) to (5);[bend right = 70] (8) to (5);[bend right = 60] (8) to (5);[below=5pt of current bounding box.south,anchor=north,align=center ]wt_(ψ) = -(3,5,4,6,6,2,4,2) -(3,3,3,3,3,3,3,3) -(6,8,7,9,9,5,7,5);§ SPECIAL CASES AND APPLICATIONS§.§ Rings of invariants for all five classical groups The ring of invariants[W]^His the module of covariants([W] ⊗U_τ)^Hin the special case whereτ= 0. Therefore, forH = _k,_2k, andØ_k, we obtain a Stanley decomposition and Hilbert–Poincaré series for[W]^Hby specializing our main results to theτ= 0case. By extending our methods above, we are also able to obtain similar results whereHis the special linear group_kor the special orthogonal group_k.§.§.§ The general linear, symplectic, and orthogonal groupsLet H = _k, _2k, or Ø_k. If k ≥min{p,q}, ⌊ n/2 ⌋, or n, respectively, then [W]^H is the polynomial ring [f_ij : (i,j) ∈]. 
Otherwise, we have a Stanley decomposition

[W]^H = ⊕_⇉ E_max [f_ij : (i,j) ∈] f_(),

where

E_max = {q-k+1, q-k+2, …, q}, H = _k,
{n-2k+1, n-2k+3, …, n-1}, H = _2k,
{1, …, k}, H = Ø_k.

Furthermore, we have the Hilbert–Poincaré series

P([W]^H; t) = P_E_max(t) = ∑_⇉ E_max (t^2)^#()/(1-t^2)^d_max,

where d_max is given in (<ref>), (<ref>), and (<ref>), for _k, _2k, and Ø_k, respectively.

When τ = 0, the only tableau (or tableau pair) of shape τ is the empty tableau ∅. It is clear from Definitions <ref>, <ref>, and <ref> that E_max is the unique endpoint set E such that E ⇉ ∅. This proves the decomposition (<ref>), which yields the Hilbert–Poincaré series (<ref>).

There is another way to view the families ⇉ E_max, which lines up with much of the work on lattice paths in the 1990s; see <cit.>. In particular, we view the ambient diagram as a (rotated) Hasse diagram where the minimal element is in the upper-left (for H = _k or _2k) or upper-right (for H = Ø_k). The kth order complex Δ_k = Δ_k() is the abstract simplicial complex whose faces are the subsets of the diagram having width at most k (where width has the usual sense, namely the size of the largest antichain). Then the families ⇉ E_max are the facets of Δ_k. The complex Δ_k is pure, meaning that all its facets have the same cardinality, which we have called d_max above. Moreover, there is a shelling of Δ_k in which the restriction of each facet is precisely the set (). This shelling then induces the Stanley decomposition (<ref>). Note that our shadows (which, for the invariants, appear only for H = _2k) guarantee that there exists a unique facet with empty restriction (i.e., zero corners); this facet is first in the shelling order. We will adapt these ideas in Sections <ref> and <ref>.

§.§.§ The special linear group

For SL_k ⊂ GL_k acting on W = V^*p ⊕ V^q, the SL_k-invariants are precisely the GL_k-semiinvariants, namely, those functions f ∈ [W] such that f(hw) = det(h)^m f(w) for all h ∈ GL_k, all w ∈ W, and for some m ∈ ℤ. In other words, an SL_k-invariant is nothing other than a GL_k-equivariant polynomial function from W into the one-dimensional GL_k-module U_(m^k), for some integer m. Therefore the ring of SL_k-invariants is the direct sum, ranging over all m ∈ ℤ, of these modules of GL(V)-covariants, which we decompose via the jellyfish of Theorem <ref>:

[V^*p ⊕ V^q]^SL(V) = ⊕_m ∈ ℤ ([V^*p ⊕ V^q] ⊗ U_(m^k))^GL(V) = ⊕_m ∈ ℤ ( ⊕_⇉ T : T has shape (m^k) [f_ij : (i^*,j) ∈] f_() ϕ_T ),

where (strictly speaking) one has T = (T, ∅) with T ∈ ((m^k), q) for m ≥ 0, and T = (∅, T) with T ∈ ((m^k), p) for m < 0.

A Stanley decomposition, unlike (<ref>), has only finitely many components; to obtain such a decomposition from (<ref>), we must “collapse” the infinitely many rectangular tableaux T so that each T is associated to a unique maximal chain 𝐂 of tableau columns. This is entirely analogous to the way that each monomial in the f_ij's was associated to a unique family.

Without loss of generality, assume that m ≥ 0, so that each T ∈ ((m^k), q). Given E ∈ such that E ∩ [p]^* = ∅, let 𝒞_≥ E denote the set of all length-k columns in the alphabet [q] which can lie to the right of E in a semistandard tableau (equivalently, which are ≥ E with respect to the product order). If E ⇉ T, then T is just a multichain in 𝒞_≥ E with minimal element E. Now let 𝐂 be a maximal chain in 𝒞_≥ E, which means

𝐂 = { E ⋖ ⋯ ⋖ E_max},

where the covering relation A ⋖ B means that B is obtained from A by increasing a single entry by 1.
To be even more specific, we write

A i⋖ B ⟺ B is obtained from A by adding 1 to the ith entry.

Then we define (𝐂) to be the set of all C ∈ 𝐂 such that 𝐂 contains the pattern

A i⋖ C j⋖ B, i < j.

Note that (𝐂) is itself a tableau. A straightforward calculation shows that any maximal chain 𝐂 in 𝒞_≥ E has the same size, namely

c_E = (size of any maximal chain in 𝒞_≥ E) = k/2(2q - k - 1) - ∑_e ∈ E e.

(The discussion above can be rephrased as follows: the order complex of 𝒞_≥ E is a pure shellable complex whose facets are the maximal chains 𝐂, and we have a shelling in which (𝐂) is the restriction of 𝐂.) As a result, for each E ∈ such that E ∩ [p]^* = ∅, we have

⊕_m > 0 ( ⊕_T: E ⇉ T, T has shape (m^k) ϕ_T ) = ϕ_E ⊕_max. chains 𝐂 in 𝒞_≥ E [ϕ_A : A ∈ 𝐂] ϕ_(𝐂),

where the extra factor ϕ_E guarantees that every T in the expansion is nonempty (since each T here must contain E as its first column). Summing over all E's such that E ∩ [p]^* = ∅, with a view toward substituting in (<ref>), we can consolidate the triple direct sum via

⊕_E ⊕_⇉ E ⊕_𝐂 max. chain in 𝒞_≥ E = ⊕_⇉ 𝐂,

so that the objects indexing our Stanley decomposition are the jellyfish ⇉ 𝐂 with “maximal” tentacles, in the sense that the k tentacles (which all have the same length) end as far to the right as possible, and each horizontal cross section of the tentacles is obtained from the one above it by moving exactly one of the dots to the right by one unit.

The identical construction can be used to substitute for the m < 0 summands in (<ref>); we decorate the notation with stars in order to distinguish from the m > 0 case. Hence we write 𝐂^* for a maximal chain in 𝒞_≥ E^*, where E^* ∈ with E^* ∩ [q] = ∅. In this case, each A^* ∈ 𝐂^* is really the one-column rational tableau (∅, A). We again define c_E^* to be the maximal chain size just as in (<ref>), with q replaced by p. Note that the m = 0 summand in (<ref>) is just the ring of GL_k-invariants from Theorem <ref>.

Finally, given E ∈ such that E ∩ [p]^* = ∅, define the rational expression

Q_E(t) = ∑_𝐂 max. in 𝒞_≥ E (t^k)^#(𝐂)/(1-t^k)^c_E.

For E^* ∈ such that E^* ∩ [q] = ∅, define Q_E^*(t) identically by adding stars everywhere to E and 𝐂.

Let k = dim V ≤ min{p,q}. We have a Stanley decomposition

[V^*p ⊕ V^q]^SL(V) = ⊕_⇉ E_max [f_ij : (i^*,j) ∈] f_() ⊕ ⊕_⇉ 𝐂 ϕ_E [f_ij, ϕ_A : (i^*,j) ∈, A ∈ 𝐂] f_() ϕ_(𝐂) ⊕ ⊕_⇉ 𝐂^* ϕ_E^* [f_ij, ϕ_A^* : (i^*,j) ∈, A^* ∈ 𝐂^*] f_() ϕ_(𝐂^*),

where the last two sums range over all jellyfish with maximal tentacles, as described above: in particular, 𝐂 is a maximal chain in 𝒞_≥ E, where E is the endpoint set such that ⇉ E (and likewise for 𝐂^* and E^*). Furthermore, we have the Hilbert–Poincaré series

P([V^*p ⊕ V^q]^SL(V); t) = P_E_max(t) + t^k ∑_E ∈: E ∩ [p]^* = ∅ P_E(t) Q_E(t) + t^k ∑_E^* ∈: E^* ∩ [q] = ∅ P_E^*(t) Q_E^*(t).

Let k=3, p=8, and q=10. Below is an example of a jellyfish ⇉ 𝐂^* indexing one of the Stanley spaces in the third direct sum in Corollary <ref>. The tentacles represent a maximal chain 𝐂^* in 𝒞_≥ E^*, where E^* = {2^*, 4^*, 5^*}.
As before, the red squares mark the elements of (), and now the blue rectangles enclose the elements of (𝐂^*): corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 5pt, inner sep=0pt] endpt=[circle,fill=black, minimum size = 5pt, inner sep=0pt] dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] [scale=.35] [thick, lightgray] (1,1) grid (10,8); in 1,2,...,10 in 1,2,...,8 [dot] at (,) ; at (0,8) 1^*;at (0,7) 2^*;at (0,6) 3^*;at (0,5) 4^*;at (0,4) 5^*;at (0,3) 6^*;at (0,2) 7^*;at (0,1) 8^*;at (1,9) 1;at (2,9) 2;at (3,9) 3;at (4,9) 4;at (5,9) 5;at (6,9) 6;at (7,9) 7;at (8,9) 8;at (9,9) 9;at (10,9) 10;[ultra thick] (1,8) – ++(9,0) – ++(0,-1) / in 1/0,1/0,1/0,1/0,1/-1,1/-1,1/0,1/0,1/-1,1/0,1/-1– ++(,) node[endpt];;[ultra thick] (1,7) – ++(5,0) – ++(0,-1) node[corner]– ++(2,0) – ++(0,-1) – ++(2,0) / in 1/0,1/0,1/-1,1/0,1/0,1/0,1/-1,1/0,1/0,1/-1,1/0– ++(,) node[endpt];;[ultra thick] (1,6) – ++(3,0) – ++(0,-1) node[corner]– ++(2,0) – ++(0,-1) node[corner]– ++(4,0) / in 1/0,1/-1,1/0,1/-1,1/0,1/0,1/0,1/-1,1/0,1/0,1/0– ++(,) node[endpt]; (A) at (13,7) ;(B) at (13,3) ;(C) at (17,2) ;(D) at (17,5) ;(E) at (16,2) ;(F) at (16,5) ;(G) at (19,1) ;(H) at (19,4) ; [draw=blue!70!green, very thick, fit=(A.center) (B.center)] ; [draw=blue!70!green, very thick, fit=(C.center) (D.center)] ; [draw=blue!70!green, very thick, fit=(E.center) (F.center)] ; [draw=blue!70!green, very thick, fit=(G.center) (H.center)] ;

One can verify from the diagram that the tentacles form a maximal chain 𝐂^*, because with each step to the right, exactly one tentacle descends by exactly one unit, until all k=3 tentacles end as far down as possible. To visualize (<ref>), observe that an element of (𝐂^*) is a column such that the incoming descending tentacle lies above the outgoing descending tentacle. In terms of the Hilbert–Poincaré series (<ref>) of the SL_3-invariants, the family contributes (t^2)^3 / (1-t^2)^35 to P_E^*(t), since #() = 3 and d_E^* = # = 35. Meanwhile, the maximal chain 𝐂^* contributes (t^3)^4 / (1-t^3)^11 to Q_E^*(t), since #(𝐂^*) = 4 and c_E^* = #𝐂^* = 11.

Let k=3, p=3, and q=4. Implementing the formula (<ref>) leads to the following Hilbert–Poincaré series of the invariant ring [V^*3 ⊕ V^4]^SL_3:

P_E_max(t) + t^3 ( P_{2,3,4}(t) · Q_{2,3,4}(t) + P_{1,3,4}(t) · Q_{1,3,4}(t) + P_{1,2,4}(t) · Q_{1,2,4}(t) + P_{1,2,3}(t) · Q_{1,2,3}(t) ) + t^3 ( P_{1^*, 2^*, 3^*}(t) · Q_{1^*, 2^*, 3^*}(t) )

= 1/(1-t^2)^12 + t^3 ( 1/(1-t^2)^12 · 1/(1-t^3) + (1+t^2+t^4)/(1-t^2)^11 · 1/(1-t^3)^2 + (1 + 2t^2)/(1-t^2)^10 · 1/(1-t^3)^3 + 1/(1-t^2)^9 · 1/(1-t^3)^4 ) + t^3 ( 1/(1-t^2)^12 · 1/(1-t^3) )

= (1 + 3t^2 + 2t^3 + 6t^4 + 3t^5 + 8t^6 + 3t^7 + 6t^8 + 2t^9 + 3t^10 + t^12) / ( (1 - t^2)^9 (1 - t^3)^3 (1 - t^6) ).

This reduced form is somewhat surprising, since the denominator contains the factor (1-t^6) in addition to the expected factors (1-t^2) and (1-t^3). Furthermore, the numerator is palindromic but not unimodal. (We verified this reduced Hilbert–Poincaré series using Macaulay2.)
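As an independent check of this reduction, the following SymPy sketch (our own addition, assuming SymPy is available; the paper's verification used Macaulay2) confirms the displayed identity:

```python
# Check the reduced Hilbert-Poincare series of the SL_3-invariant ring
# above (k = 3, p = 3, q = 4): a minimal sketch of ours, assuming SymPy.
import sympy as sp

t = sp.symbols('t')

unreduced = (
    1/(1 - t**2)**12
    + t**3 * (
        1/(1 - t**2)**12 * 1/(1 - t**3)
        + (1 + t**2 + t**4)/(1 - t**2)**11 * 1/(1 - t**3)**2
        + (1 + 2*t**2)/(1 - t**2)**10 * 1/(1 - t**3)**3
        + 1/(1 - t**2)**9 * 1/(1 - t**3)**4
    )
    + t**3 * (1/(1 - t**2)**12 * 1/(1 - t**3))
)

numer = (1 + 3*t**2 + 2*t**3 + 6*t**4 + 3*t**5 + 8*t**6 + 3*t**7
         + 6*t**8 + 2*t**9 + 3*t**10 + t**12)
reduced = numer / ((1 - t**2)**9 * (1 - t**3)**3 * (1 - t**6))

# The difference cancels to zero exactly, confirming the reduced form.
assert sp.cancel(unreduced - reduced) == 0
```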
§.§.§ The special orthogonal group

Since the ring of SO(V)-invariants is a direct sum of the Ø(V)-invariants and the Ø(V)-covariants of type ^k V (i.e., the Ø(V)-semiinvariants), its Stanley decomposition and Hilbert–Poincaré series are obtained immediately from our results above. In writing down the Ø(V)-covariants of type ^k V, we may as well use the more explicit notation det_E = ϕ_E: (v_1, …, v_n) ↦ det(v_e_1, …, v_e_k), for each endpoint set E = {e_1, …, e_k}.

Let dim V = k ≤ n. Then we have a Stanley decomposition

[V^n]^SO(V) = ⊕_⇉{1, …, k} [f_ij : (i,j) ∈] f_() ⊕ ⊕_⇉ E [f_ij : (i,j) ∈] f_() det_E,

where the direct sums range over the Ø_k-jellyfish in Definition <ref>, and () is defined as in (<ref>). Furthermore, we have the Hilbert–Poincaré series

P([V^n]^SO(V); t) = P_{1, …, k}(t) + t^k ∑_E ∈ P_E(t).

§.§ Bernstein degree

The jellyfish indexing our Stanley decompositions, expressed in Theorems <ref>, <ref>, and <ref>, yield an elementary proof of the main result in <cit.>, which relates the Bernstein degree of the simple (𝔤, K)-module L_λ to the degree of the orbit closure _k (defined in (<ref>)) as an algebraic variety.

Assume one of the three dual pair settings in Table <ref>, and if H = Ø_k then suppose τ = (1^m) for some 0 ≤ m ≤ k. Then the Bernstein degree of L_λ is given by

Deg(L_λ) = dim U_τ · deg _k.

The Bernstein degree of L_λ is obtained by evaluating the numerator of its reduced Hilbert–Poincaré series at t=1. By our main theorems, this reduced Hilbert–Poincaré series of L_λ is obtained by finding a common denominator and summing the rational expressions in (<ref>), (<ref>), or (<ref>). In all three cases, the rational expression P_E_max(t) has a strictly larger exponent in the denominator, namely d_max, than any other P_E(t). (For H = _k, this maximal exponent appears in all P_E(t) such that d_E = d_max.) Hence the numerator of the reduced Hilbert–Poincaré series of L_λ equals

#τ_max · (numerator of P_E_max(t)) + (terms divisible by 1-t^2).

Evaluating this numerator at t=1, we obtain

Deg(L_λ) = #τ_max · (numerator of P_E_max(t))|_t=1.

By Lemma <ref> or <ref> or <ref>, we have #τ_max = dim U_τ. By Corollary <ref>, P_E_max(t) is the Hilbert–Poincaré series of [W]^H, which is isomorphic to [_k] by (<ref>). Therefore, evaluating the numerator at t=1 gives the degree of _k, and the result follows.

§ WALLACH REPRESENTATIONS OF TYPE ADE

§.§ The Wallach representations

Assume the general Hermitian symmetric setting described in Section <ref>, where r is the rank of (𝔤, 𝔨). Recall that two roots α, β are called strongly orthogonal if neither α+β nor α-β is a root. We adopt Harish-Chandra's maximal set {γ_1, …, γ_r} of strongly orthogonal roots in Φ(^+), which are defined recursively as follows. Let γ_1 be the lowest root in Φ(^+). Then for i > 1, let γ_i be the lowest root in Φ(^+) that is strongly orthogonal to each of γ_1, …, γ_i-1. For any partition n_1 ≥ ⋯ ≥ n_r ≥ 0, the weight -∑_i n_i γ_i is 𝔨-dominant and integral. We have the following multiplicity-free decomposition due to Schmid <cit.>:

[^+] ≅ ⊕_n_1 ≥ ⋯ ≥ n_r ≥ 0 F_-n_1γ_1 - ⋯ - n_rγ_r,

where F_μ denotes the simple 𝔨-module with highest weight μ. In particular, we have ^- ≅ F_-γ_1 as a 𝔨-module. More generally, for each 0 ≤ k ≤ r, the coordinate ring of _k, described above in (<ref>), can be viewed as a truncation of the Schmid decomposition <cit.>:

[_k] ≅ ⊕_n_1 ≥ ⋯ ≥ n_k ≥ 0 F_-n_1γ_1 - ⋯ - n_kγ_k.

Let c = 1/2(ρ, γ_2^∨ - γ_1^∨), where as usual ρ is half the sum of the positive roots of 𝔤, and (,) is the nondegenerate bilinear form on ^* induced from the Killing form of 𝔤. Let ζ be the unique fundamental weight of 𝔤 that is orthogonal to the roots of 𝔨. Then for any Hermitian symmetric pair (𝔤, 𝔨), and k ≤ r, the kth Wallach representation is the simple 𝔤-module L_-kcζ. The determinantal variety _k is the associated variety of the kth Wallach representation. When 𝔤 arises in one of the three dual pairs (H, 𝔤) in Table <ref>, the weight λ = -kcζ is the image of τ = 0 under the injective map τ ↦ λ in Theorem <ref>.
In this case,U_τis the trivial representation ofH, and it follows from (<ref>) that thekth Wallach representation is isomorphic to[W]^Has a(,K)-module. Hence, in the dual pair setting, we have already seen (Corollary <ref>) how to interpret the Hilbert–Poincaré series ofL_-kcζin terms of lattice paths. These Hilbert series were previously computed in EW,EnrightHunziker04 by using Enright–Shelton reduction to compute generalized BGG resolutions. In this section, we extend our lattice path interpretation beyond the dual pair setting, to reinterpret the Hilbert–Poincaré series of the Wallach representations for all Hermitian symmetric pairs of simply laced type (i.e., whereis one of the Cartan types,, or). §.§ Lattice paths in Φ(^+) In the dual pair setting, the two pairs(H,)withsimply laced are(_k, _p+q)and(_2k, _2n). We observe that in these two cases, we have an isomorphism of posets⟶Φ(^+), given explicitly by(i^*,j) ⟼(ϵ_i| - ϵ_j), (H, ) = (_k, _p+q),(i,j) ⟼ϵ_i + ϵ_j, (H, ) = (_2k, _2n).Hence in our jellyfish diagrams in Sections <ref>–<ref>, the upper-left corner ofcorresponds to the minimal noncompact rootγ_1 ∈Φ(^+), while the lower-right corner corresponds to the highest root inΦ(^+). Revisiting Corollary <ref> and Remark <ref> in this light, one can view the families⇉E_maxas the maximal width-ksubsets ofΦ(^+).It is a general fact for Hermitian symmetric pairs that the Hasse diagram ofΦ(^+)is a planar distributive lattice, and therefore its order complexes are shellable <cit.>*Thm. 7.1. Therefore, guided by Remark <ref>, we are able to generalize our previous construction in the formP(L_-kcζ;t) = ∑_ t^#()/(1-t)^d,where the sum ranges over all facetsof thekth order complex ofΦ(^+), anddis the common size of all facets. (Since we are now outside the dual pair setting, there is no reason to make the substitutiont ↦t^2which arose from the degree-2 mapπ^*: z_ij ↦f_ij.) Below, we apply (<ref>) to obtain a lattice path interpretation of the Hilbert–Poincaré series of the Wallach representations for the Hermitian symmetric pairs of simply laced type. In each case, we rotate the Hasse diagram ofΦ(^+)so that the minimal elementγ_1is in the upper-left. The Hilbert–Poincaré series below were written down in <cit.>*6.6–6.8, to which we refer the reader for further details; see also <cit.>. §.§ First Wallach representation of (_n, _n-1) For(,)̨ = (_n, _n-1),the Hasse diagram ofΦ(^+)has2(n-1)vertices, arranged as a diamond that connects two paths of lengthn-2. There is only one Wallach representationL_, corresponding tok=1, where= -(n+2)ω_1. Sincek=1, the facetsare just maximal chains (i.e., lattice paths) inΦ(^+). 
The corners are still the L-turns.There are only two facets_0and_1:dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] corner=[rectangle,fill=red!70!gray, draw=black, minimum size = 5pt, inner sep=0pt] [scale=.3, baseline] [lightgray, thick] (5,1) – (1,1) – (1,5) (1,2) – (2,2) – (2,1); at (1,1) [dot] ;at (1,2) [dot] ;at (1,3) [dot] ;at (1,4) [dot] ;at (1,5) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (2,2) [dot] ;[left=5pt of current bounding box.west,anchor=east ]Φ(^+) =; [scale=.3, baseline] [lightgray, thick] (5,1) – (1,1) – (1,5) (1,2) – (2,2) – (2,1); at (1,1) [dot] ;at (1,2) [dot] ;at (1,3) [dot] ;at (1,4) [dot] ;at (1,5) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (2,2) [dot] ;[ultra thick] (1,5) – (1,2) – (2,2) – (2,1) – (5,1);[left=5pt of current bounding box.west,anchor=east ]_0 =; [scale=.3, baseline] [lightgray, thick] (5,1) – (1,1) – (1,5) (1,2) – (2,2) – (2,1); at (1,1) [dot] ;at (1,2) [dot] ;at (1,3) [dot] ;at (1,4) [dot] ;at (1,5) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (2,2) [dot] ;[ultra thick] (1,5) – (1,1) – (5,1); at (1,1) [corner] ;[left=5pt of current bounding box.west,anchor=east ]_1 =; Note that each facet has size2n-3, which is one less than the size ofΦ(^+). Then using (<ref>), we obtain the following Hilbert–Poincaré series, which coincides with <cit.>*Thm. 26:P(L_;t) = 1+t/(1-t)^2n-3.§.§ First Wallach representation of (_6, _5) For(,)̨ = (_6, _5),there is only one Wallach representationL_, corresponding tok=1, where= -3ζ. Sincek=1, the facetsare again just maximal lattice paths inΦ(^+). The corners are L-turns, and there is one permanent shadow (which ensures that the first facet in the shelling, in which we step east whenever possible, has zero corners):dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] corner=[rectangle,fill=red!70!gray, draw=black, minimum size = 5pt, inner sep=0pt] shadow=[rounded corners,line width=.7em,red!50!black,nearly transparent,cap=round] [scale=.3, baseline][lightgray, thick] (1,1) – (5,1) – (5,-1) – (6,-1) – (6,-4) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-2); at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (6,-4) [dot] ;[shadow] (5,-1) – ++(0,0);[left=0pt of current bounding box.west,anchor=east ]Φ(^+) =; [scale=.3, baseline][lightgray, thick] (1,1) – (5,1) – (5,-1) – (6,-1) – (6,-4) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-2); at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (6,-4) [dot] ;[shadow] (5,-1) – ++(0,0);[ultra thick] (1,1) – ++(2,0) – ++(0,-1) – ++(2,0) – ++(0,-2) – ++(1,0) – ++(0,-2); at (3,0) [corner] ;at (5,-2) [corner] ; [left=0pt of current bounding box.west,anchor=east ]Example of a facet:; Counting up the corners in the all twelve facets, we apply (<ref>) to obtain the following Hilbert–Poincaré series, which coincides with <cit.>*Thm. 28:P(L_;t) = 1+5t+5t^2+t^3/(1-t)^11.§.§ First Wallach representation of (_7, _6) For(, )̨ = (_7, _6)there are two Wallach representations. 
Fork=1, we have= -4ζ, and the facets are again lattice paths inΦ(^+):dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] corner=[rectangle,fill=red!70!gray, draw=black, minimum size = 5pt, inner sep=0pt] shadow=[rounded corners,line width=.7em,red!50!black,nearly transparent,cap=round] [scale=.3, baseline][lightgray, thick] (0,1) – (5,1) – (5,-1) – (6,-1) – (6,-3) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-3) (4,-2) – (4,-3) – (8,-3) – (8,-7) (6,-2) – (8,-2) – (8,-3) (7,-2) – (7,-4) – (8,-4); at (0,1) [dot] ;at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (4,-3) [dot] ;at (5,-3) [dot] ;at (7,-2) [dot] ;at (8,-2) [dot] ;at (7,-3) [dot] ;at (8,-3) [dot] ;at (7,-4) [dot] ;at (8,-4) [dot] ;at (8,-5) [dot] ;at (8,-6) [dot] ;at (8,-7) [dot] ;[shadow] (5,-1) – ++(0,0); [shadow] (6,-2) – ++(0,0);[left=0pt of current bounding box.west,anchor=east ]Φ(^+) =; [scale=.3, baseline][lightgray, thick] (0,1) – (5,1) – (5,-1) – (6,-1) – (6,-3) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-3) (4,-2) – (4,-3) – (8,-3) – (8,-7) (6,-2) – (8,-2) – (8,-3) (7,-2) – (7,-4) – (8,-4); at (0,1) [dot] ;at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (4,-3) [dot] ;at (5,-3) [dot] ;at (7,-2) [dot] ;at (8,-2) [dot] ;at (7,-3) [dot] ;at (8,-3) [dot] ;at (7,-4) [dot] ;at (8,-4) [dot] ;at (8,-5) [dot] ;at (8,-6) [dot] ;at (8,-7) [dot] ;[shadow] (5,-1) – ++(0,0); [shadow] (6,-2) – ++(0,0);[ultra thick] (0,1) – ++(4,0) – ++(0,-2) – ++(2,0) – ++(0,-1) – ++(1,0) – ++(0,-2) – ++(1,0) – ++(0,-3); at (4,-1) [corner] ;at (7,-4) [corner] ;[left=0pt of current bounding box.west,anchor=east ]Example of a facet:; This gives us78facets, all having size17. Counting the corners of each, we obtainP(L_; t) =1+10t+28t^2 +28t^3 +10t^4 +t^5/(1-t)^17,which agrees with <cit.>*Thm. 29. §.§ Second Wallach representation of (_7, _6) The posetΦ(^+)is the same as above; this time, fork=2, we have= -8ζ. Each facet is a family ofk=2maximal nonintersecting paths inΦ(^+). 
In fact, there are only three facets, each of size 26:dot=[circle,fill=lightgray, minimum size = 4pt, inner sep=0pt] corner=[rectangle,fill=red!70!gray, draw=black, minimum size = 5pt, inner sep=0pt] shadow=[rounded corners,line width=.7em,red!50!black,nearly transparent,cap=round] [scale=.3, baseline][lightgray, thick] (0,1) – (5,1) – (5,-1) – (6,-1) – (6,-3) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-3) (4,-2) – (4,-3) – (8,-3) – (8,-7) (6,-2) – (8,-2) – (8,-3) (7,-2) – (7,-4) – (8,-4); at (0,1) [dot] ;at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (4,-3) [dot] ;at (5,-3) [dot] ;at (7,-2) [dot] ;at (8,-2) [dot] ;at (7,-3) [dot] ;at (8,-3) [dot] ;at (7,-4) [dot] ;at (8,-4) [dot] ;at (8,-5) [dot] ;at (8,-6) [dot] ;at (8,-7) [dot] ;[shadow] (5,-1) – ++(0,0); [shadow] (6,-2) – ++(0,0);[ultra thick] (0,1) – ++(5,0) – ++(0,-2) – ++(1,0) – ++(0,-1) – ++(2,0) – ++(0,-5) (3,0) – ++(1,0) – ++(0,-2) – ++(1,0) – ++(0,-1) – ++(2,0) – ++(0,-1);[left=0pt of current bounding box.west,anchor=east ]_0 =; [scale=.3, baseline][lightgray, thick] (0,1) – (5,1) – (5,-1) – (6,-1) – (6,-3) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-3) (4,-2) – (4,-3) – (8,-3) – (8,-7) (6,-2) – (8,-2) – (8,-3) (7,-2) – (7,-4) – (8,-4); at (0,1) [dot] ;at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (4,-3) [dot] ;at (5,-3) [dot] ;at (7,-2) [dot] ;at (8,-2) [dot] ;at (7,-3) [dot] ;at (8,-3) [dot] ;at (7,-4) [dot] ;at (8,-4) [dot] ;at (8,-5) [dot] ;at (8,-6) [dot] ;at (8,-7) [dot] ;[shadow] (5,-1) – ++(0,0); [shadow] (6,-2) – ++(0,0);[ultra thick] (0,1) – ++(5,0) – ++(0,-2) – ++(1,0) – ++(0,-1) – ++(2,0) – ++(0,-5) (3,0) – ++(1,0) – ++(0,-3) – ++(3,0) – ++(0,-1); at (4,-3) [corner] ;[left=0pt of current bounding box.west,anchor=east ]_1 =; [scale=.3, baseline][lightgray, thick] (0,1) – (5,1) – (5,-1) – (6,-1) – (6,-3) (3,1) – (3,0) – (4,0) – (4,-2) – (6,-2) (4,1) – (4,0) – (5,0) (4,-1) – (5,-1) – (5,-3) (4,-2) – (4,-3) – (8,-3) – (8,-7) (6,-2) – (8,-2) – (8,-3) (7,-2) – (7,-4) – (8,-4); at (0,1) [dot] ;at (1,1) [dot] ;at (2,1) [dot] ;at (3,1) [dot] ;at (4,1) [dot] ;at (5,1) [dot] ;at (3,0) [dot] ;at (4,0) [dot] ;at (5,0) [dot] ;at (4,-1) [dot] ;at (5,-1) [dot] ;at (6,-1) [dot] ;at (4,-2) [dot] ;at (5,-2) [dot] ;at (6,-2) [dot] ;at (6,-3) [dot] ;at (4,-3) [dot] ;at (5,-3) [dot] ;at (7,-2) [dot] ;at (8,-2) [dot] ;at (7,-3) [dot] ;at (8,-3) [dot] ;at (7,-4) [dot] ;at (8,-4) [dot] ;at (8,-5) [dot] ;at (8,-6) [dot] ;at (8,-7) [dot] ;[shadow] (5,-1) – ++(0,0); [shadow] (6,-2) – ++(0,0);[ultra thick] (0,1) – ++(5,0) – ++(0,-3) – ++(3,0) – ++(0,-5) (3,0) – ++(1,0) – ++(0,-3) – ++(3,0) – ++(0,-1); at (4,-3) [corner] ;at (5,-2) [corner] ;[left=0pt of current bounding box.west,anchor=east ]_2 =; Hence the Hilbert–Poincaré series isP(L_; t) = 1+t+t^2/(1-t)^26,as given in <cit.>*Thm. 30.§ PROOFS OF MAIN RESULTS§.§ The ring of covariants Our first task is to prove Theorems <ref>, <ref>, and <ref>. The starting point of our proofs is the standard monomial theory developed in <cit.>. 
The language and notation in <cit.>, along with the general viewpoint on modules of covariants, differ from the present paper, and so we first take care to line up the two approaches before giving the main proofs.

In the following discussion, let H = _k or _2k. (The group Ø_k requires special treatment in Section <ref>.) Let N be the maximal unipotent subgroup of H defined in Sections <ref>–<ref>. Then [W]^N is called the ring of covariants. This ring effectively gathers all the modules of covariants into a single object, as follows. Taking the N-invariants in the Howe duality decomposition (<ref>), we have

[W]^N ≅ ⊕_τ∈Σ ([W] ⊗ U_τ)^H ⊗ (U^*_τ)^N ≅ ⊕_τ∈Σ ([W] ⊗ U_τ)^H ⊗ ℂ u^*_τ,

where u^*_τ is a highest weight vector in U^*_τ. Hence as a (𝔤, K)-module, we have

[W]^N = ⊕_τ∈Σ [W]^N_τ^* ≅ ⊕_τ∈Σ ([W] ⊗ U_τ)^H,

where [W]^N_τ^* is defined to contain precisely those functions in [W]^N which are weight vectors under the action of the maximal torus of H, having weight equal to the highest weight of U_τ^*. (In terms of highest weights, the distinction between U_τ and U_τ^* is relevant only when H = _k.) Explicitly, the isomorphism above is given by

Ψ: ([W] ⊗ U_τ)^H ⟶ [W]^N_τ^*, ϕ ⟼ u^*_τ ∘ ϕ,

where ϕ is viewed as a function ϕ: W ⟶ U_τ.

We start with the following sketch of our upcoming proofs. The main result of <cit.> is a standard monomial theory for the ring of covariants [W]^N. In brief, this means that [W]^N is a quotient of the polynomial algebra generated by the f_ij's and by certain functions _I[x_ij] and _J[y_ij] (to be defined below), where the defining ideal ℐ is generated by certain monomials. The monomials lying outside ℐ are called standard monomials, and they furnish a linear basis for [W]^N (after we identify them with their images in the quotient). The results in <cit.> give the minimal set of generators for ℐ, which we will reinterpret in terms of our jellyfish diagrams (where products of such functions are still depicted by tentacles). We will then show that the standard monomials — which are necessarily products of f_ij's and of the functions _I[x_ij] and _J[y_ij] — are precisely those monomials that lie inside some jellyfish. By restricting our attention to the component [W]^N_τ^*, we need consider only those jellyfish of shape τ; then by putting a shelling order on the families, we obtain the desired Stanley decomposition, thereby proving that each standard monomial lies inside a unique jellyfish of shape τ.

We give the proof for H = _2k first, since it is simpler but identical in spirit to the proof for _k.

§.§ The symplectic group

Let x_ij be the coordinate functions on W = V^n described in Section <ref>. Let I be a tableau column of length m ≤ k, with entries in [n]. We define _I[x_ij] ∈ [W]^N to be the determinant of the m × m minor of X whose column indices are the entries in I, and whose row indices are k+1, k+2, …, k+m. Therefore, as a weight vector for both H and 𝔤 (see Section <ref>), we have

wt_H(_I[x_ij]) = ω_m, wt_(_I[x_ij]) = -(k^n) - ∑_i ∈ I ϵ_i.

Given a tableau T ∈ (τ,n), let _T[x_ij] denote the product of the functions _I[x_ij], taken over all columns I in T. It follows from the weights above that

wt_H(_T[x_ij]) = τ, wt_(_T[x_ij]) = -(k^n) - cont(T),

where cont(T) denotes the content of T as defined above (<ref>).

Let ϕ_T be the map defined in (<ref>). The image of this map under the isomorphism Ψ in (<ref>) is given by

Ψ: ϕ_T ⟼ _T[x_ij].

Since the irreducible Sp_2k-representation U_τ is a quotient of the irreducible GL_2k-representation with the same highest weight τ, the proof follows from that of Lemma <ref> below.

In <cit.>*Def.
3.3.5, the functions which we call _I[x_ij] are denoted by β_I. Likewise, in the proof for H = _k we will introduce functions _J[y_ij] which in <cit.>*Def. 3.3.5 are denoted by α_J. Our products _T[x_ij] and _T[y_ij] are denoted there by m_β(T) and m_α(T), respectively.

[see <cit.>*Def. 3.6.16] An _2k-split is an arrangement of positive integers from the set [n], taking the form

a_1 ∨ ⋯ ∨ a_s ∨ b_r > ⋯ > b_1 > c_1 > ⋯ > c_r > d_1 > ⋯ > d_t

(with the chain a_1 ∨ ⋯ ∨ a_s written as a vertical column above b_r), such that r ≥ s, and s+t ≤ 2k, and r+t = k+1. (The parameters r,s,t are allowed to be zero, meaning that the corresponding sequences are empty.) The monomial of a split is

∏_i=1^r f_c_i, b_i · _{d_t, …, d_1, a_s, …, a_1}[x_ij].

We have

[V^n]^N = [{f_ij} ∪ {_I[x_ij]}] / ℐ,

where ℐ is the ideal generated by

*all monomials of _2k-splits, and

*all products _I[x_ij] _I'[x_ij] where I and I' are incomparable in the tableau order.

Moreover, the images of the monomials lying outside ℐ (i.e., the standard monomials) furnish a linear basis for [V^n]^N.

It follows from (<ref>) and Lemma <ref> that every standard monomial lying in [W]^N_τ^* must be a product of f_ij's with _T[x_ij], for some tableau T ∈ (τ, n); the semistandard condition is forced because a standard monomial cannot be divisible by a product of incomparable functions (otherwise it would lie inside ℐ).

We will say that the support of a monomial 𝐦 is the set of points (i,j) ∈ such that f_ij divides 𝐦. Moreover, to clarify the proof below, we make the ambient diagram a poset equipped with the product order, whereby (i,j) ≤ (i',j') if and only if both i ≤ i' and j ≤ j'. Note that each lattice path in a family is a chain. Recall that an antichain in the diagram is a subset in which each pair of elements is incomparable, meaning that one lies strictly northeast of the other. Hence, given an antichain contained in , each element in the antichain lies in a different lattice path of . Note that the support of the monomial (<ref>) of a split is an antichain of size r.

Let T ∈ (τ,n), and let 𝐦 ∈ [f_ij] _T[x_ij] be a monomial. Then 𝐦 is a standard monomial if and only if its support is contained in some family ⇉ T.

First we assume that 𝐦 is not a standard monomial, and we will show that there is no family ⇉ T containing the support of 𝐦. Since 𝐦 is nonstandard, it must be the case that 𝐦 ∈ ℐ is divisible by the monomial (<ref>) of some split, in particular a split in which {d_t, …, d_1, a_s, …, a_1} is (say) the first column of T. Let E be the unique endpoint set such that E ⇉ T; then by Definition <ref>, the column E is less than or equal to the first column of T in the tableau order. In particular,

E contains at least t endpoints which are ≤ d_1.

Now suppose that there is some family ⇉ T containing the support of 𝐦. Then by definition, we have ⇉ E. Since 𝐦 is divisible by the monomial of some split, the support of 𝐦 contains an antichain of size r, consisting of the elements (c_i,b_i); therefore these elements must lie on r distinct paths in , and moreover the row index of the endpoint in E of each of these paths is at least as great as the corresponding row index c_i (simply because lattice paths travel toward the southeast). In particular, since c_r > d_1 in the split,

E contains at least r endpoints which are > d_1.

But since r + t = k+1 in any split, (<ref>) and (<ref>) together imply that E contains at least k+1 endpoints, which is a contradiction since any endpoint set E contains exactly k elements. Therefore there is no ⇉ T containing the support of 𝐦.

Conversely, assume that 𝐦 is a standard monomial.
Then for each 0 ≤ t ≤ ℓ(τ), the support of 𝐦 contains no antichain of size r = k-t+1 lying strictly below the row given by the tth entry in the first column of T; otherwise, 𝐦 would be divisible by the monomial (<ref>) of the split given by this antichain {(c_1, b_1), …, (c_r, b_r)} where {d_t, …, d_1, a_s, …, a_1} is the first column of T, which contradicts the standardness of 𝐦. Hence for each t, any antichain in the support of 𝐦 lying strictly below the tth tentacle contains at most k-t elements, which can therefore be covered by the k-t remaining lattice paths ending at the bottommost k-t endpoints in E. It follows that the support of 𝐦 is contained in some family ⇉ T.

In the proof below, we appeal to the theory of edge labelings and induced shellings described in <cit.>*Thm. 2.3 and <cit.>*Def. 2.1. See also <cit.>*Ch. 3 for a general reference on simplicial complexes and Stanley–Reisner rings.

Let T ∈ (τ,n). By Lemma <ref>, we have

[V^n]^N_τ = ⊕_T ∈ (τ,n) span{standard monomials in [f_ij] _T[x_ij]} = ⊕_T ∈ (τ, n) span ⋃_⇉ T {monomials in [f_ij : (i,j) ∈] _T[x_ij] }.

Applying the inverse Ψ^-1 of the isomorphism in (<ref>) via Lemma <ref>, we have

([V^n] ⊗ U_τ)^_2k = ⊕_T ∈ (τ,n) span ⋃_⇉ T {monomials in [f_ij : (i,j) ∈] } · ϕ_T.

Let E be the unique endpoint set such that E ⇉ T. We consider the abstract simplicial complex Δ_E in which each face is a subset of some family ⇉ E. The monomials appearing in each component of (<ref>) are precisely those monomials whose support is a face of Δ_E. Therefore, letting [Δ_E] denote the Stanley–Reisner ring of Δ_E, we can rewrite (<ref>) as

([V^n] ⊗ U_τ)^_2k = ⊕_E ∈ [Δ_E] ⊕_T: E ⇉ T ϕ_T.

The families ⇉ E are the facets of Δ_E, which is pure since each such family has the same size d_E. To assign a shelling order to the facets, we define the following binary edge labeling on a lattice path. We label a step with “0” if we step toward the east, or if an eastern step would have been impossible (because of another lattice path or the eastern edge of ); otherwise we label a step with “1.” In other words, the “1” labels are precisely the southern steps where an eastern step would have been available. In this way, each lattice path has a label sequence of 0's and 1's. Each facet, in turn, also has a label sequence, obtained by concatenating the label sequences of its k lattice paths, from southwest to northeast. A shelling order on the facets is obtained by imposing the lexicographical order on their label sequences. In this shelling order, the restriction of each facet is given by the set of descents in its label sequence (where a descent is an occurrence of a 1 followed by a 0), which are the elements of (). Note that we defined the shadows in (<ref>) precisely so that the first facet _0 in the shelling order would have the all-zero label sequence, which has no descents, and thus the restriction of _0 is ∅, as must be the case in any shelling.

As a concrete example before completing the proof, take k=2 and n=7, and let E = {2,4}. Then Δ_E has 17 facets. Below, we display the first few facets in the aforementioned shelling order, writing the label sequence beneath each facet and underlining each descent therein.
In the label sequence for each facet , for the sake of clarity, we insert a space between the label sequence of the first and second lattice path in : corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 5pt, inner sep=2pt,text=white, font=] shadow=[rounded corners,line width=.5em,red!50!black,opacity=.2,cap=round] dot=[circle,fill=lightgray, minimum size = 3.5pt, inner sep=0pt] [scale=.3, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1); [shadow] (7,6) – ++(-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(3,0) – ++(0,-1) – ++(1,0) – ++(0,-1); [ultra thick] (4,7) – ++(3,0) – ++(0,-1);[below=0pt of current bounding box.south,anchor=north ]00000000 0000;<[scale=.3, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(3,0) – ++(0,-2) node[corner] – ++(1,0);[ultra thick] (4,7) – ++(3,0) – ++(0,-1);[below=0pt of current bounding box.south,anchor=north ]00000010 0000;<[scale=.3, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(2,0) – ++(0,-1) node[corner] – ++(2,0) – ++(0,-1);[ultra thick] (4,7) – ++(3,0) – ++(0,-1);[below=0pt of current bounding box.south,anchor=north ]00001000 0000;<[scale=.3, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(2,0) – ++(0,-1) node[corner] – ++(2,0) – ++(0,-1);[ultra thick] (4,7) – ++(2,0) – ++(0,-1) node[corner] – ++(1,0);[below=0pt of current bounding box.south,anchor=north ]000010000010;<[scale=.3, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(2,0) – ++(0,-1) node[corner] – ++(1,0) – ++(0,-1) node[corner] – ++(1,0);[ultra thick] (4,7) – ++(3,0) – ++(0,-1);[below=0pt of current bounding box.south,anchor=north ]00001010 0000;< ⋯The rest of the shelling order is shown below (where we suppress the label sequences to save space):corner=[rectangle,draw=black,thin,fill=red!70!gray, minimum size = 3.5pt, inner sep=2pt,text=white, font=] shadow=[rounded corners,line width=.3em,red!50!black,opacity=.2,cap=round] dot=[circle,fill=lightgray, minimum size = 3pt, inner sep=0pt] [scale=.2, baseline=(current bounding box.center)][lightgray, thick] (2,7) – (7,7) – (7,2) (3,7) – (3,6) – (7,6) (4,7) – (4,5) – (7,5) (5,7) – (5,4) – (7,4) (6,7) – (6,3) – (7,3); in 2,...,7 in ,...,7 [dot] at (9-,) ;[shadow] (4,7) – ++ (-1,-1);[ultra thick] (2,7) – ++(1,0) –++(0,-1) – ++(2,0) – ++(0,-1) node[corner] – ++(1,0) – ++(0,-1) node[corner] – ++(1,0);[ultra thick] (4,7) – ++(2,0) – ++(0,-1) node[corner] – ++(1,0);[scale=.2, baseline=(current bounding box.center)][lightgray, thick] (2,7) – 
[Diagrams omitted: a sequence of small lattice-path pictures on the triangular grid of dots, illustrating the families of lattice paths that index the shelling order.]

By <cit.>*Lemma 1, each shelling of an abstract simplicial complex induces a Stanley decomposition of its Stanley–Reisner ring. In the present context, the shelling described above induces the following Stanley decomposition of ℂ[Δ_E]: ℂ[Δ_E] = ⊕_⇉ E ℂ[f_ij : (i,j) ∈ ] f_(). Substituting this in (<ref>), and then combining the indices ⇉ E and E ⇉ T to obtain a single direct sum over all jellyfish ⇉ T, we obtain the Stanley decomposition (<ref>) in Theorem <ref>. It remains to prove the formula (<ref>) for the Hilbert–Poincaré series. This is done by re-expanding (<ref>) as the triple sum (ℂ[V^n] ⊗ U_τ)^Sp(V) = ⊕_E ∈ ⊕_T: E ⇉ T ϕ_T ⊕_⇉ E ℂ[f_ij : (i,j) ∈ ] f_(). It is clear that P_E(t), as defined in (<ref>), is the Hilbert–Poincaré series of the innermost direct sum ⊕_⇉ E ℂ[f_ij : (i,j) ∈ ] f_(). (Recall that each f_ij has degree 2 as an element of ℂ[V^n], which explains the t^2.) There are as many copies of this component as there are summands in the second direct sum, namely the number of tableaux T ∈ (τ,n) such that E ⇉ T, which by Definition <ref> equals #τ_E. As we observed after (<ref>), the map ϕ_T has degree |τ|, so we multiply through by t^|τ|, and the result (<ref>) follows.

§.§ The general linear group

All arguments below are identical to those for the symplectic group above, but several details must be adapted. Let x_ij and y_ij be the coordinate functions on W = V^*p ⊕ V^q described in Section <ref>. Let I be a tableau column of length m ≤ k, with entries in [q]. We define det_I[x_ij] to be the determinant of the m × m minor of X whose column indices are the entries in I, and whose rows are the last m rows of X. Similarly, let J be a tableau column of length m ≤ k, with entries in [p]. We define det_J[y_ij] to be the determinant of the m × m minor of Y whose row indices are the entries in J, and whose columns are the first m columns of Y. It is straightforward to check that det_I[x_ij] and det_J[y_ij] are elements of ℂ[W]^N. Moreover, as weight vectors (see Section <ref>), one has

wt_H(det_I[x_ij]) = -ω_m,   wt_H(det_J[y_ij]) = ω_m,
wt_𝔤(det_I[x_ij]) = (-(k^p) | ∑_i ∈ I ϵ_i),   wt_𝔤(det_J[y_ij]) = (-(k^p) - ∑_j ∈ J ϵ_j | 0).

Given (T^+, T^-) ∈ (τ^+, q) × (τ^-, p), let det_T^+[x_ij] denote the product of the functions det_I[x_ij], taken over all columns I in T^+; likewise, let det_T^-[y_ij] denote the product of the functions det_J[y_ij], taken over all columns J in T^-. It follows from the weights above that

wt_H(det_T^+[x_ij]) = -τ^+,   wt_H(det_T^-[y_ij]) = τ^-,
wt_𝔤(det_T^+[x_ij]) = (-(k^p) | cont(T^+)),   wt_𝔤(det_T^-[y_ij]) = (-(k^p) - cont(T^-) | 0^q).

Observe from (<ref>) that wt_H(det_T^+[x_ij] det_T^-[y_ij]) = τ^- - τ^+ = -τ is indeed the highest weight of U_τ^*. Let ϕ_(T^+, T^-) be the map defined in (<ref>).
The image of this map under the isomorphism Ψ in (<ref>) is given by Ψ: ϕ_(T^+, T^-) ⟼ det_T^+[x_ij] det_T^-[y_ij]. Let τ be the shape of (T^+, T^-). Let A^+ ∈ (τ^+, k) be the tableau obtained by filling each column from bottom to top with the entries k, k-1, k-2, …, and let A^- ∈ (τ^-, k) be the tableau obtained by filling each column from top to bottom with the entries 1, 2, 3, …. Then (A^+, A^-) is a rational tableau of shape τ, and thus is a basis element for U_τ via the identification (<ref>). The linear functional sending an element of U_τ to the coefficient of (A^+, A^-) has weight -τ, which is the highest weight of U_τ^*; therefore we can take u^*_τ to be this functional. A straightforward calculation verifies that det_T^+[x_ij] det_T^-[y_ij] is indeed the coefficient of (A^+, A^-) in ϕ_(T^+, T^-)(v^*_1, …, v^*_p, v_1, …, v_q).

[see <cit.>*Def. 3.6.15] A GL_k-split is an arrangement of positive integers taking the form

[ a^*_1; ∨; ⋮; ∨; a^*_s; ∨; b_r > ⋯ > b_1 ≥ c^*_1 > ⋯ > c^*_r > d^*_1 > ⋯ > d^*_t ]   [ a_1; ∨; ⋮; ∨; a_v; ∨; b^*_u > ⋯ > b^*_1 > c_1 > ⋯ > c_u > d_1 > ⋯ > d_w, ]

subject to the following conditions:
* the starred entries are elements of [p], and the unstarred entries are elements of [q];
* r ≥ s and u ≥ v;
* s + t ≤ k and v + w ≤ k;
* r + t + u + w = k+1;
* r + u > 0;
* b_1 > c_1 and b^*_1 > c^*_1.
(The parameters r, s, t, u, v, w are allowed to be zero, meaning that the corresponding sequences are empty.) The monomial of a split is ∏_i=1^r f_c^*_i, b_i ∏_i=1^u f_b^*_i, c_i · det_{d_w, …, d_1, a_v, …, a_1}[x_ij] det_{d^*_t, …, d^*_1, a^*_s, …, a^*_1}[y_ij].

We have ℂ[V^*p ⊕ V^q]^N = ℂ[{f_ij} ∪ {det_I[x_ij]} ∪ {det_J[y_ij]}] / ℐ, where ℐ is the ideal generated by
* all monomials of GL_k-splits;
* all products det_I[x_ij] det_I'[x_ij] where I and I' are incomparable in the tableau order;
* all products det_J[y_ij] det_J'[y_ij] where J and J' are incomparable in the tableau order;
* all products det_I[x_ij] det_J[y_ij] where #I + #J > k.
Moreover, the images of the monomials lying outside ℐ (i.e., the standard monomials) furnish a linear basis for ℂ[V^*p ⊕ V^q]^N.

It follows from Lemma <ref> that every standard monomial lying in ℂ[W]^N_τ must be a product of f_ij's with some det_T^+[x_ij] det_T^-[y_ij], for some pair (T^+, T^-) ∈ (τ^+, q) × (τ^-, p). As for Sp_2k in the previous section, we again regard the set of indices (i,j) as a poset, where (i,j) ≤ (i',j') if and only if both i ≤ i' and j ≤ j'; in this way, each lattice path in a family forms a chain. Note that the support of the monomial (<ref>) of a split is an antichain of size r+u.

Let (T^+, T^-) ∈ (τ^+, q) × (τ^-, p), and let 𝐦 ∈ ℂ[f_ij] det_T^+[x_ij] det_T^-[y_ij] be a monomial. Then 𝐦 is a standard monomial if and only if its support is contained in some family ⇉ (T^+, T^-).

First we assume that 𝐦 is not a standard monomial, and we will show that there is no family ⇉ (T^+, T^-) containing the support of 𝐦. Since 𝐦 is nonstandard, it must be the case that 𝐦 ∈ ℐ is divisible by the monomial (<ref>) of some split, in particular a split in which {d_w, …, d_1, a_v, …, a_1} is the first column of T^+, and {d_t^*, …, d_1^*, a_s^*, …, a_1^*} is the first column of T^-. Let E be the unique endpoint set such that E ⇉ (T^+, T^-); then by Definition <ref>, E ∩ [p]^* equals the first column of T^- and E ∩ [q] is less than or equal to the first column of T^+ in the tableau order. In particular, E ∩ [p]^* contains t starred endpoints which are ≤ d_1^*, and E ∩ [q] contains at least w unstarred endpoints which are ≤ d_1. Now suppose that there is some family ⇉ (T^+, T^-) containing the support of 𝐦. Then this family also satisfies ⇉ E.
Since 𝐦 is divisible by the monomial of some split, the support of 𝐦 contains an antichain of size r+u > 0, consisting of the elements (c_i^*, b_i) and (b_i^*, c_i); therefore these elements must lie on r+u many distinct paths in the family. But the southwesternmost element (b_u^*, c_u) lies strictly east of the endpoint d_1, while the northeasternmost element (c_r^*, b_r) lies strictly south of the endpoint d_1^*. Therefore, E contains at least r+u endpoints which are either starred and > d_1^*, or unstarred and > d_1. But by condition (4) in Definition <ref>, the facts (<ref>) and (<ref>) together imply that E contains at least k+1 endpoints, which is a contradiction since any endpoint set E contains exactly k elements. Therefore there is no family ⇉ (T^+, T^-) containing the support of 𝐦. The converse is proved in the same way as in the proof of Lemma <ref>.

In the same way as we obtained (<ref>), we can use Lemma <ref> to obtain (ℂ[V^*p ⊕ V^q] ⊗ U_τ)^GL(V) = ⊕_E ∈ ℂ[Δ_E] ⊕_(T^+, T^-): E ⇉ (T^+, T^-) ϕ_(T^+, T^-). The shelling of Δ_E is defined identically as in the proof of Theorem <ref>, and the rest of the proof proceeds analogously.

§.§ The orthogonal group

Ultimately, the proof for the orthogonal group is similar to those for the other two groups above, but due to technicalities involving highest weight vectors, we will not begin by working inside the ring ℂ[W]^N. For the same reason, the standard monomial theory developed for Ø_k in <cit.> is restricted to the case k > 2n (see pp. 64 and 75), and so the analogue of the splits above was not defined there. See also the discussion in our introduction (Section <ref>) regarding several complications with the representations of Ø(V). We begin by introducing an analogue of the splits for the orthogonal group:

An Ø_k-split is an arrangement of positive integers from the set [n], taking the form

[ a_1; ∧; ⋮; ∧; a_s; ∧; b_1 < ⋯ < b_r < d_1 < ⋯ < d_t; c_1 < ⋯ < c_r ]

(with each c_i written directly below b_i), such that r ≥ s, s + t = m, and r + t = k+1. Note that each b_i ≤ c_i. (The parameters r, s, t are allowed to be zero, meaning that the corresponding sequences are empty.) The monomial of a split is ∏_i=1^r f_b_i, c_i · ϕ_{a_1, …, a_s, d_1, …, d_t}.

Let dim V = k. Let A = {a_1, …, a_s}, B = {b_1, …, b_r}, C = {c_1, …, c_r}, and D = {d_1, …, d_t} define an Ø_k-split as in Definition <ref>. Let f_ij and ϕ_T be as defined in (<ref>) and (<ref>), respectively. Let det_B,C[f_ij] denote the determinant of the r × r minor whose rows and columns are given by B and C, respectively. Then

∑_σ (-1)^|σ| σ(det_B,C[f_ij] ϕ_A ∪ D) = 0,

where the sum ranges over all σ which act by exchanging a (possibly empty) subset of D with a subset of B of equal size. The symbol |σ| denotes the common size of the exchanged subsets.

We first claim that an analogous relation holds in the context of multilinear algebra, inside the space ⋀^m V ⊗ ⋀^r V:

∑_σ (-1)^|σ| σ · (u_1 ∧ ⋯ ∧ u_s ∧ v_1 ∧ ⋯ ∧ v_t) ⊗ (w_1 ∧ ⋯ ∧ w_r) = 0,

where σ exchanges some subset of the vectors v_1, …, v_t with an equal number of the vectors w_1, …, w_r, and |σ| denotes the number of v_i's exchanged by σ. To verify this, observe that the left-hand side of (<ref>) is alternating in the vectors v_1, …, v_t, and also (independently) alternating in the vectors w_1, …, w_r. The claim will follow if we show that the left-hand side of (<ref>) is, in fact, alternating in all r+t of these vectors, since r + t = k+1 > k = dim V. For this, it suffices to show that the left-hand side of (<ref>) is alternating in the two vectors v_1 and w_1. To this end, consider the effect of exchanging v_1 and w_1.
This changes the sign of all terms in which v_1 and w_1 appear in the same tensor factor. Now consider a term in which v_1 and w_1 appear in opposite tensor factors. This term is the result of the action of a certain σ; exchanging v_1 and w_1 increments by ±1 the number of v_i's exchanged by σ, and therefore the sign of this term changes as well. Thus, exchanging v_1 and w_1 changes the sign of every term on the left-hand side of (<ref>), and so this expression is alternating in v_1 and w_1, as desired. This proves the claim (<ref>).

Now we will translate (<ref>) into a relation involving polynomial functions V^n ⟶ ⋀^m V. We will abbreviate w = (v_1, …, v_n) ∈ V^n. The standard dot product on V = ℂ^k extends canonically to a symmetric bilinear form ⟨ , ⟩ on ⋀^r V, via

⟨ v_b_1 ∧ ⋯ ∧ v_b_r, w_c_1 ∧ ⋯ ∧ w_c_r ⟩ := det[v_b_i · w_c_j].

Hence we can rewrite the function det_B,C[f_ij] : V^n ⟶ ℂ defined in the statement of this lemma, in terms of this form: det_B,C[f_ij] : w ⟼ ⟨ ϕ_B(w), ϕ_C(w) ⟩. Therefore, we can view the function det_B,C[f_ij] ϕ_A ∪ D : V^n ⟶ ⋀^m V as the result of factoring through the following contraction:

det_B,C[f_ij] ϕ_A ∪ D : V^n ⟶ ⋀^m V ⊗ ⋀^r V ⊗ ⋀^r V ⟶ ⋀^m V,
w ⟼ ϕ_A ∪ D(w) ⊗ ϕ_B(w) ⊗ ϕ_C(w) ⟼ ⟨ ϕ_B(w), ϕ_C(w) ⟩ ϕ_A ∪ D(w).

From this, we see that the relation (<ref>) inside ⋀^m V ⊗ ⋀^r V yields the relation (<ref>).

The relation (<ref>) is known as a Garnir relation, which appears throughout the literature in various guises. For H = Sp_2k and GL_k, the standard monomial results we used from <cit.> similarly depended on relations of this Garnir type; see equations (3.8.8) and (3.8.21) therein.

Although we are not working inside a ring, we can nonetheless impose a monomial ordering on the set of all monomials 𝐦 ∈ ⊕_T ∈ ((1^m), n) ℂ[f_ij] ϕ_T. Given two such monomials 𝐦 and 𝐦', we first compare their factors ϕ_T and ϕ_T', so that 𝐦 < 𝐦' if T < T' with respect to lexicographical order. If T = T', then we compare the f_ij's in the lexicographical order, with respect to the ordering f_11 > f_12 > ⋯ > f_1n > f_22 > f_23 > ⋯ > f_nn. The key fact is that the leading monomial on the left-hand side of (<ref>) is the monomial of the split. Hence the monomial of any split can be written as a linear combination of lesser monomials. Adapting the language from the previous two subsections, we say that a monomial in ⊕_T ∈ ((1^m), n) ℂ[f_ij] ϕ_T is a standard monomial if it is not divisible by the monomial of a split. It follows that the standard monomials furnish a linear basis for (ℂ[V^n] ⊗ ⋀^m V)^Ø_k.

In the proof below, we now equip the indices (i,j) with a different partial order than we used for the other two groups: in particular, we declare (i,j) ≤ (i',j') if and only if i ≤ i' and j ≥ j'. With respect to this partial order, each lattice path in a family (as defined in Definition <ref>) forms a chain. Two elements are incomparable if one lies strictly southeast of the other. In particular, the support of a monomial (<ref>) of a split is an antichain of size r.

Let T ∈ ((1^m), n), and let 𝐦 ∈ ℂ[f_ij] ϕ_T be a monomial. Then 𝐦 is a standard monomial if and only if its support is contained in some family ⇉ T.

First we assume that 𝐦 is not a standard monomial. Then 𝐦 is divisible by the monomial (<ref>) of some split in which {a_1, …, a_s, d_1, …, d_t} = T. Let E be the unique endpoint set such that E ⇉ T; then by Definitions <ref> and <ref>, it must be the case that E contains t endpoints which are ≥ d_1. Now suppose that there is some family ⇉ T containing the support of 𝐦. Then by definition, this family satisfies ⇉ E.
Since 𝐦 is divisible by the monomial of some split, the support of 𝐦 contains an antichain of size r, consisting of the elements (b_i, c_i); therefore these elements must lie on r distinct paths in the family, and moreover the row index of the endpoint of each of these paths in E is at most the corresponding row index c_i (because lattice paths travel toward the northeast). In particular, since b_r < d_1 in the split, E contains r endpoints which are < d_1. But since r + t = k+1 in any split, (<ref>) and (<ref>) together imply that E contains k+1 endpoints, which is a contradiction since any endpoint set E contains exactly k elements. Therefore there is no family ⇉ T containing the support of 𝐦. The converse is proved just as in Lemma <ref>.

Starting from Lemma <ref>, the proof is identical to those of Theorems <ref> and <ref> above, except for the shelling of Δ_E. In order to obtain a lexicographical shelling, we now view a lattice path as starting on the eastern edge of the grid and ending on the diagonal. Each step west is labeled “0,” and each step south is labeled “1.” Since the endpoints are allowed to vary along the diagonal, the restriction of each facet is given not only by the set of descents in the label sequences, but also by any terminal 1's in the label sequences of the paths in the family. (This is because a terminal 1 is a descent in the symmetric lattice path obtained by reflecting about the diagonal.) Hence the restriction is precisely the set in (<ref>). The rest of the proof is identical to that of Theorem <ref>.

§.§ Proof of Proposition <ref>

Recall that Proposition <ref> interprets the weight of a monomial ψ (under the action of 𝔤) in terms of the degree sequence of the arc diagram of ψ.

Let H = Sp_2k. Let ψ = 𝐦 · ϕ_T, where 𝐦 ∈ ℂ[f_ij] is a monomial. By the formula for wt_𝔤(𝐦) given in Section <ref>, each occurrence of f_ij in 𝐦 contributes -ϵ_i - ϵ_j to wt_𝔤(ψ). On one hand, then, each arc (i,j) in the arc diagram of ψ contributes 1 to the degrees of the vertices i and j; on the other hand, since each arc (i,j) represents one occurrence of f_ij in 𝐦, each contribution to the degree of i corresponds to a contribution of -ϵ_i to wt_𝔤(ψ). Therefore the arcs contribute wt_𝔤(𝐦) to the degree sequence of the arc diagram. It remains to show that the contribution of the hyperedges to the degree sequence matches the weight of ϕ_T. On one hand, by definition, each dot within a hyperedge contributes 1 to the degree of the vertex directly above it. On the other hand, recall from (<ref>) that each entry i in T contributes -ϵ_i to wt_𝔤(det_T[x_ij]). Since the isomorphism Ψ in Lemma <ref> intertwines the 𝔤-action, it follows that each entry i in T contributes -ϵ_i to wt_𝔤(ϕ_T). But each entry i in T corresponds to one dot beneath vertex i in the arc diagram, and so the hyperedges contribute precisely wt_𝔤(ϕ_T) to the degree sequence of the arc diagram. The proofs for GL_k and Ø_k are identical.

§ EXPLICIT MAPS IN HOWE DUALITY SETTINGS

In the synopses below, we fix coordinate functions x_ij and y_ij on W. We write wt_H to denote the weight under the action of the maximal torus of the group H. Likewise, we write wt_𝔤 to denote the weight under the action of the Cartan subalgebra (consisting of diagonal matrices) of the Lie algebra 𝔤. In order to express the weight of a monomial 𝐦 ∈ ℂ[W], we write deg_x_i,∙(𝐦) to denote the degree of 𝐦 in the variables x_i,1, x_i,2, …, and we write deg_x_∙,j(𝐦) to denote the degree of 𝐦 in the variables x_1,j, x_2,j, ….
§.§ Howe duality for (H, 𝔤) = (GL_k, 𝔤𝔩_p+q)

Upon fixing a basis {e_1, …, e_k} for V and its dual basis {e^*_1, …, e^*_k} for V^*, we can view elements of W = V^*p ⊕ V^q as ordered pairs (Y, X) ∈ M_p,k ⊕ M_k,q. Explicitly, the coordinate functions on W are given by

y_ij : (v^*_1, …, v^*_p, v_1, …, v_q) ⟼ v^*_i(e_j),   (i,j) ∈ [p] × [k],
x_ij : (v^*_1, …, v^*_p, v_1, …, v_q) ⟼ e^*_i(v_j),   (i,j) ∈ [k] × [q].

The action of K × H = GL_p × GL_q × GL_k on ℂ[M_p,k ⊕ M_k,q] is given by (g_1, g_2, h).f(Y, X) = det(g_1)^-k f(g_1^-1 Y h, h^-1 X g_2). In particular, the action of an element t = diag(t_1, …, t_k) in the maximal torus of H = GL_k is given by t.y_ij = t_j y_ij and t.x_ij = t_i^-1 x_ij. With respect to this action, the weight of a monomial 𝐦 ∈ ℂ[M_p,k ⊕ M_k,q] is given by the k-tuple

wt_H(𝐦) = (deg_y_∙,1(𝐦) - deg_x_1,∙(𝐦), …, deg_y_∙,k(𝐦) - deg_x_k,∙(𝐦)).

The Lie algebra homomorphism ω: 𝔤𝔩_p+q ⟶ 𝒟(M_p,k ⊕ M_k,q)^GL_k is given by

[ A B; C D ] ⟼ ∑_i=1^p ∑_j=1^p a_ij (-∑_ℓ=1^k y_jℓ ∂/∂ y_iℓ - k δ_ij) + ∑_i=1^q ∑_j=1^q d_ij (∑_ℓ=1^k x_ℓi ∂/∂ x_ℓj) + ∑_i=1^p ∑_j=1^q b_ij (-∑_ℓ=1^k ∂^2/∂ y_iℓ ∂ x_ℓj) + ∑_i=1^q ∑_j=1^p c_ij (∑_ℓ=1^k y_jℓ x_ℓi),

where the operator multiplying b_ij is the contraction Δ_ij and the function multiplying c_ij is f_ji. With respect to this action, the weight of a monomial 𝐦 ∈ ℂ[M_p,k ⊕ M_k,q] is given by the (p+q)-tuple

wt_𝔤(𝐦) = (-deg_y_1,∙(𝐦) - k, …, -deg_y_p,∙(𝐦) - k | deg_x_∙,1(𝐦), …, deg_x_∙,q(𝐦)).

§.§ Howe duality for (H, 𝔤) = (Sp_2k, 𝔰𝔬_2n)

As above, one can view elements of W = V^n as matrices X ∈ M_2k,n, with coordinate functions given by x_ij : (v_1, …, v_n) ⟼ e^*_i(v_j), (i,j) ∈ [2k] × [n]. The action of K × H = GL_n × Sp_2k on ℂ[M_2k,n] is given by (g,h).f(X) = det(g)^-k f(h^-1 X (g^-1)^t). In particular, the action of an element t = diag(t_1, …, t_k, t_1^-1, …, t_k^-1) of the maximal torus of H = Sp_2k is given by t.x_ij = t_i^-1 x_ij for 1 ≤ i ≤ k, and t.x_ij = t_i-k x_ij for k < i ≤ 2k. With respect to this action, the weight of a monomial 𝐦 ∈ ℂ[M_2k,n] is given by the k-tuple

wt_H(𝐦) = (deg_x_k+1,∙(𝐦) - deg_x_1,∙(𝐦), deg_x_k+2,∙(𝐦) - deg_x_2,∙(𝐦), …, deg_x_2k,∙(𝐦) - deg_x_k,∙(𝐦)).

The Lie algebra homomorphism ω: 𝔰𝔬_2n ⟶ 𝒟(M_2k,n)^Sp_2k is given by

[ A B; C -A^t ] ⟼ ∑_i=1^n ∑_j=1^n a_ij (-∑_ℓ=1^2k x_ℓj ∂/∂ x_ℓi - k δ_ij) + ∑_i=1^n ∑_j=1^n b_ij (-1/4 ∑_ℓ=1^k (∂^2/∂ x_ℓi ∂ x_ℓ+k,j - ∂^2/∂ x_ℓ+k,i ∂ x_ℓj)) + ∑_i=1^n ∑_j=1^n c_ij (∑_ℓ=1^k (x_ℓj x_ℓ+k,i - x_ℓ+k,j x_ℓi)),

where again the b_ij-operator is Δ_ij and the c_ij-function is f_ji. With respect to this action, the weight of a monomial 𝐦 ∈ ℂ[M_2k,n] is given by the n-tuple

wt_𝔤(𝐦) = (-deg_x_∙,1(𝐦) - k, …, -deg_x_∙,n(𝐦) - k).

§.§ Howe duality for (H, 𝔤) = (Ø_k, 𝔰𝔭_2n)

As above, one can view elements of W = V^n as matrices X ∈ M_k,n, with coordinate functions given by x_ij : (v_1, …, v_n) ⟼ e^*_i(v_j), (i,j) ∈ [k] × [n]. The action of K × H = G̃L_n × O_k on ℂ[M_k,n] is given by ((g,s), h).f(X) = det(g)^-k/2 f(h^-1 X (g^-1)^t), where G̃L_n := {(g,s) ∈ GL_n × ℂ^× : det(g) = s^2}. The Lie algebra homomorphism ω: 𝔰𝔭_2n ⟶ 𝒟(M_k,n)^O_k is given by

[ A B; C -A^t ] ⟼ ∑_i=1^n ∑_j=1^n a_ij (-∑_ℓ=1^k x_ℓj ∂/∂ x_ℓi - (k/2) δ_ij) + ∑_i=1^n ∑_j=1^n b_ij (-1/4 ∑_ℓ=1^k ∂^2/∂ x_ℓi ∂ x_ℓj) + ∑_i=1^n ∑_j=1^n c_ij (∑_ℓ=1^k x_ℓj x_ℓi),

where the b_ij-operator is Δ_ij and the c_ij-function is f_ji. With respect to this action, the weight of a monomial 𝐦 ∈ ℂ[M_k,n] is given by the n-tuple

wt_𝔤(𝐦) = (-deg_x_∙,1(𝐦) - k/2, …, -deg_x_∙,n(𝐦) - k/2).

§.§ Maps between coordinate functions

In each of the three cases above, identify 𝔭^- with (𝔭^+)^* via the K-equivariant linear isomorphism

[ 0 0; C 0 ] ↦ ( [ 0 B; 0 0 ] ↦ trace(CB) ).

Then define the matrix coordinate functions z_ij ∈ (𝔭^+)^* by z_ij : [ 0 B; 0 0 ] ↦ b_ij. Via the inverse isomorphism (𝔭^+)^* → 𝔭^-, we have

z_ij ↦ [ 0 0; E_ji 0 ],   [ 0 0; 1/2(E_ji - E_ij) 0 ],   [ 0 0; 1/2(E_ji + E_ij) 0 ]

in the three settings above, respectively. It follows that in all three Howe duality settings, we have z_ij ↦ f_ij via ℂ[z_ij] = ℂ[𝔭^+] ≅ 𝒰(𝔭^-) ⊂ 𝒰(𝔤) → 𝒟(W)^H.
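As a quick sanity check of the degree and weight conventions above, here is a small worked example added for illustration (it does not appear in the original text). Take the symplectic setting with k = 1 and n = 2, so that ℂ[M_2,2] has variables x_11, x_12, x_21, x_22, and consider the monomial 𝐦 = x_11 x_21^2. Then

wt_H(𝐦) = (deg_x_2,∙(𝐦) - deg_x_1,∙(𝐦)) = (2 - 1) = (1),
wt_𝔤(𝐦) = (-deg_x_∙,1(𝐦) - k, -deg_x_∙,2(𝐦) - k) = (-3 - 1, -0 - 1) = (-4, -1),

since x_11 and x_21^2 together contribute total degree 3 in the first column of variables and degree 0 in the second.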
http://arxiv.org/abs/2312.16749v2
{ "authors": [ "William Q. Erickson", "Markus Hunziker" ], "categories": [ "math.CO", "math.RT", "05E10 (Primary) 13A50, 22E47, 17B10 (Secondary)" ], "primary_category": "math.CO", "published": "20231227235508", "title": "Stanley decompositions of modules of covariants" }
Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Polyp Segmentation

Yunqi Gu, Tao Zhou, Senior Member, IEEE, Yizhe Zhang, Yi Zhou, Kelei He, Chen Gong, Senior Member, IEEE, Huazhu Fu, Senior Member, IEEE

Y. Gu, T. Zhou, Y. Zhang, and C. Gong are with PCA Laboratory and the School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Y. Zhou is with the School of Computer Science and Engineering, Southeast University, Nanjing 211189, China (e-mail: [email protected]). K. He is with the Medical School, Nanjing University, Nanjing 210023, China, and also with the National Institute of Healthcare Data Science at Nanjing University, Nanjing 210023, China (e-mail: [email protected]). H. Fu is with the Institute of High Performance Computing, A*STAR, Singapore (e-mail: [email protected]). Corresponding author: Tao Zhou.

January 14, 2024
========================================================================================================================

Automatic polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer (CRC). However, existing methods rely heavily on fully supervised training, which requires a large amount of labeled data with time-consuming pixel-wise annotations. Moreover, accurately segmenting polyps poses challenges due to variations in shape, size, and location. To address these issues, we propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised polyp Segmentation (DEC-Seg) from colonoscopy images. First, we propose a Cross-level Feature Aggregation (CFA) module that integrates cross-level adjacent layers to enhance the feature representation ability across different resolutions. To address scale variation, we present a scale-enhanced consistency constraint, which ensures consistency between the segmentation maps generated from the same input image at different scales. This constraint helps handle variations in polyp size and improves the robustness of the model. Additionally, we design a scale-aware perturbation consistency scheme to enhance the robustness of the mean teacher model. Furthermore, we propose a cross-generative consistency scheme, in which the original and perturbed images can be reconstructed using cross-segmentation maps. This consistency constraint allows us to mine effective feature representations and boost the segmentation performance.
To produce more accurate segmentation maps, we propose a Dual-scale Complementary Fusion (DCF) module that integrates features from two scale-specific decoders operating at different scales. Extensive experimental results on five benchmark datasets demonstrate the effectiveness of our DEC-Seg against other state-of-the-art semi-supervised segmentation approaches. The implementation code will be released at https://github.com/taozh2017/DECSeg.

Polyp segmentation, semi-supervised learning, scale-enhanced consistency, cross-generative consistency

§ INTRODUCTION

Colorectal cancer (CRC) is one of the leading causes of death worldwide, and it often arises from adenomatous polyps. Therefore, early detection and removal of polyps are crucial in reducing the incidence of CRC. Polyp segmentation plays a vital role in identifying and locating polyps at an early stage, enabling clinicians to diagnose and treat them effectively. Currently, various deep learning models <cit.> have been developed and have obtained promising performance in polyp segmentation. However, these methods typically rely on a fully supervised training strategy, which requires a large number of pixel-wise annotations. Unfortunately, annotating medical images often demands highly specialized expertise, making the process time-consuming and costly. As a solution, semi-supervised learning (SSL) provides an effective approach to leverage unlabeled data to enhance the model's performance.

Recently, various semi-supervised medical image segmentation methods <cit.> based on deep learning have been developed. Among these methods, the consistency constraint is a widely used strategy, which enforces that perturbations of the unlabeled data should not significantly change the model's outputs or predictions. One of the most representative frameworks is the mean teacher (MT) <cit.>, which designs a perturbation-based consistency loss between the teacher and student models on the unlabeled examples. Inspired by MT, several improved methods focus on designing different perturbation strategies to achieve SSL segmentation. For instance, an uncertainty-aware framework (UA-MT) <cit.> was proposed to make the student model gradually learn more reliable targets and eliminate unreliable predictions by exploiting uncertainty information. Chen et al. <cit.> developed a cross-pseudo supervision strategy, which initializes two identical networks with different weights and encourages their predictions for the same input to be highly consistent. Luo et al. <cit.> designed an uncertainty rectified pyramid consistency (URPC) scheme that enables the segmentation model to produce consistent predictions at different scales. Besides, CLCC <cit.> was proposed to fuse global and patch-wise information through contrastive learning and a consistency constraint, and it has been applied to polyp segmentation. SLC-Net <cit.> was proposed to utilize shape information during training by simultaneously feeding pseudo-labels into the network. Despite these achievements, semi-supervised polyp segmentation remains challenging due to variations in the shape, size, and location of polyps.
Therefore, it is crucial to develop effective and robust semi-supervised learning strategies that leverage a large amount of unlabeled data to boost polyp segmentation performance. To this end, we propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised polyp Segmentation (DEC-Seg), which fully exploits multi-scale information from labeled and unlabeled data to enhance segmentation performance. Specifically, we first propose a Cross-level Feature Aggregation (CFA) module to integrate adjacent-layer features in the encoder; the cross-level aggregated features are further incorporated into the scale-specific decoders. Then, we present a scale-enhanced consistency strategy that encourages consistency between the segmentation maps produced from the same input at different scales. This consistency guides our segmentation network to learn more powerful features for handling scale variations. Moreover, a Dual-scale Complementary Fusion (DCF) module is proposed to filter the effective information from different-scale features and fuse it to generate the final prediction map. Besides, we design a scale-aware perturbation consistency that is imposed on images at different scales. Meanwhile, a cross-generative consistency is presented, which constrains that the original and perturbed images can be reconstructed from each other's segmentation maps; this allows the knowledge in unlabeled data to be harnessed effectively. Finally, we conduct comparison experiments on five benchmark datasets, and the results demonstrate the superiority of the proposed model over other state-of-the-art semi-supervised segmentation methods. The main contributions of this paper are listed as follows:

* We propose a novel semi-supervised polyp segmentation framework, which leverages scale-enhanced consistency and cross-generative consistency to exploit the relationships between labeled and unlabeled data and thus boost the segmentation model.
* A cross-level feature aggregation module is proposed to fuse cross-level features, which enhances the representation ability of features across different resolutions.
* We present a scale-enhanced consistency constraint to reduce the discrepancy between the segmentation maps predicted from inputs at different scales. Besides, a cross-generative consistency is designed to constrain that the original and perturbed images can be reconstructed using cross-segmentation maps, which aims to mine effective feature representations and boost the segmentation performance.
* A dual-scale complementary fusion module is proposed to generate better predictions through the complementation and fusion of different-scale information from the two scale-specific decoders.

The remainder of this paper is structured as follows. In Section <ref>, we briefly review some related works. In Section <ref>, we give the overall framework and then detail the proposed semi-supervised segmentation model. Then, we present the experimental settings, comparison results, and ablation study in Section <ref>. Finally, we conclude this paper in Section <ref>.

§ RELATED WORKS

§.§ Polyp Segmentation

Recently, numerous studies have proposed different deep learning architectures and techniques to enhance the accuracy of polyp segmentation. The UNet framework, proposed by Ronneberger et al. <cit.>, has significantly advanced the field of medical image segmentation.
It incorporates skip connections between the contracting and expanding paths, enabling the network to retain fine-grained spatial information while also capturing high-level contextual features. UNet has demonstrated state-of-the-art performance across various segmentation tasks, including liver and tumor segmentation, cell segmentation, brain MRI segmentation, and polyp segmentation. To further improve the accuracy of UNet-based models, researchers have developed several variants, such as attention UNet <cit.>, UNet++ <cit.>, and ResUNet++ <cit.>. For example, Zhou et al. <cit.> presented UNet++ for medical image segmentation, which designs a series of nested, dense skip pathways to reduce the semantic gap between feature maps. Jha et al. <cit.> developed the ResUNet++ architecture, which takes advantage of residual blocks, squeeze-and-excitation blocks, atrous spatial pyramid pooling, and attention blocks. Besides, several polyp segmentation methods <cit.> have been developed and have achieved promising performance.

§.§ Semi-Supervised Learning

Existing SSL methods can be categorized into self-training, consistency learning, contrastive learning, and adversarial learning. Self-training methods <cit.> use the predictions of a fully supervised algorithm as pseudo-labels for the unlabeled data and retrain the model on these mixed with the labeled data. Consistency learning methods <cit.> enforce agreement between different views to improve the model's performance. By utilizing a consistency loss, these methods ensure that the model's predictions remain consistent across different representations or augmentations of the same input. Contrastive learning methods <cit.> obtain better representation learning by contrasting similar features against dissimilar features. Adversarial learning methods <cit.> employ a minimax game between a generator and a discriminator to train the model. By generating new and realistic samples, these methods enhance the model's ability to generalize and produce accurate segmentations.

Several SSL methods have been developed for the medical image segmentation task. For example, Li et al. <cit.> proposed a segmentation model with multiple auxiliary decoders and encouraged consistency between the predictions made by the main decoder and the auxiliary decoders. Besides, contrastive learning <cit.> has been widely used in SSL medical image segmentation. Moreover, adversarial learning <cit.> has also emerged to improve the robustness of the model. For example, Peiris et al. <cit.> equipped the network with a critic network to encourage the segmentation network to produce prediction maps that resemble the ground truth.
Lei et al. <cit.> proposed double discriminators, which are used to learn the prior relationship between labeled and unlabeled data. However, these semi-supervised methods do not deeply explore the usefulness and importance of scale information for semi-supervised learning.

§.§ Consistency Learning

Consistency learning, one of the most popular strategies in semi-supervised learning, aims to force the network to learn potential knowledge from different predictions by applying consistency constraints. These predictions may come from different views of the same input processed by the same network, or from the same input processed by different networks. Currently, several methods <cit.> based on the mean teacher framework focus on designing different consistency and perturbation strategies to achieve SSL segmentation. For instance, Verma et al. <cit.> regularized semi-supervised learning by encouraging consistent predictions at interpolations of unlabeled points u_1 and u_2. Wu et al. <cit.> proposed three decoders, with a different upsampling module in each decoder, to amplify the perturbation and impose consistency across the three prediction maps. Zhong et al. <cit.> proposed a multi-attention tri-branch network (MTNet) in which each branch uses a different attention mechanism. Among these methods, the consistency constraint is a widely used strategy that enforces that perturbations of the unlabeled data should not significantly change the model's outputs.

§ PROPOSED METHOD

§.§ Overview

fig:Framework shows the proposed mean teacher-based DEC-Seg framework, which fully exploits multi-scale information from labeled and unlabeled data for polyp segmentation. First, the original and downsampled images from the labeled data are fed into the student model (i.e., the encoder E_s and scale-specific decoders D_s1 and D_s2), and then the scale-enhanced consistency and supervised losses are calculated to constrain the student network. Then, the unlabeled data are input into both the student model and the teacher model (the encoder E_t and scale-specific decoders D_t1 and D_t2). Meanwhile, we impose scale-enhanced consistency, scale-aware perturbation consistency, and cross-generative consistency on the unlabeled data, to fully leverage unlabeled samples to improve the segmentation performance. Moreover, to make use of multi-scale information, we design the fused decoders (i.e., D_s^f and D_t^f) to generate the final segmentation maps. For convenience, we denote X_l={x_1^l, x_2^l, ⋯, x_n_l^l} and Y_l={y_1^l, y_2^l, ⋯, y_n_l^l} as the labeled dataset and the corresponding label set, respectively, where n_l is the number of labeled images. We use X_u={x_1^u, x_2^u, ⋯, x_n_u^u} to denote the unlabeled dataset with n_u images, and we typically have n_l ≪ n_u. Given inputs X, they are fed into the encoder (with Res2Net-50 <cit.> as the backbone network) to learn five levels of features, namely {F_i}_i=1^5.

§.§ Cross-level Feature Aggregation Module

To fully exploit cross-level information and provide more effective complementary cues from different convolutional layers, we propose the CFA module to fuse the features from every two adjacent layers. The aggregated features can then be incorporated into our designed decoder; the feature flow is shown in fig:encoder. Specifically, as shown in fig:CFA, two adjacent features F_i and F_i+1 are first processed by a 1×1 convolution, and then the two features are concatenated (i.e., F_cat) to obtain F_cat^' = ℬ_conv3×3(F_cat), where ℬ_conv3×3(·) is a sequential operation that consists of a 3×3 convolution, batch normalization, and a ReLU activation.
To learn the cross-level attention-based enhanced feature, we conduct global average pooling (GAP) on the cascaded feature F_cat^' and utilize point-wise convolutions (PWC) <cit.> to capture channel interactions across different spatial positions. Therefore, we obtain the attention-based weights by W = σ(PWC_2(ζ(PWC_1(𝒢_ave(F_cat^'))))), where the kernel sizes of PWC_1 and PWC_2 are C/r × C × 1 × 1 and C × C/r × 1 × 1 (C is the channel size and r is a reduction ratio), respectively. Besides, ζ(·) and σ(·) indicate the ReLU and Sigmoid activation functions, respectively, and 𝒢_ave denotes the GAP operation. Next, element-wise multiplication is utilized to enhance F_cat^' with W, and a residual structure is adopted to fuse the enhanced feature with the original cascaded feature. Finally, we obtain the aggregated feature as ℬ_conv3×3(F_cat^' ⊗ W ⊕ F_cat^'), where ⊕ and ⊗ represent element-wise addition and multiplication, respectively. It is worth noting that our CFA module can explore contextual information across different resolutions to enhance the features' representation ability.

§.§ Scale-enhanced Consistency

Scale variation remains a great challenge for polyp segmentation. As discussed in <cit.>, reducing the gap between the network outputs for images at different scales can help the model learn scale-correlated features. Motivated by this observation, we propose a scale-enhanced consistency scheme that constrains the outputs of different-scale images to be close, so that the features at different scales can refine each other. Specifically, given unlabeled images X_u, both X_u and its downsampled version X_u^d = Down(X_u) are fed to a weight-shared encoder to extract two sets of multi-level features, from which we obtain the predicted maps S_u and S_u^d, respectively. Concretely, the two segmentation maps are obtained by S_u = D_s1(E_s(X_u)) and S_u^d = D_s2(E_s(Down(X_u))), where E_s(·) is the encoder in the student model, and Down(·) denotes a 1/2 downsampling operation. D_s1 and D_s2 are the scale-specific decoders in the student model, which are independent of each other so as to increase the perturbation. Similarly, for labeled images X_l, we can obtain the predicted segmentation maps S_l and S_l^d from X_l and its downsampled version X_l^d by S_l = D_s1(E_s(X_l)) and S_l^d = D_s2(E_s(Down(X_l))), respectively. Therefore, we form a scale-enhanced consistency loss (ℒ_SC) to enforce S and S^d to be consistent, which is defined by

ℒ_SC = ℒ_mse(S_u, S_u^d) + ℒ_mse(S_l, S_l^d),

where ℒ_mse denotes the widely used mean square error loss. Finally, the supervised loss is formulated by

ℒ_S = ℒ_sup(S_l, Y_l) + ℒ_sup(S_l^d, Down(Y_l)).

§.§ Dual-scale Complementary Fusion Module

To further make use of different-scale information, we design the fused decoders (D_s^f and D_t^f in Fig. <ref>) to generate more reliable predictions as the final segmentation maps. As shown in Fig. <ref>, taking D_s^f as an example, the features from D_s1 and D_s2 are fused and then passed through the fused decoder, whose feature flow is shown in fig:DAS(a). To achieve this, we present a dual-scale complementary fusion (DCF) module to fuse the features from the two scale-specific decoders. The proposed DCF module aims to fuse the multi-scale features and enable the features from the original scale and the downsampled scale to complement each other (as shown in fig:DAS(b)).
Taking the original-scale feature as an example, we first apply a 3×3 convolution and a Sigmoid activation function to Up(h_i^2) to obtain a scale-aware weight W_2 = σ(ℬ_conv3×3(Up(h_i^2))). Then, we multiply W_2 by ℬ_conv3×3(Up(h_i^2)); the resultant feature is the complementary information (i.e., h_com^2), which is encouraged to learn the information required by the original scale. Finally, we add the downsampled-scale feature to the original scale and then smooth the feature by a 3×3 convolution; thus we obtain h_rec^1 = ℬ_conv3×3(ℬ_conv3×3(h_i^1) + ℬ_conv3×3(Up(h_i^2)) * W_2). Similarly, we can obtain the complementary-enhanced downsampled feature h_rec^2. Through this cross-scale complementary enhancement process, we learn richer multi-scale feature representations. Further, the two features h_rec^1 and h_rec^2 are concatenated and then processed by a 1×1 convolution (reducing to two channels) and a Softmax function to obtain two weight maps α_1 and α_2, where α_1 + α_2 = 1. Finally, we obtain the multi-scale fused feature by

h_i^dcf = h_rec^1 ⊗ α_1 + h_rec^2 ⊗ α_2.

Subsequently, the fused features h_i^dcf are passed into the fused decoder. Specifically, h_5^dcf is first passed through a transposed convolutional layer so that it has the same resolution as h_4^dcf. Then, the two features are concatenated and fed into two convolutional layers. Repeating the above process, we obtain the final segmentation maps S_l^f for the labeled data. Therefore, the supervised loss in Eq. (<ref>) can be reformulated as follows:

ℒ_S = ℒ_sup(S_l, Y_l) + ℒ_sup(S_l^d, Down(Y_l)) + ℒ_sup(S_l^f, Y_l).

§.§ Scale-aware Perturbation and Cross-generative Consistency

Following previous works <cit.>, to improve the robustness of the MT model, X_u and its perturbed version X_u^p are passed through the student and teacher models, respectively, and the outputs are encouraged to be close. In this study, we design a scale-aware perturbation consistency. Specifically, we impose the perturbation consistency on images at different scales, i.e., S_u^p = D_t1(E_t(𝒫(X_u))) and S_u^dp = D_t2(E_t(𝒫(Down(X_u)))), where a random gamma adjustment is adopted as the perturbation operation (i.e., 𝒫). Meanwhile, we also obtain S_u^fp, whose constraint is implemented through the MT framework owing to its particular role. Thus, the scale-aware perturbation consistency scheme can be optimized by minimizing ℒ_SPC, which is defined by

ℒ_SPC = ℒ_mse(S_u, S_u^p) + ℒ_mse(S_u^d, S_u^dp) + ℒ_mse(S_u^f, S_u^fp).

Further, to utilize the rich information contained in the different feature maps, we propose a perturbation-based cross-generative consistency constraint to enhance the feature mining ability of our segmentation network. Therefore, we design two generative networks G_1 and G_2 (the detailed structure is shown in Fig. <ref>) for image reconstruction. The segmentation maps S_u^f and S_u^fp, obtained by aggregating the two scales of information, are fed into G_2 and G_1, respectively, and we then obtain two reconstructed images X̂_u = G_1(S_u^fp) and X̂_u^p = G_2(S_u^f). Finally, we form a cross-generative consistency loss (ℒ_CC), which is given by

ℒ_CC = ℒ_mse(X̂_u, X_u) + ℒ_mse(X̂_u^p, X_u^p).

It is worth noting that ℒ_CC enforces X̂_u and X_u to be consistent and X̂_u^p and X_u^p to be close; a sketch of these consistency terms is given below.
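For concreteness, the unlabeled-data consistency terms ℒ_SC, ℒ_SPC, and ℒ_CC can be written out as in the following minimal PyTorch-style sketch. This is our own illustration, not the authors' released code: the interfaces of the student/teacher networks and of the generators g1 and g2, the exact form of the gamma perturbation, and the resizing of the downsampled prediction before comparison are all assumptions.

import torch
import torch.nn.functional as F

def random_gamma(x, lo=0.7, hi=1.5):
    # Perturbation P: a random gamma adjustment (assumed form of the paper's
    # "random gamma" operation); inputs are assumed to lie in [0, 1].
    g = torch.empty(1, device=x.device).uniform_(lo, hi)
    return x.clamp(0.0, 1.0) ** g

def unlabeled_consistency(student, teacher, g1, g2, x_u):
    # student/teacher are assumed to map (original, downsampled) inputs to
    # (S, S^d, S^f): the two scale-specific predictions and the fused one.
    x_d = F.interpolate(x_u, scale_factor=0.5, mode='bilinear', align_corners=False)
    x_p, x_dp = random_gamma(x_u), random_gamma(x_d)

    s_u, s_ud, s_uf = student(x_u, x_d)
    with torch.no_grad():  # the teacher is EMA-updated, so no gradients flow here
        s_up, s_udp, s_ufp = teacher(x_p, x_dp)

    up = lambda s: F.interpolate(s, size=s_u.shape[-2:], mode='bilinear',
                                 align_corners=False)
    # Scale-enhanced consistency (unlabeled part of L_SC); the downsampled
    # prediction is resized before the MSE is taken.
    l_sc = F.mse_loss(s_u, up(s_ud))
    # Scale-aware perturbation consistency L_SPC.
    l_spc = (F.mse_loss(s_u, s_up) + F.mse_loss(s_ud, s_udp)
             + F.mse_loss(s_uf, s_ufp))
    # Cross-generative consistency L_CC: reconstruct each image from the
    # other branch's fused segmentation map.
    l_cc = F.mse_loss(g1(s_ufp), x_u) + F.mse_loss(g2(s_uf), x_p)
    return l_sc, l_spc, l_cc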
As a result, such cross-generation can further improve the robustness of our MT-based model, helping it learn powerful features and fully harness the knowledge in unlabeled data to boost the proposed segmentation model.

§.§ Loss Function and Optimization

Inspired by <cit.>, the supervised loss for labeled data is formulated by ℒ_sup = ℒ_IoU^w + ℒ_BCE^w, where ℒ_IoU^w and ℒ_BCE^w denote the weighted IoU loss and the weighted binary cross-entropy (BCE) loss, respectively. Finally, the total loss is given as follows:

ℒ_total = ℒ_S + ℒ_SPC + αℒ_SC + βℒ_CC,

where the hyper-parameter α is set to 1.5 and β is set to 3. In addition, the parameters θ^' of the teacher network are updated based on the parameters θ of the student network using an exponential moving average (EMA) strategy. Thus, at the t-th training step, the parameters θ^' are updated by θ^'_t = γθ^'_t-1 + (1-γ)θ_t, where γ is used to control the EMA decay, and it is set to 0.99.

§ EXPERIMENTS AND RESULTS

§.§ Experimental Setup

§.§.§ Datasets and Training Settings

We carry out comparison experiments on five public datasets, i.e., ETIS <cit.>, CVC-ClinicDB <cit.>, EndoScene-CVC300 <cit.>, CVC-ColonDB <cit.>, and Kvasir <cit.>. Following the setting in <cit.>, 1450 images are randomly selected from two datasets (Kvasir and CVC-ClinicDB) to form the training set, and the remaining images from these two datasets, together with the other three datasets (EndoScene-CVC300, ETIS, and CVC-ColonDB), form the testing set. Besides, 10% (145 images) or 30% (435 images) of the training set is used as labeled data, while the remaining images are treated as unlabeled data.

§.§.§ Implementation Details and Evaluation Metrics

The proposed segmentation framework is implemented in PyTorch and trained on one NVIDIA GeForce RTX 3090 GPU with an Adam optimizer. All inputs are uniformly rescaled to 352 × 352. Our model converges within 10000 iterations with a batch size of 8, including 4 labeled samples and 4 unlabeled samples, and the learning rate is set to 1e-4. To evaluate the effectiveness, we employ five commonly adopted metrics <cit.>, namely mean Dice (mDice), mean IoU (mIoU), F_β^w, S_α, and mean absolute error (MAE).

§.§ Comparison with State-of-the-arts

We compare the proposed DEC-Seg with ten state-of-the-art semi-supervised segmentation methods, i.e., MT <cit.>, DAN <cit.>, UA-MT <cit.>, URPC <cit.>, Duo-SegNet <cit.>, CLCC <cit.>, SLC-Net <cit.>, MC-Net+ <cit.>, CDMA <cit.>, and BCP <cit.>. For a fair comparison, we change the backbones of all compared methods to “Res2Net" and train all compared methods without any data augmentation. Moreover, we adopt a basic U-Net <cit.> model with Res2Net as the encoder backbone, denoted as “Baseline".

§.§.§ Quantitative Results

Quantitative comparisons are reported in tab:seen and tab:unseen. Regarding the learning ability on the two seen training datasets (CVC-ClinicDB and Kvasir), it can be seen from tab:seen that, while most methods learn well, our method performs better than the other compared methods under both 10% and 30% of labeled data.
In addition, thanks to the additional unlabeled data, our method improves mDice and mIoU by 8.2% and 12.0% over the baseline using only 10% labeled data, and by 4.9% and 6.3% over the baseline using only 30% labeled data on CVC-ClinicDB, which indicates that our DEC-Seg can fully leverage unlabeled data to improve the polyp segmentation performance. Additionally, we report the comparison results on three unseen datasets (CVC-300, ETIS, and CVC-ColonDB) in tab:unseen to verify the generalization ability of our DEC-Seg. Because the polyps in the CVC-300 dataset are similar to those in the training set and relatively simple, the metrics are also similar to those on the CVC-ClinicDB and Kvasir datasets. Compared with SLC-Net, which performs relatively stably on this dataset, our method achieves 3.0% and 3.5% improvements in terms of mDice and mIoU with 30% labeled data. The CVC-ColonDB and ETIS datasets are more difficult, as the polyps exhibit large variations in size. As reported in tab:unseen, compared with the BCP method on the CVC-ColonDB dataset using 10% labeled data, our method achieves 1.2% and 2.1% improvements in mDice and mIoU, respectively. Besides, our method achieves 5.3% and 6.2% improvements over the state-of-the-art semi-supervised segmentation method (BCP) in terms of mDice and mIoU with 30% labeled data. Similarly, on the ETIS dataset, our method shows an even more pronounced advantage over the other methods. The main reason is that our exploration and utilization of scale information make the model more robust to scale variation, so that it can identify small polyps that other methods miss and can also refine the edges of large polyps. Overall, the results indicate that our DEC-Seg has better generalization ability and effectively utilizes unlabeled data to boost polyp segmentation.

§.§.§ Qualitative Results

fig_result shows some visualization results on test examples under 30% labeled data. It can be seen that our method can accurately locate and segment polyps under different challenging factors. For example, in the first two rows of fig_result, the polyps have very small sizes. It can be clearly seen that Duo-SegNet recognizes the polyps incorrectly, while the UA-MT, CLCC, MC-Net+, and CDMA methods even fail to locate them. Other methods only locate some fragments of the polyps, while our method segments them accurately and completely. In the 3^rd and 4^th rows, the polyps are visually embedded in their surrounding mucosa, so it is very difficult to accurately locate and segment them. From the results, our method performs better than the other compared methods in accurately segmenting these polyps. In the 5^th and 6^th rows, the polyps have relatively large sizes, making it challenging to locate them completely. In this case, some methods (i.e., URPC, Duo-SegNet, MC-Net+, and BCP) produce several over-segmented fragments, while our method obtains promising segmentation results and produces fine boundary details. This is mainly because our method makes use of multi-scale information and cross-generative consistency to learn more powerful feature representations, which are helpful for boosting the segmentation performance. In addition, we also show some segmentation results containing multiple polyps (see the 7^th and 8^th rows). Compared with other methods, our segmentation of multiple polyps is more complete and accurate.
It can also be observed that our method effectively locates and segments polyps under different challenging factors, such as scale variation, homogeneous regions, non-sharp boundaries, and multiple polyps.

§.§ Ablation Study

To verify the effectiveness of each key component in DEC-Seg, we conduct ablation studies with 30% labeled data on the Kvasir and CVC-ColonDB datasets. The ablation results are shown in tab:abl, where “Baseline” denotes the base segmentation framework using only the labeled data.

Effectiveness of SC. To investigate the importance of scale-enhanced consistency (SC), we add the SC loss to encourage the consistency of predictions from the same inputs at different scales. From tab:abl, we observe that No.2 (Baseline + SC) outperforms No.1 and obtains an 8.9% improvement in mean Dice. This result indicates that scale-enhanced consistency is very helpful for improving polyp segmentation performance and makes the model robust to scale variation.

Effectiveness of SPC and CC. We further study the contributions of scale-aware perturbation consistency (SPC) and cross-generative consistency (CC). As shown in tab:abl, No.3 performs better than No.2, indicating the effectiveness of scale-aware perturbation consistency. Besides, it can be seen that No.4 improves on No.3 on the Kvasir dataset, as the mean Dice rises from 0.865 to 0.875. These improvements indicate that introducing the cross-generative consistency loss helps the model learn details and textures and thus segment polyp tissue accurately.

Effectiveness of CFA. We then investigate the importance of the proposed CFA module. To achieve this, we add the CFA on the basis of No.4, aggregating the features of adjacent layers in the encoder before feeding them into the decoder. As reported in tab:abl, No.5 clearly improves over No.4 on the CVC-ColonDB dataset, which exhibits large scale variation; this shows that CFA can effectively capture scale information and improve segmentation performance.

Effectiveness of DCF. As shown in tab:abl, No.6 (using the proposed DCF module) outperforms No.5 on both datasets. This indicates that the fusion of multi-scale features can further improve the segmentation performance. In addition, to further validate the effectiveness of the dual-scale complementary fusion strategy in the proposed DCF module, we construct a “Basic" strategy, which conducts a concatenation operation followed by two convolution layers to integrate the two features from different scales. The comparison results are shown in tab:abl2. From tab:abl2, it can be observed that our DCF performs better than the “Basic" strategy, indicating the effectiveness of the designed dual-scale complementary fusion module. Moreover, we visualize the segmentation results obtained with three different decoders, i.e., the two scale-specific decoders and the fused decoder; the comparison results are shown in fig:three_predication. It can be observed that our method cannot accurately locate the boundaries of the polyps when using only the original-scale features or only the downsampled-scale features. However, when we integrate the features from the two scales and propagate them into the fused decoder, more accurate segmentation maps are produced (as shown in fig:three_predication (c)). This further validates that the fusion of features from different scales is helpful for improving the segmentation performance; a sketch of this fusion step follows.
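As a companion to the DCF ablation above, the fusion step producing h_i^dcf can be sketched as follows. This is our minimal PyTorch-style reading of the equations in the DCF subsection; the layer names and the exact form of the symmetric (downsampled) branch are assumptions rather than the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DCFSketch(nn.Module):
    def __init__(self, c):
        super().__init__()
        def conv3():  # channel-preserving 3x3 convolution
            return nn.Conv2d(c, c, 3, padding=1)
        self.c_h1, self.c_h2u, self.c_w2, self.smooth1 = conv3(), conv3(), conv3(), conv3()
        self.c_h2, self.c_h1d, self.c_w1, self.smooth2 = conv3(), conv3(), conv3(), conv3()
        self.select = nn.Conv2d(2 * c, 2, 1)  # 1x1 conv -> two weight maps

    def forward(self, h1, h2):
        # h1: original-scale decoder feature; h2: downsampled-scale feature.
        h2u = F.interpolate(h2, size=h1.shape[-2:], mode='bilinear', align_corners=False)
        w2 = torch.sigmoid(self.c_w2(h2u))                    # scale-aware weight W_2
        h_rec1 = self.smooth1(self.c_h1(h1) + self.c_h2u(h2u) * w2)
        # Symmetric branch at the downsampled scale (assumed mirror of the above).
        h1d = F.interpolate(h1, size=h2.shape[-2:], mode='bilinear', align_corners=False)
        w1 = torch.sigmoid(self.c_w1(h1d))
        h_rec2 = self.smooth2(self.c_h2(h2) + self.c_h1d(h1d) * w1)
        h_rec2 = F.interpolate(h_rec2, size=h1.shape[-2:], mode='bilinear', align_corners=False)
        # Softmax selection: alpha_1 + alpha_2 = 1 at every spatial position.
        a = torch.softmax(self.select(torch.cat([h_rec1, h_rec2], dim=1)), dim=1)
        return h_rec1 * a[:, 0:1] + h_rec2 * a[:, 1:2]        # h^dcf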
§ CONCLUSION

In this paper, we have presented a novel semi-supervised learning framework (DEC-Seg) for polyp segmentation from colonoscopy images. The proposed cross-level feature aggregation module integrates adjacent features from different resolutions to enhance the features' representation ability. A scale-enhanced consistency is then proposed to handle scale variation and learn more scale-aware features. Moreover, multiple consistency strategies, i.e., scale-aware perturbation consistency and cross-generative consistency, are presented to enhance the learning process and fully leverage unlabeled data to boost the segmentation performance. Further, we propose the dual-scale complementary fusion module to integrate the features from the two scale-specific decoders, which are then passed through the fused decoder to produce the final segmentation maps. Experimental results on five benchmark datasets show that our DEC-Seg is superior to state-of-the-art semi-supervised segmentation methods.
http://arxiv.org/abs/2312.16039v1
{ "authors": [ "Yunqi Gu", "Tao Zhou", "Yizhe Zhang", "Yi Zhou", "Kelei He", "Chen Gong", "Huazhu Fu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231226125631", "title": "Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Polyp Segmentation" }
Adnan Theerens (corresponding author, [email protected]) and Chris Cornelis ([email protected]); Computational Web Intelligence, Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium

Rough set theory is a well-known mathematical framework that can deal with inconsistent data by providing lower and upper approximations of concepts. A prominent property of these approximations is their granular representation: that is, they can be written as unions of simple sets, called granules. The latter can be identified with “if…, then…” rules, which form the backbone of rough set rule induction. It has been shown previously that this property can be maintained for various fuzzy rough set models, including those based on ordered weighted average (OWA) operators. In this paper, we will focus on some instances of the general class of fuzzy quantifier-based fuzzy rough sets (FQFRS). In these models, the lower and upper approximations are evaluated using binary and unary fuzzy quantifiers, respectively. One of the main targets of this study is to examine the granular representation of different models of FQFRS. The main findings reveal that Choquet-based fuzzy rough sets can be represented granularly under the same conditions as OWA-based fuzzy rough sets, whereas Sugeno-based FRS can always be represented granularly. This observation highlights the potential of these models for resolving data inconsistencies and managing noise.

Fuzzy quantification, fuzzy rough sets, machine learning, granular computing

§ INTRODUCTION

Fuzzy rough sets (FRS) <cit.> represent a fusion of fuzzy sets and rough sets, specifically designed to manage vague and potentially inconsistent information. Fuzzy sets model vague information by acknowledging that membership in certain concepts, or the logical truth of particular propositions, exists on a spectrum. Rough sets, in turn, address potential inconsistencies by offering both a lower and an upper approximation of a concept with respect to an indiscernibility relation between objects.

The criteria for inclusion in the lower and upper approximations within rough sets can be formulated using quantifiers. Traditionally, an object is part of the lower approximation of a concept if all objects indiscernible from it are also members of the concept. In the context of fuzzy quantifier-based fuzzy rough sets (FQFRS) <cit.>, this requirement is relaxed by employing a fuzzy quantifier such as “most” instead of the universal quantifier. This relaxation introduces a degree of tolerance towards noise into the approximations, enhancing their robustness.

Granular representations, whether in the context of rough sets <cit.> or fuzzy rough sets <cit.>, involve expressing the lower and upper approximations as a combination of elementary (fuzzy) sets, referred to as granules, which are derived from the underlying data. These representations hold particular significance in the domain of rule induction. Rule induction entails the generation of a set of rules that establish relationships between object descriptions and decision classes. The granules constituting rough sets and fuzzy rough sets can be interpreted as “if..., then...” rules, which are easily comprehensible and interpretable. These granules can be leveraged to construct a rule-based inference system that serves as a predictive model.
For instance, the LEM2 algorithm outlined in <cit.> is a rule induction algorithm designed for fuzzy rough sets. In <cit.> the authors proved that OWA-based fuzzy rough approximations are granularly representable sets when using D-convex left-continuous t-norms and their residual implicators for calculating the approximations. In this paper, we extend this result to Choquet-based FRS <cit.> and show that Sugeno-based FRS are granularly representable without the D-convexity condition on the t-norm. Furthermore, we show that several other FQFRS models are not granularly representable, which makes them less suitable for rule induction purposes.

The rest of this paper is structured as follows: Section <ref> provides a review of the necessary preliminaries on FQFRS and the granular representation of OWA-based FRS. Section <ref> demonstrates that Choquet-based FRS can be represented granularly under the same conditions as OWA-based FRS. Section <ref> proves that Sugeno-based fuzzy rough sets can be represented granularly without any extra conditions on the fuzzy set connectives. In Section <ref> the granularity of other FQFRS models is discussed. Finally, Section <ref> concludes this paper and outlines future work.

§ PRELIMINARIES

§.§ Implicator-conjunctor based fuzzy rough sets

We will use the notation 𝒫(X) to represent the powerset of X and assume X to be finite throughout this paper. Likewise, we will use the notation ℱ(X) to represent the set consisting of all fuzzy sets on X.

A fuzzy relation R∈ℱ(X× X) may satisfy one or more of the following properties:
* Reflexivity: for every x in X, R(x,x)=1,
* Symmetry: for every x and y in X, R(x,y) = R(y,x),
* 𝒯-transitivity with respect to a t-norm 𝒯: 𝒯(R(x,y),R(y,z)) ≤ R(x,z) holds for every x, y, and z in X.
A fuzzy relation that is reflexive, symmetric, and 𝒯-transitive is called a 𝒯-equivalence relation.

We define the R-foreset of an element y∈ X and a fuzzy relation R ∈ℱ(X× X) as the fuzzy set Ry(x) := R(x,y). The extension of a mapping 𝒪:[0,1]^2 → [0,1] to fuzzy sets (i.e., ℱ(X)^2 →ℱ(X)) will be denoted by the same symbol:

𝒪(A,B)(x) := 𝒪(A(x), B(x)), ∀ x ∈ X.

Pawlak <cit.> introduced the lower and upper approximation of A∈𝒫(X) w.r.t. an equivalence relation R∈𝒫(X× X) as:

\underline{apr}_R A = {x∈ X : [x]_R ⊆ A} = {x∈ X : (∀ y∈ X)((x,y)∈ R ⟹ y∈ A)},
\overline{apr}_R A = {x∈ X : [x]_R ∩ A ≠ ∅} = {x∈ X : (∃ y∈ X)((x,y)∈ R ∧ y∈ A)}.

In Radzikowska et al.'s work <cit.>, an implicator-conjunctor-based extension was introduced for fuzzy relations and fuzzy sets. This extension defines the lower and upper approximation of A∈ℱ(X) w.r.t. R∈ℱ(X× X) as follows:

(apr^ℐ_R A)(x) = min_{y∈ X} ℐ(R(x,y),A(y)),
(apr^𝒞_R A)(x) = max_{y∈ X} 𝒞(R(x,y),A(y)),

where ℐ is an implicator[An implicator is a binary operator ℐ: [0,1]^2→[0,1] that is non-increasing in the first argument, non-decreasing in the second argument and for which ℐ(0,0)=ℐ(0,1)=ℐ(1,1)=1 and ℐ(1,0)=0 hold.] and 𝒞 a conjunctor[A conjunctor is a binary operator 𝒞:[0,1]^2→ [0,1] which is increasing in both arguments, satisfies 𝒞(0,0)=𝒞(0,1)=0 and for which 𝒞(1,x)=x holds for all x∈[0,1].
A commutative and associative conjunctor 𝒯 is called a t-norm.]. If 𝒯 is a left-continuous t-norm and ℐ its R-implicator[The residual implicator (R-implicator) of a t-norm 𝒯 is defined as ℐ_𝒯(x,y) = sup{λ∈ [0,1] : 𝒯(x,λ) ≤ y}, for all x,y ∈ [0,1].], we have

ℐ(𝒯(x,y), z) = ℐ(x, ℐ(y,z)),

for all x,y,z ∈ [0,1].

Suppose 𝒯 is a left-continuous t-norm, ℐ its R-implicator, R a fuzzy 𝒯-equivalence relation on X and A∈ℱ(X); then the lower and upper approximation satisfy the following properties:
* (inclusion) apr^ℐ_R A ⊆ A ⊆ apr^𝒯_R A,
* (idempotence) apr^ℐ_R(apr^ℐ_R A) = apr^ℐ_R A, apr^𝒯_R(apr^𝒯_R A) = apr^𝒯_R A,
* (exact approximation) ∀ A∈ℱ(X): apr^ℐ_R A = A ⇔ A = apr^𝒯_R A.

The extension of the concept of granular representability to fuzzy rough approximations was first explored by Degang et al. in 2011 <cit.>. There, the authors introduced the notion of a fuzzy granule as:

R_λ(x) = {(y, 𝒯(λ, R(x,y))) : y∈ X},

where λ ranges over the interval [0,1], x belongs to the set X, and 𝒯 represents a t-norm.

<cit.> We call A∈ℱ(X) granularly representable if

A = ⋃{R_λ(x) : λ∈ [0,1], x∈ X, R_λ(x) ⊆ A}.

<cit.> Let 𝒯 be a left-continuous t-norm, ℐ its residual implicator and R a 𝒯-equivalence relation on X. A fuzzy set A∈ℱ(X) is granularly representable w.r.t. the relation R if and only if apr^ℐ_R(A) = A = apr^𝒯_R(A).

The intuition behind this definition is that if A can be constructed from simple sets, i.e., fuzzy granules, then A will be free from any inconsistencies.

<cit.> A fuzzy set A∈ℱ(X) is granularly representable if and only if it satisfies the consistency property, i.e.,

𝒯(R(x,y),A(y)) ≤ A(x), ∀ x,y ∈ X.

An element y is called consistent with respect to a fuzzy relation R∈ℱ(X × X) and a fuzzy set A∈ℱ(X) if and only if

𝒯(R(x,y),A(y)) ≤ A(x), ∀ x∈ X.

Let 𝒯 be a t-norm and ℐ its residual implicator. An element y is consistent with respect to a reflexive fuzzy relation R∈ℱ(X × X) and a fuzzy set A∈ℱ(X) if and only if apr^ℐ_R(A)(y) = A(y).

Since ℐ is the residual implicator of 𝒯, we have

y consistent ⇔ (∀ x∈ X)(𝒯(R(x,y),A(y)) ≤ A(x)) ⇔ (∀ x∈ X)(A(y) ≤ ℐ(R(x,y),A(x))) ⇔ A(y) ≤ inf_{x∈ X} ℐ(R(x,y),A(x)) ⇔ A(y) = inf_{x∈ X} ℐ(R(x,y),A(x)) ⇔ A(y) = apr^ℐ_R(A)(y),

where in the second-to-last step we used the reflexivity of the relation and the property that, for residual implicators, ℐ(1,x) = x holds for all x ∈ [0,1].

§.§ Choquet and Sugeno integral

Choquet and Sugeno integrals extend the concept of classical integration to a context where the measures are not necessarily additive. This allows for more flexible and realistic modeling of uncertainty and imprecision in data, making them useful in a wide range of applications, most notably decision-making <cit.>.

A set function μ:𝒫(X)→[0,1] is a monotone measure if:
* μ(∅)=0 and μ(X)=1,
* (∀ A,B∈𝒫(X))(A⊆ B ⟹ μ(A)≤μ(B)).
A monotone measure is symmetric when |A|=|B| implies μ(A)=μ(B).

The Choquet integral of f:X→ℝ with respect to a monotone measure μ on X is defined as:

∫ f dμ = ∑_{i=1}^{n} f(x^∗_i)·[μ(A^∗_i) - μ(A^∗_{i+1})],

where (x^∗_1,x^∗_2,…,x^∗_n) is a permutation of X=(x_1,x_2,…,x_n) such that f(x^∗_1) ≤ f(x^∗_2) ≤ ⋯ ≤ f(x^∗_n), A^∗_i := {x^∗_i,…,x^∗_n} and μ(A^∗_{n+1}) := 0.

The Sugeno integral of f:X→ℝ with respect to a monotone measure μ on X is defined as:

⨏ f dμ = max_{i=1}^{n} min(μ({x^∗_i,…,x^∗_n}), f(x^∗_i)),

where (x^∗_1,x^∗_2,…,x^∗_n) is a permutation of X=(x_1,x_2,…,x_n) such that f(x^∗_1) ≤ f(x^∗_2) ≤ ⋯ ≤ f(x^∗_n).

We recall that Ordered Weighted Average <cit.> operators are equivalent to Choquet integrals w.r.t. symmetric monotone measures <cit.>.
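On a finite universe, both integrals reduce to simple computations over a sorted array. The following sketch (our illustration, not part of the original text) evaluates them for a hypothetical cardinality-based measure μ(E)=|E|/|X|; since this measure is symmetric, the Choquet integral below coincides with an OWA aggregation, in line with the remark above.

```python
import numpy as np

def choquet(f, mu):
    """Choquet integral of f (1-d array) w.r.t. a monotone measure mu,
    given as a callable on boolean masks over the universe."""
    order = np.argsort(f)                 # f(x*_1) <= ... <= f(x*_n)
    n = len(f)
    total = 0.0
    for i in range(n):
        upper = np.zeros(n, dtype=bool); upper[order[i:]] = True    # A*_i
        nxt = np.zeros(n, dtype=bool);   nxt[order[i + 1:]] = True  # A*_{i+1}
        total += f[order[i]] * (mu(upper) - mu(nxt))
    return total

def sugeno(f, mu):
    """Sugeno integral: max_i min(mu(A*_i), f(x*_i))."""
    order = np.argsort(f)
    n = len(f)
    best = 0.0
    for i in range(n):
        upper = np.zeros(n, dtype=bool); upper[order[i:]] = True
        best = max(best, min(mu(upper), f[order[i]]))
    return best

f = np.array([0.2, 0.9, 0.5, 0.7])
mu = lambda mask: mask.sum() / mask.size   # assumed symmetric measure
print(choquet(f, mu))   # 0.575 (for this mu: the mean of f)
print(sugeno(f, mu))    # 0.5
```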
§.§ Fuzzy quantifier-based fuzzy rough sets

One class of robust (i.e., noise tolerant) FRS is based on fuzzy quantifiers <cit.>. For a thorough exposition of the theory of fuzzy quantifiers, we refer the reader to <cit.>.

An n-ary semi-fuzzy quantifier on X≠∅ is a mapping Q:(𝒫(X))^n → [0,1]. An n-ary fuzzy quantifier on X≠∅ is a mapping Q:(ℱ(X))^n → [0,1]. Let Q:(ℱ(X))^2 → [0,1] be a binary quantification model over the universe X; then the corresponding unary quantification model Unary(Q):ℱ(X) → [0,1] is defined as Unary(Q)(A) := Q(X,A), A∈ℱ(X).

Given a reflexive fuzzy relation R∈ℱ(X× X), fuzzy quantifiers Q_l:(ℱ(X))^2 → [0,1] and Q_u:ℱ(X) → [0,1], and A∈ℱ(X), the lower and upper approximation of A w.r.t. R are given by:

(apr_{R,Q_l} A)(y) = Q_l(Ry, A),
(apr_{R,Q_u} A)(y) = Q_u(Ry ∩_𝒯 A).

Let Q_l and Q_u denote the (linguistic) quantifiers "almost all" and "some", respectively. Then the membership degree of an element y in the lower approximation of A corresponds to the truth value of the statement "Almost all elements similar to y are in A". Similarly, the membership degree of y in the upper approximation is determined by the truth value of the statement "Some elements are similar to y and are in A". Note that there exists a distinction in the quantification models applied to the lower and upper approximations. The upper approximation involves unary quantification, as the proposition "Q elements are in A and B" fundamentally represents a unary proposition with the fuzzy set A ∩_𝒯 B as its argument. This contrasts with the lower approximation, where the proposition "Q A's are B's" serves as the underlying proposition, employing a necessarily binary quantification model.

To specify a quantifier like "some" on general universes we will make use of RIM quantifiers. A fuzzy set Λ∈ℱ([0,1]) is called a regular increasing monotone (RIM) quantifier if Λ is a non-decreasing function such that Λ(0)=0 and Λ(1)=1. The interpretation of the RIM quantifier Λ is that if p is the percentage of elements for which a certain proposition P holds, then Λ(p) determines the truth value of the quantified proposition "Λ P".

Suppose μ_l and μ_u are monotone measures on a finite universe X and Λ_l and Λ_u are two RIM quantifiers; then we can define the following fuzzy quantifier fuzzy rough sets (FQFRS).

* When we define Q_l = C^ℐ_{μ_l} and Q_u = Unary(C^ℐ_{μ_u}), where

C^ℐ_μ(A,B) = ∫ ℐ(A,B) dμ,

and restrict μ_l and μ_u to symmetric measures, we get OWA-based fuzzy rough sets (OWAFRS) <cit.>. When we allow general monotone measures, we get Choquet-based fuzzy rough sets (CFRS) <cit.>. By permitting non-symmetry in μ, we increase our flexibility to incorporate additional information from the dataset (cf. <cit.>).

* When we define Q_l = S^ℐ_{μ_l} and Q_u = Unary(S^ℐ_{μ_u}), where

S^ℐ_μ(A,B) = ⨏ ℐ(A,B) dμ,

we get Sugeno-based fuzzy rough sets (SFRS).

* Let Q_l = YWIC_{Λ_l} and Q_u = Unary(YWIC_{Λ_u}), where

YWIC_Λ(A,B) = ∫ ℐ(A,B) dμ^∗_A,  μ^∗_A(E) = Λ(∑_{i=1}^{|E|} A(y^∗_i)/|A|),

where y^∗_i is defined such that A(y^∗_i) is the i-th smallest value of A(x) over x∈ X, for i ∈ {1,…,n} with |X| = n, and |A| denotes the cardinality of the fuzzy set A. The FQFRS corresponding to these quantifiers is YWIC-FQFRS <cit.>; replacing the Choquet integral with a Sugeno integral we get YWIS-FQFRS.

* When we define Q_l = WC_{Λ_l} and Q_u = Unary(WC_{Λ_u}), where

WC_Λ(A,B) = ∫ ℐ(A,B) dμ_A,  μ_A(E) = Λ(|A ∩ E|/|A|),

we get WOWAC-FQFRS <cit.>; replacing the Choquet integral with a Sugeno integral we get WOWAS-FQFRS.

Note that Unary(YWIC_Λ) and Unary(WC_Λ) are the same and are equal to

Y_Λ(A) = ∫ A dμ_Λ,  μ_Λ(E) = Λ(|E|/|X|).

Because μ_Λ represents a general symmetric measure, Y_Λ is an OWA operator (Yager's OWA model for fuzzy quantification, cf. <cit.>); hence the upper approximations of YWIC- and WOWAC-FQFRS are equivalent to those of OWAFRS. For a comparison between some of these different lower approximations, we refer the reader to <cit.>.
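To make the lower approximations above concrete, the sketch below (ours; names are illustrative) computes the OWA/Choquet-based lower approximation (apr_{R,Q_l} A)(y) = C^ℐ_{μ_l}(Ry, A) with the Łukasiewicz implicator and the symmetric measure μ(E) = Λ(|E|/|X|) induced by a RIM quantifier Λ.

```python
import numpy as np

def impl_lukasiewicz(a, b):
    return np.minimum(1.0, 1.0 - a + b)

def choquet_symmetric(f, Lam):
    """Choquet integral w.r.t. the symmetric measure mu(E) = Lam(|E|/|X|)."""
    f = np.sort(f)                                # ascending
    n = len(f)
    sizes_upper = (n - np.arange(n)) / n          # |A*_i|/n: n, n-1, ..., 1
    sizes_next = (n - np.arange(n) - 1) / n       # |A*_{i+1}|/n
    return float(np.sum(f * (Lam(sizes_upper) - Lam(sizes_next))))

def lower_approximation(R, A, Lam):
    """(apr_{R,Q_l} A)(y): Choquet integral of I(Ry, A) for each y."""
    return np.array([choquet_symmetric(impl_lukasiewicz(R[:, y], A), Lam)
                     for y in range(len(A))])

R = np.array([[1.0, 0.5, 1.0],
              [0.5, 1.0, 0.5],
              [1.0, 0.5, 1.0]])
A = np.array([0.0, 1.0, 0.0])
print(lower_approximation(R, A, lambda p: p))    # identity RIM quantifier
```

With Λ the identity, the integral reduces to the average of the implication values; choosing Λ equal to the universal quantifier (Λ(p)=0 for p<1, Λ(1)=1) recovers the standard min-based lower approximation.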
In <cit.>, the authors showed that for a specific type of fuzzy connectives and for a 𝒯-equivalence relation, OWA-based fuzzy rough approximations do not possess inconsistencies, i.e., they are granularly representable fuzzy sets.

We say that a binary operator H: [0,1]^2 → [0,1] is directionally convex or D-convex (directionally concave or D-concave) if it is a convex (concave) function in both of its arguments, i.e., for all x_1,x_2,y∈[0,1] and w_1,w_2∈[0,1] such that w_1+w_2=1, it holds that:

H(w_1x_1+w_2x_2, y) ≤ (≥) w_1 H(x_1,y) + w_2 H(x_2,y),
H(y, w_1x_1+w_2x_2) ≤ (≥) w_1 H(y,x_1) + w_2 H(y,x_2).

Let 𝒯 be a D-convex left-continuous t-norm and ℐ its R-implicator. Then ℐ is concave in its second argument.

Let 𝒯 be a D-convex left-continuous t-norm, ℐ its residual implicator, 𝐰_u and 𝐰_l two weight vectors and R a 𝒯-equivalence relation. Then for every A ∈ℱ(X) we have

apr^ℐ_R(apr_{R,𝐰_l}(A)) = apr_{R,𝐰_l}(A)  and  apr^𝒯_R(apr_{R,𝐰_u}(A)) = apr_{R,𝐰_u}(A),

where apr_{R,𝐰_l} and apr_{R,𝐰_u} denote the OWA-based lower and upper approximation, respectively.

Let 𝒯 be a D-convex left-continuous t-norm, ℐ its residual implicator, 𝐰_u and 𝐰_l two weight vectors and R a 𝒯-equivalence relation. Then for every A ∈ℱ(X) we have

apr_{R,𝐰_l}(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr_{R,𝐰_l}(A)},
apr_{R,𝐰_u}(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr_{R,𝐰_u}(A)},

where apr_{R,𝐰_l} and apr_{R,𝐰_u} denote the OWA-based lower and upper approximation, respectively.

§ GRANULARITY OF CHOQUET-BASED FUZZY ROUGH SETS

In this section, we generalize the granular representation of OWA-based fuzzy rough sets (cf. <cit.>) to Choquet-based fuzzy rough sets. In particular, we show that under the same requirements on the fuzzy connectives, the Choquet-based fuzzy rough approximations are still free from any inconsistencies, i.e., they are granularly representable sets. We first recall Jensen's inequality <cit.>, and then prove a specific variant for Choquet integrals.

Let f:ℝ→ℝ be a convex (concave) function, x_1,x_2,…,x_n∈ℝ and w_1,w_2,…,w_n∈[0,1] weights (∑_{i=1}^{n} w_i = 1). Then we have

f(∑_{i=1}^{n} w_i x_i) ≤ (≥) ∑_{i=1}^{n} w_i f(x_i).

Let Φ: ℝ^+ →ℝ^+ be a non-decreasing function, f: X →ℝ^+ and μ a monotone measure on X. If Φ is convex, we have

Φ(∫ f dμ) ≤ ∫ Φ(f) dμ.

If Φ is concave, we have

Φ(∫ f dμ) ≥ ∫ Φ(f) dμ.

Let Φ be convex; the proof for a concave Φ proceeds analogously. Using Jensen's inequality we have

Φ(∫ f dμ) = Φ(∑_{i=1}^{n} f(x^∗_i)·[μ(A^∗_i) - μ(A^∗_{i+1})]) ≤ ∑_{i=1}^{n} Φ(f(x^∗_i))·[μ(A^∗_i) - μ(A^∗_{i+1})] = ∫ Φ(f) dμ,

where we can apply Jensen's inequality because

∑_{i=1}^{n} [μ(A^∗_i) - μ(A^∗_{i+1})] = μ(X) - μ(∅) = 1,

and the last equality in Eq. (<ref>) holds because of the non-decreasingness of Φ (hence it is order-preserving).

Let μ be a monotone measure on X and f:X^2→ℝ. Then we have the following inequalities:

∫ min_{x∈ X} f(x,y) dμ(y) ≤ min_{x∈ X} ∫ f(x,y) dμ(y),
∫ max_{x∈ X} f(x,y) dμ(y) ≥ max_{x∈ X} ∫ f(x,y) dμ(y),

where ∫ denotes either a Choquet integral or a Sugeno integral.

This follows directly from the monotonicity of the Choquet and Sugeno integrals and the fact that

min_{z∈ X} f(z,y) ≤ f(x,y)  and  max_{z∈ X} f(z,y) ≥ f(x,y),

for every x∈ X.

Let 𝒯 be a D-convex left-continuous t-norm, ℐ its residual implicator, μ_u and μ_l two monotone measures and R a 𝒯-equivalence relation.
Then for every A ∈ℱ(X) we have

apr^ℐ_R(apr^{μ_l}_R(A)) = apr^{μ_l}_R(A)  and  apr^𝒯_R(apr^{μ_u}_R(A)) = apr^{μ_u}_R(A),

where apr^{μ_l}_R and apr^{μ_u}_R denote the Choquet lower and upper approximation, respectively.

Observe that because of the 𝒯-transitivity and reflexivity we have

R(x,y) = max_{z∈ X} 𝒯(R(x,z), R(z,y)).

We start with the lower approximation. Note that

apr^ℐ_R(apr^{μ_l}_R(A)) ⊆ apr^{μ_l}_R(A)

follows directly from the inclusion property of the lower approximation. Using this and noting that ℐ is concave in its second argument (Proposition <ref>), thus allowing the use of Jensen's inequality for Choquet integrals, we obtain the other inclusion:

apr^{μ_l}_R(A)(y) = ∫ ℐ(Ry,A) dμ_l
= ∫ ℐ(max_{z∈ X} 𝒯(R(x,z),R(z,y)), A(x)) dμ_l(x)
= ∫ min_{z∈ X} ℐ(𝒯(R(x,z),R(z,y)), A(x)) dμ_l(x)
≤ min_{z∈ X} ∫ ℐ(𝒯(R(x,z),R(z,y)), A(x)) dμ_l(x)
= min_{z∈ X} ∫ ℐ(R(z,y), ℐ(R(x,z),A(x))) dμ_l(x)
≤ min_{z∈ X} ℐ(R(z,y), ∫ ℐ(R(x,z),A(x)) dμ_l(x))
= min_{z∈ X} ℐ(R(z,y), apr^{μ_l}_R(A)(z)) = apr^ℐ_R(apr^{μ_l}_R(A))(y),

where we made use of the monotonicity of t-norms and implicators, Proposition <ref>, Lemma <ref> and Jensen's inequality for Choquet integrals with Φ(x) = ℐ(R(z,y),x). For the upper approximation the proof proceeds analogously:

apr^{μ_u}_R(A)(y) = ∫ 𝒯(Ry,A) dμ_u
= ∫ 𝒯(max_{z∈ X} 𝒯(R(x,z),R(z,y)), A(x)) dμ_u(x)
= ∫ max_{z∈ X} 𝒯(𝒯(R(x,z),R(z,y)), A(x)) dμ_u(x)
≥ max_{z∈ X} ∫ 𝒯(𝒯(R(x,z),R(z,y)), A(x)) dμ_u(x)
= max_{z∈ X} ∫ 𝒯(R(z,y), 𝒯(R(x,z),A(x))) dμ_u(x)
≥ max_{z∈ X} 𝒯(R(z,y), ∫ 𝒯(R(x,z),A(x)) dμ_u(x))
= max_{z∈ X} 𝒯(R(z,y), apr^{μ_u}_R(A)(z)) = apr^𝒯_R(apr^{μ_u}_R(A))(y),

where commutativity and associativity of the t-norm are used, as well as Lemma <ref> and Jensen's inequality for Choquet integrals with Φ(x) = 𝒯(R(z,y),x). The other inclusion follows directly from the inclusion property of upper approximations.

Let 𝒯 be a D-convex left-continuous t-norm, ℐ its residual implicator, μ_u and μ_l two monotone measures and R a 𝒯-equivalence relation. Then for every A ∈ℱ(X) we have

apr^{μ_l}_R(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr^{μ_l}_R(A)},
apr^{μ_u}_R(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr^{μ_u}_R(A)},

where apr^{μ_l}_R and apr^{μ_u}_R denote the Choquet lower and upper approximation, respectively.

This follows directly from Proposition <ref> and the exact approximation property of fuzzy rough sets (Proposition <ref>).

§ GRANULARITY OF SUGENO-BASED FUZZY ROUGH SETS

In this section, we prove that Sugeno-based fuzzy rough sets are granularly representable under the same conditions as classical fuzzy rough sets. As a result, they are free from any inconsistencies. Sugeno-based lower and upper approximations can thus be seen as a way to simultaneously remove inconsistencies and noise.

Let μ be a monotone measure on a finite universe X, f: X →ℝ^+ and Φ: ℝ^+ →ℝ^+ a non-decreasing function. If Φ(x) ≥ x for every x∈ [0,μ(X)], we have

Φ(⨏ f dμ) ≥ ⨏ Φ(f) dμ.

If Φ(x) ≤ x for every x∈ [0,μ(X)], we have

Φ(⨏ f dμ) ≤ ⨏ Φ(f) dμ.

We will only prove the first inequality; the second is proved analogously. Let (x^∗_1,x^∗_2,…,x^∗_n) be a permutation of X=(x_1,x_2,…,x_n) such that f(x^∗_1) ≤ f(x^∗_2) ≤ ⋯ ≤ f(x^∗_n), and define A^∗_i := {x^∗_i,…,x^∗_n}. Note that because of the non-decreasingness of Φ we have Φ(f(x^∗_1)) ≤ Φ(f(x^∗_2)) ≤ ⋯ ≤ Φ(f(x^∗_n)), hence

Φ(⨏ f dμ) = Φ(max_{i=1}^{n} min(μ(A^∗_i), f(x^∗_i))) = max_{i=1}^{n} Φ(min(μ(A^∗_i), f(x^∗_i))) = max_{i=1}^{n} min(Φ(μ(A^∗_i)), Φ(f(x^∗_i))) ≥ max_{i=1}^{n} min(μ(A^∗_i), Φ(f(x^∗_i))) = ⨏ Φ(f) dμ,

where the inequality follows from the fact that Φ(x) ≥ x for every x∈ [0,μ(X)].

Let 𝒯 be a left-continuous t-norm, ℐ its residual implicator, μ_u and μ_l two monotone measures and R a 𝒯-equivalence relation.
Then for every A ∈ℱ(X) we have

apr^ℐ_R(apr^{μ_l}_R(A)) = apr^{μ_l}_R(A)  and  apr^𝒯_R(apr^{μ_u}_R(A)) = apr^{μ_u}_R(A),

where apr^{μ_l}_R and apr^{μ_u}_R denote the Sugeno lower and upper approximation, respectively.

For the lower approximation, note that Φ(x) = ℐ(R(z,y),x) satisfies the requirements of Lemma <ref>, i.e., Φ(x) ≥ x and Φ is increasing. Indeed,

x ≤ sup{λ : 𝒯(y,λ) ≤ x} = ℐ(y,x),

because 𝒯(y,x) ≤ min(y,x) ≤ x, while the increasingness follows from the increasingness of ℐ in its second argument. The rest of the proof is analogous to that of Theorem <ref>. For the upper approximation, note that Φ(x) = 𝒯(R(z,y),x) satisfies the requirements of Lemma <ref>, i.e., Φ(x) ≤ x and Φ is increasing. Indeed,

x ≥ min(y,x) ≥ 𝒯(y,x),

while the increasingness follows from the increasingness of 𝒯. The rest of the proof is analogous to that of Theorem <ref>.

Let 𝒯 be a left-continuous t-norm, ℐ its residual implicator, μ_u and μ_l two monotone measures and R a 𝒯-equivalence relation. Then for every A ∈ℱ(X) we have

apr^{μ_l}_R(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr^{μ_l}_R(A)},
apr^{μ_u}_R(A) = ⋃{R_λ(x) : R_λ(x) ⊆ apr^{μ_u}_R(A)},

where apr^{μ_l}_R and apr^{μ_u}_R denote the Sugeno lower and upper approximation, respectively.

This follows directly from Proposition <ref> and the exact approximation property of fuzzy rough sets (Proposition <ref>).

§ GRANULARITY OF OTHER FUZZY QUANTIFIER-BASED FUZZY ROUGH SETS

In this section, we will show that YWI-FQFRS and WOWA-FQFRS are not granularly representable. In addition, we will show that on realistic datasets these inconsistencies do not occur frequently.

§.§ Counterexamples of the granularity of YWI-FQFRS and WOWA-FQFRS

The following examples show that YWI-FQFRS and WOWA-FQFRS (both the Choquet and Sugeno versions) are not granularly representable, even when using a convex t-norm and its residual implicator. We will make use of the following well-known propositions.

The Łukasiewicz t-norm 𝒯_L(x,y) = max(0, x+y-1) is convex. Also note that the R-implicator of the Łukasiewicz t-norm 𝒯_L is equal to the Łukasiewicz implicator ℐ_L(x,y) = min(1, 1-x+y).

[Counterexample for the granularity of YWIC-FQFRS] Let X = {x_1,x_2,x_3,x_4,x_5}, Λ the identity RIM quantifier (Λ(x) = x) and A = {x_1,x_2,x_3}, which we will also denote as A = [1.0, 1.0, 1.0, 0.0, 0.0]. Furthermore, suppose we have one attribute a on X given by [1.0, 0.5, 1.0, 0.0, 0.0]. Using this attribute and the fuzzy 𝒯_L-equivalence relation R on X defined by

R(x,y) = 1 - |a(y) - a(x)|,

we get the following membership degrees:

R = [ 1.0 0.5 1.0 0.0 0.0; 0.5 1.0 0.5 0.5 0.5; 1.0 0.5 1.0 0.0 0.0; 0.0 0.5 0.0 1.0 1.0; 0.0 0.5 0.0 1.0 1.0 ].

Making use of Λ(x) = x, Eq. (<ref>) reduces to

YWIC_Λ(C,B) = ∑_{i=1}^{n} (ℐ_L(C,B))(x^∗_i) · C(y^∗_i)/|C|.

Let us now calculate apr^{YWIC}_R(A):

apr^{YWIC}_R(A)(x) = YWIC_Λ(Rx,A) = ∑_{i=1}^{5} (ℐ_L(Rx,A))(x^∗_i) · Rx(y^∗_i)/|Rx|,

where x^∗_i and y^∗_i are defined such that ℐ_L(Rx,A)(x^∗_i) is the i-th largest value of ℐ_L(Rx,A)(y) and Rx(y^∗_i) is the i-th smallest value of Rx(y) over y∈ X, for i ∈ {1,…,5}. Calculating ℐ_L(Rx,A) (with ℐ_L(x,y) = min(1, 1-x+y)) gives

[ℐ_L(R(x_i,x_j), A(x_j))] = [ 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 0.5 0.5; 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 0.0 0.0; 1.0 1.0 1.0 0.0 0.0 ].

Notice that each row is already sorted in decreasing order, so x^∗_i = x_i for i ∈ {1,…,5}.
We now sort each row of R in increasing order:

[R(x_i,(y^∗_i)_j)] = [ 0.0 0.0 0.5 1.0 1.0; 0.5 0.5 0.5 0.5 1.0; 0.0 0.0 0.5 1.0 1.0; 0.0 0.0 0.5 1.0 1.0; 0.0 0.0 0.5 1.0 1.0 ].

Adding everything together, we get (with cardinalities |Rx| = [2.5, 3.0, 2.5, 2.5, 2.5])

apr^{YWIC}_R(A) = [1.0, 0.75, 1.0, 0.2, 0.2].

Let us now calculate apr^ℐ_R(apr^{YWIC}_R(A)):

[ℐ_L(R(x_i,x_j), apr^{YWIC}_R(A)(x_j))] = [ 1.0 1.0 1.0 1.0 1.0; 1.0 0.75 1.0 0.7 0.7; 1.0 1.0 1.0 1.0 1.0; 1.0 1.0 1.0 0.2 0.2; 1.0 1.0 1.0 0.2 0.2 ],

thus

apr^ℐ_R(apr^{YWIC}_R(A)) = [1.0, 0.7, 1.0, 0.2, 0.2] ≠ apr^{YWIC}_R(A),

and

apr^{YWIC}_R(A) - apr^ℐ_R(apr^{YWIC}_R(A)) = [0.0, 0.05, 0.0, 0.0, 0.0].

[Counterexample for the granularity of YWIS-FQFRS] Let X = {x_1,x_2,x_3,x_4,x_5}, Λ the identity RIM quantifier (Λ(x) = x) and A = {x_1,x_2,x_3}, which we will also denote as A = [1.0, 1.0, 1.0, 0.0, 0.0]. Furthermore, suppose we have one attribute a on X given by [0, 0, 0.2, 0.91, 1]. Using this attribute and the fuzzy 𝒯_L-equivalence relation R(x,y) = 1 - |a(y) - a(x)|, we get the following membership degrees:

R = [ 1.0 1.0 0.8 0.09 0.0; 1.0 1.0 0.8 0.09 0.0; 0.8 0.8 1.0 0.29 0.2; 0.09 0.09 0.29 1.0 0.91; 0.0 0.0 0.2 0.91 1.0 ].

A straightforward calculation gives:

apr^{YWIS}_R(A) = [0.91, 0.91, 0.71, 0.19747, 0.0948],
apr^ℐ_R(apr^{YWIS}_R(A)) = [0.91, 0.91, 0.71, 0.18479, 0.0948],

and

apr^{YWIS}_R(A) - apr^ℐ_R(apr^{YWIS}_R(A)) = [0.0, 0.0, 0.0, 0.01267, 0.0].

[Counterexample for the granularity of WOWAC-FQFRS] Let X = {x_1,x_2,x_3}, Λ the identity RIM quantifier (Λ(x) = x) and A = {x_2}, which we will also denote as A = [0.0, 1.0, 0.0]. Furthermore, suppose we have one attribute a on X given by [0.0, 0.5, 0.0]. Using this attribute and the fuzzy 𝒯_L-equivalence relation R(x,y) = 1 - |a(y) - a(x)|, we get:

R = [ 1.0 0.5 1.0; 0.5 1.0 0.5; 1.0 0.5 1.0 ].

Making use of Λ(x) = x, Eq. (<ref>) reduces to

WC_Λ(C,B) = ∑_{i=1}^{n} (ℐ_L(C,B))(x^∗_i) · C(x^∗_i)/|C| = ∑_{i=1}^{n} (ℐ_L(C,B))(x_i) · C(x_i)/|C|.

Let us now calculate apr^{WOWAC}_R(A):

apr^{WOWAC}_R(A)(x) = WC_Λ(Rx,A) = ∑_{i=1}^{3} (ℐ_L(Rx,A))(x_i) · Rx(x_i)/|Rx|.

Calculating ℐ_L(Rx,A) (with ℐ_L(x,y) = min(1, 1-x+y)) gives

[ℐ_L(R(x_i,x_j), A(x_j))] = [ 0.0 1.0 0.0; 0.5 1.0 0.5; 0.0 1.0 0.0 ].

Adding everything together, we get (with |Rx| = [2.5, 2, 2.5])

apr^{WOWAC}_R(A) = [0.2, 0.75, 0.2].

Let us now calculate apr^ℐ_R(apr^{WOWAC}_R(A)):

[ℐ_L(R(x_i,x_j), apr^{WOWAC}_R(A)(x_j))] = [ 0.2 1.0 0.2; 0.7 0.75 0.7; 0.2 1.0 0.2 ],

thus

apr^ℐ_R(apr^{WOWAC}_R(A)) = [0.2, 0.7, 0.2] ≠ apr^{WOWAC}_R(A),

and

apr^{WOWAC}_R(A) - apr^ℐ_R(apr^{WOWAC}_R(A)) = [0.0, 0.05, 0.0].

[Counterexample for the granularity of WOWAS-FQFRS] Let X = {x_1,x_2,x_3,x_4,x_5}, Λ the identity RIM quantifier (Λ(x) = x) and A the fuzzy set given by A = [1.0, 1.0, 0.5, 0.5, 0.0]. Furthermore, suppose we have one attribute a on X given by [0.8, 0.0, 0.0, 0.0, 1.0]. Using this attribute and the fuzzy 𝒯_L-equivalence relation R(x,y) = 1 - |a(y) - a(x)|, we get:

R = [ 1.0 0.2 0.2 0.2 0.8; 0.2 1.0 1.0 1.0 0.0; 0.2 1.0 1.0 1.0 0.0; 0.2 1.0 1.0 1.0 0.0; 0.8 0.0 0.0 0.0 1.0 ].

A straightforward calculation gives:

apr^{WOWAS}_R(A) = [0.666…, 0.5, 0.5, 0.5, 0.44…],
apr^ℐ_R(apr^{WOWAS}_R(A)) = [0.644…, 0.5, 0.5, 0.5, 0.44…],

and

apr^{WOWAS}_R(A) - apr^ℐ_R(apr^{WOWAS}_R(A)) = [0.022…, 0.0, 0.0, 0.0, 0.0].

As we can see in the above counterexamples, the inconsistencies always occur in only one element (cf. Proposition <ref>), and the difference between the FQFRS approximations and their inconsistency-free lower approximations never exceeds 0.05.
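These counterexamples are easy to check numerically. The sketch below (ours) reproduces the first one, recomputing apr^{YWIC}_R(A) and its classical lower approximation for the identity RIM quantifier.

```python
import numpy as np

I_L = lambda a, b: np.minimum(1.0, 1.0 - a + b)     # Lukasiewicz implicator

def ywic(C, B):
    """YWIC_Lambda(C, B) for the identity RIM quantifier: pair the i-th
    largest implication value with the i-th smallest membership in C."""
    impl_desc = np.sort(I_L(C, B))[::-1]
    memb_asc = np.sort(C)
    return float(np.sum(impl_desc * memb_asc) / C.sum())

def lower(R, g):
    """Classical lower approximation: min_x I_L(R(x,y), g(x)) for each y."""
    return I_L(R, g[:, None]).min(axis=0)

a = np.array([1.0, 0.5, 1.0, 0.0, 0.0])
A = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
R = 1.0 - np.abs(a[:, None] - a[None, :])

g = np.array([ywic(R[:, y], A) for y in range(len(A))])
print(g)             # [1.   0.75 1.   0.2  0.2 ]
print(lower(R, g))   # [1.   0.7  1.   0.2  0.2 ] -> inconsistency at x_2
```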
In the next section, we will evaluate whether this observation also extends to typical real-life datasets.

§.§ Granularity on realistic datasets

Because the WOWA and YWI FQFRS models lack granular representability, inconsistencies may arise in their lower approximations, rendering them unsuitable for rule induction. Yet, the question remains: to what extent does this issue manifest in real-world datasets? To answer this question we will examine the occurrence of these inconsistencies in classification datasets.

§.§.§ Setup

To measure the extent of inconsistencies remaining in the lower approximation of a dataset, we compute the following error, as well as the percentage of inconsistent elements (cf. Proposition <ref>):

∑_{C∈ d} |apr^{FQFRS}_R(C) - apr^ℐ_R(apr^{FQFRS}_R(C))| / (#Classes · #Instances),

and

∑_{C∈ d} |{x∈ X : apr^{FQFRS}_R(C)(x) - apr^ℐ_R(apr^{FQFRS}_R(C))(x) > 0}| / (#Classes · #Instances),

where d is the set of classes, and #Classes and #Instances represent the number of classes and instances, respectively. This calculation is based on Proposition <ref> and the exact approximation property of fuzzy rough sets (as shown in Proposition <ref>). When there are no inconsistencies, these values are zero. We conduct our experiment on 20 classification datasets (Table <ref>) from the UCI repository <cit.>. The different FQFRS models we evaluate are:

FQFRS ∈ {WOWAC, WOWAS, YWIC, YWIS}.

The Łukasiewicz t-norm and its residual implicator are used in all models. The lower approximations are evaluated using the RIM quantifiers Λ(x) = Λ_{(α,1)}(x) with

Λ_{(α,β)}(p) =
0                              if p ≤ α,
2(p-α)^2/(β-α)^2               if α ≤ p ≤ (α+β)/2,
1 - 2(p-β)^2/(β-α)^2           if (α+β)/2 ≤ p ≤ β,
1                              if β ≤ p,

i.e., Zadeh's S-function <cit.>, and

α ∈ {0.6, 0.7, …, 0.9, 0.91, 0.92, …, 0.99, 1},

where we have chosen finer steps at the end to observe the convergence of Λ_{(α,1)} to the universal quantifier as α approaches 1. We use the following 𝒯_L-equivalence relation:

R(x,y) = (1/|𝒜|) ∑_{a∈𝒜} max(0, 1 - |a(y)-a(x)|/σ_a),

where 𝒜 is the set of conditional attributes and σ_a denotes the standard deviation of the conditional attribute a∈𝒜.

§.§.§ Results and discussion

Table <ref> shows the maximal error and maximal percentage of inconsistencies over all models and α-values for each dataset. This table reveals that the errors do not exceed 0.004, and for certain datasets they are negligible, since they can almost certainly be attributed to floating-point errors. The WOWAC model attains nearly all of these maximum errors. Table <ref> illustrates the maximum error and the highest percentage of inconsistencies across all α-values for the YWIC model. Note the substantial contrast between the two models, with the YWIC model exhibiting fewer inconsistencies than the WOWAC model. To further examine the differences between the four FQFRS models, we depict the error with respect to the α-parameter for the four datasets with the largest errors in Figure <ref>, and the percentage of inconsistencies for the four datasets with the largest percentage of inconsistencies in Figure <ref>. In all cases, the WOWA models consistently display the most significant errors and a higher level of inconsistencies, with the Choquet variant (WOWAC) occupying the highest position. For the YWI models, the Choquet variant also displays the highest error, whereas the Sugeno variant (YWIS) exhibits no inconsistent elements, or errors largely originating from floating-point errors.
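For reproducibility, the quantifier family and the first error measure used in this setup can be sketched as follows (our illustration; the per-class aggregation over all classes and the exact normalization are assumptions consistent with the formulas above).

```python
import numpy as np

def zadeh_s(p, alpha, beta):
    """RIM quantifier Lambda_(alpha,beta): Zadeh's S-function."""
    p = np.asarray(p, dtype=float)
    mid = (alpha + beta) / 2.0
    return np.where(p <= alpha, 0.0,
           np.where(p <= mid, 2 * (p - alpha) ** 2 / (beta - alpha) ** 2,
           np.where(p <= beta, 1 - 2 * (p - beta) ** 2 / (beta - alpha) ** 2,
                    1.0)))

def inconsistency_error(R, lower_fqfrs):
    """Mean positive part of apr^FQFRS(C) - apr(apr^FQFRS(C)) for one class,
    with the Lukasiewicz residual implicator; zero iff no inconsistencies."""
    I_L = lambda a, b: np.minimum(1.0, 1.0 - a + b)
    relower = I_L(R, lower_fqfrs[:, None]).min(axis=0)
    return float(np.mean(np.maximum(lower_fqfrs - relower, 0.0)))
```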
§ CONCLUSION AND FUTURE WORK

In summary, this paper explored the granular representation of fuzzy quantifier-based fuzzy rough sets (FQFRS). We established that Choquet-based FQFRS can be granularly represented under the same conditions as OWA-based fuzzy rough sets, while Sugeno-based FQFRS can be granularly represented under the same conditions as classical fuzzy rough sets. This discovery highlights the potential of these models for resolving data inconsistencies and managing noise. Additionally, we examined models that incorporate extra weighting on the first argument, such as WOWA and YWI. Our findings indicated that these models do not yield granularly representable lower approximations. However, it is worth noting that in practical situations these approaches still demonstrate effectiveness in mitigating inconsistencies, as demonstrated in our experiments.

Looking ahead, it remains an open question whether there exist FQFRS models that introduce an extra weighting on the first argument while still achieving granularly representable lower approximations. Furthermore, there is room for exploring the possibility of achieving granular representation under more relaxed conditions, such as alternative conditions on the t-norm, or through the development of weaker versions of granular representability.
http://arxiv.org/abs/2312.16704v1
{ "authors": [ "Adnan Theerens", "Chris Cornelis" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20231227200240", "title": "On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough Sets" }
Wojciech Rejchel (Nicolaus Copernicus University, Toruń, Poland), Paweł Teisseyre (corresponding author, [email protected]; Polish Academy of Sciences and Warsaw University of Technology, Warsaw, Poland) and Jan Mielniczuk (Polish Academy of Sciences and Warsaw University of Technology, Warsaw, Poland)

Learning from positive and unlabeled data (PU learning) is an actively researched machine learning task. The goal is to train a binary classification model based on a training dataset containing a part of the positives, which are labeled, and unlabeled instances. The unlabeled set includes the remaining part of the positives and all negative observations. An important element in PU learning is modeling of the labeling mechanism, i.e., the assignment of labels to positive observations. Unlike in many prior works, we consider a realistic setting in which the probability of label assignment, i.e., the propensity score, is instance-dependent. In our approach we investigate the minimizer of an empirical counterpart of a joint risk which depends on both the posterior probability of inclusion in the positive class and the propensity score. The non-convex empirical risk is alternately optimised with respect to the parameters of both functions. In the theoretical analysis we establish risk consistency of the minimisers using recently derived methods from the theory of empirical processes. Besides, an important development here is a proposed novel implementation of an optimisation algorithm, for which the sequential approximation of the set of positive observations among the unlabeled ones is crucial. It relies on a modified technique of 'spies' as well as on a thresholding rule based on conditional probabilities. Experiments conducted on 20 data sets for various labeling scenarios show that the proposed method works on par with or more effectively than state-of-the-art methods based on propensity function estimation.

Keywords: positive-unlabeled learning; propensity score estimation; empirical risk minimization; selected at random assumption; risk consistency

§ INTRODUCTION

§.§ PU learning

Positive-unlabeled (PU) learning is a machine learning task that aims to fit a binary classification model based on data with partially assigned labels <cit.>. In this scenario, some instances from the positive class retain true labels, while the remaining instances carry no labels (they can belong either to the positive or the negative class). PU learning can thus be viewed as a variant of semi-supervised learning <cit.>, in which positive, negative and unlabeled observations are all available; the difference is that in the PU case no observations with an assigned negative label are available. PU data appear in many practical applications. Consider reporting certain ailments, such as migraine attacks, using dedicated mobile applications <cit.>. Some patients report headaches on days when they occur. However, patients who do not report symptoms may include those who did not experience migraine, as well as those who experienced it but failed to report it. Another example is the detection of illegal content on social networks. Some profiles are reported as containing illegal content (positive cases). However, profiles not reported as illegal may also contain content that violates the law, but this has not been verified.
PU data appear naturally in the classification of texts and images <cit.>, anomaly detection <cit.>, recommendation systems <cit.> and in many bioinformatics applications <cit.>, where we often have a small set of positive observations (e.g., confirmed drug-drug interactions) and a large number of observations not assigned to any of the classes.

A key element of PU data analysis is modeling of the labeling mechanism that describes which of the positive observations are assigned a label. This is usually done by imposing some conditions on the probability of such an assignment, called the propensity score function <cit.>. The simplest approach used for PU data is to assume that this probability is constant (the Selected Completely At Random assumption, in short SCAR) <cit.>, which significantly simplifies learning <cit.>. However, this assumption is often not met in practice. For example, the probability that a patient who has had a migraine attack will report it may depend on the level of pain, but also on other, less obvious factors, such as age or her/his handling of the migraine reporting application. Similarly, patients who experience certain illness-related symptoms are more likely to be diagnosed and test positive for the disease, whereas asymptomatic patients may remain largely undiagnosed. Much of the recent research, described below, has focused on a more realistic case when labeling is instance-dependent, in particular commonly adopting a less restrictive assumption, called Selected At Random (SAR), according to which the probability of labeling a positive observation depends solely on the observed feature vector <cit.>. We also note that the PU framework can be viewed as a special case of data with noisy labels, see <cit.>.

§.§ Related works

Present studies of PU learning refocus from the scenario in which labels are randomly assigned in the positive class independently of the data (the SCAR assumption) to the scenario in which they may depend on the attributes of these elements (the SAR assumption). For an overview of PU learning under SCAR, we refer to <cit.>. Recently, some methods have been proposed for PU learning under the SAR setting, mainly based on modifications of loglikelihood methods tailored to the studied partial observability scenario. In particular, the LBE method <cit.> uses the Expectation-Maximisation (EM) algorithm and assumption (<ref>) below. The Expectation step calculates conditional probabilities of the class given labels and predictors, based on current estimates of the parameters, whereas the Maximisation step is based on the expectation of the conditional likelihood of class indicators given the data. The SAR-EM <cit.> and TM <cit.> methods are based on alternate optimisation of empirical counterparts of separate Fisher consistent criteria for the posterior and the propensity score. The main difference between those two papers is the way in which the criterion for the propensity score is constructed. Moreover, <cit.> introduced the concept of joint learning of the posterior probability and the propensity score, which extends the method proposed in <cit.> for the SCAR framework. The recent method of <cit.> assumes that the propensity score is a linear function of the posterior probability for the true class variable, which is a special case of the Probabilistic Gap (PG) Assumption <cit.>.
Moreover, the method, called here PGLIN, uses the Positive Function data assumption, stating that on some part of the support of the distribution of predictors the posterior probability is equal to 1. Finally, in <cit.> and <cit.> deep learning approaches to the Empirical Risk Minimisation method under the PU scenario are investigated. In contrast to the previously discussed papers, parametric modelling of the propensity score is avoided, but at the expense of assuming that the probability of the positive class is given. From the theoretical perspective, we also refer to <cit.>, where a bound on the expected excess risk is derived under the assumption that the propensity function is known.

Finally, it is worth mentioning that there are two basic assumptions for PU data generation: the single-sample (SS) scenario and the case-control (CC) scenario <cit.>. The SS scenario assumes that the training data set is an iid sample from a general population being a mixture of positive and negative cases, and the labeled observations are drawn from among the positive ones with a probability described by the propensity score function. In the CC scenario, the unlabeled data is drawn from a general population, and the labeled data is an iid sample from the positive class. In view of this, considering the SAR assumption and the propensity score estimation problem is natural in the case of the SS scenario. On the other hand, most of the work for the CC scenario assumes SCAR <cit.>. Considering the above discussion, the current work focuses on the SS scenario and the SAR assumption, and in the experiments we focus on the methods that directly estimate non-constant propensity scores.

§.§ Contribution and proposed method

The method proposed here is based on the observation that the posterior probability for the class label indicator (which indicates whether the observation is assigned a label or not) can be represented as the product of the posterior probability for the true class variable and the propensity score. Taking advantage of this fact and assuming that both functions can be modeled parametrically, we propose the optimization of the risk function for the class label indicator with respect to the parameters related to the propensity score function and the posterior probability for the true class variable. Due to the fact that both the posterior and the propensity are modelled as members of the same parametric family, the described approach suffers from a lack of identifiability, i.e., the parameters of the propensity score function and of the posterior probability for the true class variable cannot be specified uniquely. However, if certain additional parametric assumptions are imposed on the posterior probability and the propensity score, then both functions are identifiable up to their interchange.

We consider a local minimiser of the empirical risk and establish probabilistic bounds on its excess risk (Theorem 1), from which risk consistency of the minimiser defined in (<ref>) below follows (Theorem 2). In practice, we need to effectively find this minimizer, which is not an obvious task due to the identifiability issue and the non-convexity of the empirical risk. To solve the problem, we propose an asymmetric procedure of risk minimisation which differs in the way the two parameters are optimised. In each iteration, among the unlabeled observations, we select those that are most likely to be positive.
Determining this set is a pivotal problem on which the effectiveness of the whole procedure relies, and we propose a novel way to tackle it, based on conditional probabilities and the technique of spies <cit.>. Subsequently, the determined set is used to estimate the propensity score function. Then, given the estimate of the propensity function, we optimize the joint risk function to find the parameters for the true class variable (see the flowchart in Figure <ref>). In the experiments, we refer to the proposed method as JERM (Joint Empirical Risk Minimization). Experiments conducted on 20 data sets, including tabular and image data, and 4 different labeling schemes, including the SCAR and SAR assumptions, show that JERM works comparably to or better than the previously considered methods described above.

Our contributions can be summarized as follows.
* We analyze a joint empirical risk function including parameters for the posterior probability of the true class variable and the propensity score function.
* We prove an upper bound on the excess risk for the local minimizer of the joint empirical risk function, from which risk consistency of the minimiser defined in (<ref>) follows.
* We propose a new algorithm JERM (Joint Empirical Risk Minimization), based on the optimization of the joint risk function and the estimation of the propensity score using the spy technique.
* We design and perform experiments that allow a comparison of related methods for different labeling schemes and labeling frequencies.

§ POSITIVE-UNLABELED LEARNING VIA JOINT RISK OPTIMIZATION

§.§ Preliminaries

We consider a PU setting, where the triple (X,Y,S) is generated from some unknown distribution P(X,Y,S), X ∈ℝ^p is the feature vector, Y ∈{0,1} is the true class variable, which is not observed directly, and S ∈{0,1} is the class label indicator. Value S=1 indicates that the instance is labeled and thus positive, whereas S=0 means that the instance is unlabeled. In PU learning it is assumed that P(S=1|X=x,Y=0)=0, which means that negative examples cannot be labeled. We adopt the single-sample scenario <cit.>, in which it is assumed that iid random vectors (X_i,Y_i,S_i) for i=1,…,n are generated from P(X,Y,S). Since Y_i is not observable, the PU training data is 𝒟={(X_i,S_i): i=1,…,n}. Observe that in the considered framework s(x)=P(S=1|X=x) can be estimated using the PU training data 𝒟. However, our goal is to estimate the posterior for the true class variable, y(x)=P(Y=1|X=x). This task cannot be performed directly because we do not observe Y_i. We note that instance-dependent labeling can be naturally studied in the single-training-sample scenario, in contrast to the case-control scenario in which two samples are available: one iid sample from the positive class P(X|Y=1) and one from the general population P(X).

It follows from the Law of Total Probability and the assumption P(S=1|X=x,Y=0)=0 that the posterior probabilities are related as

s(x) = e(x) y(x),

where the propensity score e(x)=P(S=1|Y=1, X=x) is unknown and in general not constant. In view of (<ref>), identification of the posterior y(x) and the propensity score e(x) is clearly impossible in general. However, if certain parametric assumptions are imposed on y(x) and e(x), then both functions are identifiable up to an interchange of y(x) and e(x) <cit.>. Let σ(t)=1/(1+e^{-t}) be the logistic function, a^T denote the transposed column vector a, and |a|_1=∑_{i=1}^{p} |a_i| for a=(a_1,…,a_p)^T. The following parametric assumption will be imposed.
Posterior probability y(x) and the propensity score function e(x) are described by logistic functions:

y(x) = σ(β_*^T x),  e(x) = σ(γ_*^T x),

where β_* and γ_* are ground-truth unknown parameters and |β_*|_1 > |γ_*|_1.

If (<ref>) is true, then y(x) and e(x) are identifiable up to their interchange <cit.>. The additional condition |β_*|_1 > |γ_*|_1 ensures that the functions y(x) and e(x) are identifiable. The role played by the l_1 norm in this condition is not essential and it may be replaced by any norm. The model (<ref>) has been introduced in <cit.> and <cit.>; in the latter reference it is called the double logistic model. Obviously, Assumption <ref> corresponds to the SAR setting, because the propensity score function depends on the observed features. The assumption enables modeling a wide class of situations in which the probability of labeling depends on the feature vector through the sigmoid function. Importantly, we note that Assumption <ref> encompasses situations where e(x) is not a monotonically increasing function of y(x), and thus the Probabilistic Gap Assumption considered in <cit.> is not met. The limitation of Assumption <ref> is the specific parametric form of the propensity score function.

Below, we introduce the joint risk function, which will be a core element of our method. For a,b ∈ℝ and s ∈{0,1}, the logistic loss function is

ϕ(a,b,s) = -s log[σ(a)σ(b)] - (1-s) log[1-σ(a)σ(b)].

Let us denote the joint parameter by θ = (β, γ) and the ground-truth parameter by θ_* = (β_*, γ_*). The risk function is

Q(θ) = E ϕ(β^T X, γ^T X, S),

where the expectation is taken with respect to both X and S. Moreover, its observable empirical counterpart (the empirical risk) is

Q_n(θ) = (1/n) ∑_{i=1}^{n} ϕ(β^T X_i, γ^T X_i, S_i).

In the proposed approach, we minimize the above function with respect to β and γ in order to obtain estimators of these parameters. Function (<ref>) has already been considered in PU learning under the SCAR assumption <cit.> and the SAR assumption <cit.>. It has been shown in <cit.> that under Assumption <ref>, the vector θ_* = (β_*, γ_*) is the unique minimizer of Q(θ) over the set {θ=(β,γ): |β|_1 > |γ|_1}.

Table <ref> contains the most important notations used in the paper.

§.§ Theoretical analysis for risk minimizers

In this section we discuss theoretical properties of minimizers of (<ref>). We consider the following estimator of θ_*, being a minimizer of (<ref>):

θ̃ = argmin_{θ: |θ|_1 ≤ w} Q_n(θ),

where w>0 is the radius of a ball around zero in which we look for the optimal solution. The restriction to a ball with radius w in (<ref>) guarantees the existence of θ̃, because Q_n is continuous. Clearly, one wants to take large w in (<ref>), and such a choice will be justified by our theoretical results. To be compatible with the assumption |β_*|_1 > |γ_*|_1, we proceed as follows: first we calculate |β̃|_1 and |γ̃|_1 from (<ref>). If |β̃|_1 > |γ̃|_1, then we let θ̃ = (β̃, γ̃); otherwise θ̃ = (γ̃, β̃).

Next, we impose some technical assumptions on the distribution of the feature vector that allow us to obtain theoretical results regarding bounds for the excess risk of our estimator. We assume throughout that the components of X are linearly independent almost everywhere (a.e.), that is, E XX^T is positive definite. We suppose that the individual predictors X_{ij} are sub-Gaussian, i.e., there exists μ > 0 such that for each j=1,…,p, i=1,…,n and t ∈ℝ we have

E exp(tX_{ij}) ≤ exp(μ^2 t^2/2).

The family of random variables having sub-Gaussian distributions is an important generalisation of the family of normally distributed random variables with mean zero; in that case μ^2 equals the variance of the corresponding variable.
Sub-Gaussian variables can be described as variables whose tails can be bounded, up to a multiplicative constant, by the tail of a certain normal variable N(0,σ^2). Apart from normally distributed variables, the sub-Gaussian family includes e.g. all bounded random variables (for a characterisation of such variables see e.g. Theorem 2.6 in <cit.>).

In <cit.> consistency of the estimator θ̃ is established using classical techniques. In the current paper we apply much more sophisticated methods from the theory of empirical processes, which allow us to control the excess risk, or regret, of θ̃, defined as

R(θ̃) = Q(θ̃) - Q(θ_*).

Note that R(θ̃) is interpreted as the amount by which the theoretical risk Q(·) calculated at θ̃ for a fixed data set deviates from its minimal value Q(θ_*) (see e.g. <cit.>). We note that the minimiser θ̃ is not necessarily unique and the obtained results hold for any θ̃ in (<ref>).

In Theorem <ref> we investigate properties of a minimiser of (<ref>) over a neighborhood of the true parameter:

θ̂ = argmin_{θ: |θ - θ_*|_1 ≤ r} Q_n(θ),

where r>0 is an arbitrary number. Obviously, θ̂ cannot be found in practice, because its calculation requires knowledge of θ_*. However, risk consistency of the minimiser in (<ref>) easily follows from the bound in Theorem <ref>; it will be established in Theorem <ref>.

Suppose Assumptions <ref> and <ref> hold. For each s ∈ (0,1) and any θ̂ defined in (<ref>) we have

P( R(θ̂) ≤ (32μ r/s)√(log p/n) ) ≥ 1-s.

Suppose Assumptions <ref> and <ref> hold. For any θ̃ in (<ref>) we have R(θ̃) → 0 in probability, if log p=o(n) and w=C(n/log p)^{1/2-η}, where C>0 is a constant and 0<η<1/2.

Fix ε>0. In Theorem <ref> we take r=C(n/log p)^{1/2-η'} for η' ∈ (0,η) and s=(32Cμ/ε)(log p/n)^{η'}. Notice that s → 0 as n →∞, because log p=o(n). Then for any θ̂ in (<ref>)

P(R(θ̂)>ε) ≤ s,

so this probability tends to zero as n →∞. Let K(τ,r_0):={θ: |θ-τ|_1≤ r_0} denote the ball with radius r_0, centered at τ. For sufficiently large n we have K(0,w) ⊂ K(θ_*, (n/log p)^{1/2-η'}), because log p=o(n) and w=C(n/log p)^{1/2-η} for η' < η. Due to that, the excess risk of any minimiser in (<ref>) tends in probability to zero as well.

In Theorem <ref> we establish that the excess risk of θ̃ in (<ref>) tends to zero even if the radius w tends to infinity as n →∞. The rate of w depends on the relation between log p and n, but also on the value η∈ (0,1/2). As can be observed in the proof of Theorem <ref>, the value η balances the length of the radius w against the rate at which the probability in (<ref>) tends to zero. Smaller η allows for larger w to be taken in (<ref>), but this in turn slows down the rate of convergence of the probability in (<ref>).

In the proof of Theorem <ref> we will need the following lemma, which is an extended version of the recent result from <cit.>. The latter is a multivariate version of the Contraction Principle <cit.>.

Let z_1,…,z_n be fixed elements from some set 𝒵. Moreover, let ℱ be a family of K-dimensional functions on 𝒵. Consider Lipschitz functions h_i:ℝ^K →ℝ, i=1,…,n, with a Lipschitz constant L>0. If there exists f̅∈ℱ such that h_i(f̅(z_i))=0 for i=1,…,n, then

E sup_{f ∈ℱ} |∑_{i=1}^{n} ε_i h_i(f(z_i))| ≤ 2√2 L E sup_{f ∈ℱ} ∑_{i=1}^{n} ∑_{k=1}^{K} ε_{i,k} f_k(z_i),

where f=(f_1,f_2,…,f_K) for each f ∈ℱ and {ε_i}_i, {ε_{i,k}}_{i,k} are independent Rademacher sequences. The proof of Lemma <ref> can be found in <ref>.

We take any θ̂ satisfying (<ref>) and s ∈ (0,1). We also define the maximal deviation of the empirical process Q_n(θ) - Q(θ):

U_n(r) = sup_{|θ - θ_*|_1 ≤ r} |J_n(θ)|,

where J_n(θ) = Q_n(θ) - Q_n(θ_*) - Q(θ) + Q(θ_*).
From the definition of θ̂ we have Q_n(θ̂) ≤ Q_n(θ_*) and thus

0 ≤ Q(θ̂) - Q(θ_*) = -J_n(θ̂) + Q_n(θ̂) - Q_n(θ_*) ≤ U_n(r).

Therefore, we focus our attention on bounding (<ref>). We start with Markov's inequality, which gives P(U_n(r) > z) ≤ E U_n(r)/z for any z>0. The choice of z will be given later. Consequently, the main part of the proof is to bound E U_n(r), which is done using tools from the empirical process theory. Some of them are well known, but we also need to apply some recent methods from <cit.>. The first step is the Symmetrization Lemma <cit.>, which implies that

E U_n(r) ≤ 2 E sup_{|θ - θ_*|_1 ≤ r} |(1/n) ∑_{i=1}^{n} ε_i [ϕ(β^T X_i, γ^T X_i, S_i) - ϕ(β_*^T X_i, γ_*^T X_i, S_i)]|,

where ε_1,…,ε_n is a Rademacher sequence containing independent sign variables, i.e., P(ε_i=1)=P(ε_i=-1)=0.5. This sequence is also independent of the vectors (X_i,Y_i,S_i). In order to handle the bound in (<ref>), one usually applies the Contraction Principle <cit.>. However, here we need the multivariate version of this result established in Lemma <ref>. In order to apply it, we first fix (X_i,S_i) and consider randomness only with respect to ε_i. Next, we apply a two-dimensional version of Lemma <ref> with f_{β,γ}(x) = ((β-β_*)^T x, (γ-γ_*)^T x) and functions h_i, i=1,…,n, defined as

h_i(a,b) = ϕ(β_*^T x_i + a, γ_*^T x_i + b, s_i) - ϕ(β_*^T x_i, γ_*^T x_i, s_i),

where x_i and s_i are fixed values of X_i and S_i, respectively. It is easily checked that each h_i is Lipschitz with Lipschitz constant √2; moreover, h_i(f_{β_*,γ_*}(x_i)) = 0. Therefore, in view of Lemma <ref>, we can bound (<ref>) by

(8/n) E sup_{|θ - θ_*|_1 ≤ r} ∑_{i=1}^{n} [ε_{i,1} (β-β_*)^T X_i + ε_{i,2} (γ-γ_*)^T X_i],

where ε_{i,1}, ε_{i,2} are the two independent Rademacher sequences defined in Lemma <ref>. Notice that the expected value in (<ref>) is again considered with respect to ε_i and X_i. Obviously, we have

∑_{i=1}^{n} ε_{i,1} (β-β_*)^T X_i ≤ |β-β_*|_1 |∑_{i=1}^{n} ε_{i,1} X_i|_∞ ≤ r |∑_{i=1}^{n} ε_{i,1} X_i|_∞,

so we can bound (<ref>) by (16r/n) E|∑_{i=1}^{n} ε_{i,1} X_i|_∞. Next, we use the sub-Gaussianity assumption, i.e., the fact that the X_{ij} are sub-Gaussian with parameter μ, which implies that ∑_{i=1}^{n} ε_{i,1} X_{ij} is sub-Gaussian with parameter √n μ. Therefore, using Lemma 2.2 in <cit.> we obtain

E|∑_{i=1}^{n} ε_{i,1} X_i|_∞ ≤ 2μ√(n log p),

which implies that E U_n(r) ≤ 32μ r √(log p/n). Finally, we take z=(32μ r/s)√(log p/n), which finishes the proof.

§.§ Proposed algorithm: JERM

The estimator defined in (<ref>) has desirable theoretical properties, but its use in practice is problematic. This is due to the fact that, in order to ensure uniqueness of the minimiser of the risk, we have to impose the additional assumption |β_*|_1 > |γ_*|_1, which is impossible to verify in practice, and therefore we are unable to distinguish between the estimators of y(x) and e(x). To deal with this challenge, we propose a new, asymmetric procedure, called JERM (Joint Empirical Risk Minimization), for optimizing the risk function described by (<ref>), in which the optimisation steps for the posterior and the propensity are different and the ensuing estimators are uniquely determined. The method alternately determines estimators of y(x) and e(x). We propose to estimate e(x) using a model fitted on the set P̂, which at each iteration is a new approximation of the set of positive examples P={i: Y_i=1}. Then, given an estimator of e(x), we optimize function (<ref>) with respect to β, which allows us to determine the estimator of y(x).
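To fix ideas, here is a minimal sketch (ours, with illustrative names) of the joint empirical risk Q_n and of the β-step performed for a fixed propensity parameter γ̂; the paper uses an MM algorithm for this step, so the generic scipy optimizer below is only a stand-in.

```python
import numpy as np
from scipy.optimize import minimize

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def joint_risk(beta, gamma, X, S, eps=1e-12):
    """Empirical risk Q_n(theta): logistic loss for s(x) = y(x) e(x)."""
    s_hat = np.clip(sigmoid(X @ beta) * sigmoid(X @ gamma), eps, 1 - eps)
    return -np.mean(S * np.log(s_hat) + (1 - S) * np.log(1 - s_hat))

def beta_step(gamma_hat, X, S):
    """Minimize Q_n(beta, gamma_hat) over beta, the propensity part fixed."""
    p = X.shape[1]
    res = minimize(lambda b: joint_risk(b, gamma_hat, X, S), x0=np.zeros(p))
    return res.x
```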
The whole procedure is described by Algorithm <ref>; below we describe the details of the estimation of e(x). In order to estimate e(x), we consider the theoretical risk function

H(γ) := -E_{X,S|Y=1}[S log[σ(γ^T X)] + (1-S) log[1-σ(γ^T X)]].

The advantage of optimizing the function H(·) compared to the optimization of (<ref>) is that H(·) is convex with respect to its argument, which makes it possible to find a unique solution. The corresponding empirical risk function is

H_P(γ) = -(1/|P|) ∑_{i∈ P} [S_i log[σ(γ^T X_i)] + (1-S_i) log[1-σ(γ^T X_i)]].

As the set P is unknown in the PU setting, it has to be estimated. The proposed method for estimating the set P uses the technique of so-called 'spies'. By a spy we mean an observation belonging to the unlabeled set U={i: S_i=0} which is the nearest neighbor of a certain observation from the labeled set L={i: S_i=1}. Note that here we deviate from the usual definition of spies, which are defined as labeled examples added to the unlabeled dataset <cit.>. Formally, the set of spies is defined as SP={i∈ U: X_i = 1-NN(X_j) for some j∈ L}. Intuitively, the spy set contains those unlabeled observations that are close to positive observations in the feature space, so they are also likely to be positive. Note, however, that apart from the set of spies, which will be assigned to P̂, there may be other observations among the unlabeled ones likely to be positive. Therefore, in addition, we consider observations for which the conditional probability h(x)=P(Y=1|X=x,S=0) is larger than the value of h(x) for at least one spy. The rationale here is that we consider as plausible positives those elements which are as likely to be positive, in a specific sense, as at least one spy. We denote this set by A={i∈ U∖ SP: h(X_i) > min_{j∈ SP} h(X_j)}. The probability h(x) is unknown; however, it can be expressed using the functions e(x) and y(x). Namely, denoting by f(x) the density of X, we have

h(x) = P(Y=1|X=x,S=0) = f(x)y(x)(1-e(x)) / [f(x)(1-s(x))] = y(x)(1-e(x)) / (1-y(x)e(x)).

The function h(x) can be estimated by plugging in the estimator of y(x) and the estimator of e(x) obtained in the previous iteration. In this way we obtain the estimator of the set A (line 7 in Algorithm <ref>):

Â = {i∈ U∖ SP: ĥ(X_i) > min_{j∈ SP} ĥ(X_j)}.

Finally, the set of positive examples is estimated as P̂ = L∪ SP∪Â. The whole procedure is shown as a flowchart in Figure <ref>, whereas Figure <ref> visualises the process of determination of P̂ based on spies; a code sketch is also given below.

The above algorithm contains some technical details that call for comments. First, as Q_n(β,γ) is not convex in either β or γ, the Majorisation-Minimisation (MM) algorithm (see e.g. <cit.>, Section 5.8), commonly employed in such scenarios, is used to find the minimiser in line 5 of Algorithm <ref>. An analogous idea was used in previous papers on PU learning <cit.>, where it was shown that the use of the MM algorithm is superior to standard gradient-based algorithms. Secondly, we need to initialize the estimator of e(x) (line 3 in Algorithm <ref>). We use the simple estimator ê(x)=0.5(1+ŝ(x)), where ŝ(x) is estimated by fitting the naive model in which the variable S is treated as the class variable. The form of this estimator results from the simple inequality s(x) = e(x)y(x) ≤ e(x) ≤ 1 and taking the average of s(x) and 1.
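The spy-based construction of P̂ can be sketched as follows (our illustration; sklearn's nearest-neighbour search stands in for the 1-NN step, and y_hat, e_hat are the current per-observation estimates of y(x) and e(x)).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_positive_set(X, S, y_hat, e_hat, eps=1e-12):
    L = np.where(S == 1)[0]                     # labeled (positive) examples
    U = np.where(S == 0)[0]                     # unlabeled examples
    # Spies: unlabeled nearest neighbours of labeled observations.
    nn = NearestNeighbors(n_neighbors=1).fit(X[U])
    _, idx = nn.kneighbors(X[L])
    SP = np.unique(U[idx.ravel()])
    # h(x) = P(Y=1 | X=x, S=0) = y(x)(1 - e(x)) / (1 - y(x) e(x)).
    h = y_hat * (1.0 - e_hat) / np.maximum(1.0 - y_hat * e_hat, eps)
    rest = np.setdiff1d(U, SP)
    A_hat = rest[h[rest] > h[SP].min()]         # thresholding rule
    return np.union1d(L, np.union1d(SP, A_hat)) # P_hat = L u SP u A_hat
```

The propensity step then fits a logistic model of S on X restricted to the returned index set, i.e., it minimizes H_{P̂}(γ).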
§ EXPERIMENTS

§.§ Methods and datasets

We use the most related methods discussed in the previous sections: PGlin <cit.>, LBE <cit.>, SAR-EM <cit.> and TM <cit.>. These methods were chosen because we focus on methods operating under the SAR assumption and those that directly estimate e(x). To make the comparison fair, in all methods the base classifier is logistic regression (LR). The LR model was also used as the base model in <cit.>, <cit.> and <cit.>. The reference method is the NAIVE method, which uses the variable S as the unknown class variable, i.e., it treats unlabeled observations as negative. We also consider the ORACLE method, which assumes knowledge of the true variable Y. This method cannot be used in practice for PU data, but it is useful in experiments because its result can be interpreted as an upper bound on accuracy.

The experiments were conducted on 20 datasets, including 16 tabular datasets from the UCI repository and 4 image datasets: CIFAR10, MNIST, Fashion and USPS. The characteristics of the datasets are provided in Table <ref> in Appendix A. Tabular datasets with multiple classes were transformed into binary classification datasets, such that the positive class includes the most common class and the remaining classes are combined into the negative class. In the case of image datasets, we define the binary class variable depending on the particular dataset. In CIFAR10, all vehicles form the positive class and animals form the negative class. For MNIST, even and odd digits form the positive and negative classes, respectively. In USPS, digits less than five constitute the positive class, and the remaining digits constitute the negative class. In FashionMNIST, clothing items worn on the upper body are marked as positive cases; the remaining images are in the negative class. For image data, we use the pre-trained deep neural network Resnet18 to extract the feature vector. For each image, the feature vector of dimension 512 is the outcome of the average pooling layer. The extracted feature vector is then used by the PU models.

§.§ Labelling strategies and evaluation methods

Given a dataset related to a binary classification problem, we artificially generate PU data using various labeling strategies. We assign all negative observations to the unlabeled set. From the positive observations we randomly select those that will be labeled, with probability e(x)=P(S=1|X=x,Y=1). We consider the following strategies.
S1. Propensity score e(x) = c.
S2. Propensity score e(x) = F_{Logistic}(x^T β^* + a).
S3. Propensity score e(x) = F_{Cauchy}(x^T β^* + a).
S4. Propensity score e(x) = [F_{Logistic}(x^T β^* + a)]^{10}.
In the above formulas, F_{Logistic} and F_{Cauchy} denote the cumulative distribution functions of the logistic and Cauchy distributions, respectively. The parameter a is determined so as to control the value of the label frequency c=P(S=1|Y=1), and β^* is computed using the ORACLE method with logistic regression. Strategy S1 corresponds to the SCAR assumption, whereas strategies S2-S4 correspond to the SAR assumption. Moreover, note that S2 is related to assumption (<ref>). The analysis of strategies S3 and S4 allows us to examine the robustness of the method to deviations from assumption (<ref>). The S4 strategy was considered in <cit.>; note that in this case the propensity score approximates a 0-1 step function.

In the case of tabular datasets, we randomly split the data into a training set (75% of observations) and a test set (25% of observations). For image datasets, we use the predefined splits provided in the PyTorch library <cit.>. From the training data, we generate PU data using the labeling strategies described above. Then, the PU models are fitted on the training data. Finally, the models are evaluated on the test data.
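A sketch (ours) of the label-generation step for S1-S4; here beta_star stands for the ORACLE logistic fit and the intercept a is treated as a free parameter (the tuning loop that matches a target label frequency c is omitted).

```python
import numpy as np
from scipy.stats import logistic, cauchy

def propensity(X, beta_star, a, scheme, c=0.5):
    z = X @ beta_star + a
    if scheme == "S1":
        return np.full(X.shape[0], c)           # SCAR: constant e(x)
    if scheme == "S2":
        return logistic.cdf(z)
    if scheme == "S3":
        return cauchy.cdf(z)
    if scheme == "S4":
        return logistic.cdf(z) ** 10            # approximates a step function
    raise ValueError(scheme)

def assign_labels(X, Y, beta_star, a, scheme, seed=0):
    """Draw S: only positives (Y=1) can be labeled, with probability e(x)."""
    rng = np.random.default_rng(seed)
    e = propensity(X, beta_star, a, scheme)
    return ((Y == 1) & (rng.random(X.shape[0]) < e)).astype(int)
```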
We use balanced accuracy as the primary evaluation metric because it takes into account the class imbalances that occur in some datasets. The above steps are repeated for 10 random data splits, and finally we report the average values and standard deviations.

§.§ Discussion

The aim of the experiments was to address the following questions. (1) How do the considered methods work for different labeling schemes? (2) Is the proposed method JERM robust to deviations from the model assumptions? (3) What is the effectiveness of the methods for different label frequencies?

§.§.§ Analysis of performance for different labeling schemes

Table <ref> shows summary results of pairwise comparisons: wins (W), losses (L) and draws (D) of the proposed method JERM against each competitive method in terms of average balanced accuracy, whereas Tables <ref>-<ref> in the Appendix show detailed results for S1-S4 and labeling frequencies c=0.3, 0.5, 0.7. When compared with the NAIVE method and SAR-EM, the proposed method JERM is the clear winner for most datasets, regardless of the labeling scheme and label frequency. Comparison with LBE shows that, for most datasets, the classification accuracy is comparable; however, in the case of S4, the proposed method works significantly better for 3-5 datasets, while the opposite situation does not occur. Note that the labeling strategy S4 originates from <cit.>, where LBE was proposed. Comparison with PGLin reveals that for the strategies S2-S4, the JERM method has significantly higher accuracy for a larger number of datasets. The effectiveness of the TM method depends on the labeling scheme. For S1, TM performs similarly to (12-14 datasets) or better than (5-7 datasets) the JERM method. The situation changes for non-SCAR schemes (S2-S4), where for most datasets JERM works better than or comparably to TM. Importantly, for S3 and S4, TM does not perform better than JERM on any dataset. In addition, Table <ref> shows the percentage of absolute wins (averaged over c) of the JERM method against each competitive method. The proposed method JERM turns out to be the winner for most datasets in the S2-S4 scenarios. In the case of S1, JERM works better than NAIVE and SAR-EM for most datasets. The analysis of the average ranks for the individual methods (last row in Tables 2-13) indicates that JERM works on average most effectively for non-SCAR schemes and lower label frequencies (c=0.3 and c=0.5), while for c=0.7 it is the second best. For the SCAR scheme S1, the TM method emerges as the winner.

§.§.§ Robustness to deviations from model assumptions

Note that our method, as well as LBE, is based on assumption (<ref>), indicating that the propensity function depends on a linear combination of features through a sigmoidal activation function. The labeling schemes S1 and S2 satisfy this assumption, in contrast to the S3 and S4 scenarios. In particular, for S4, the propensity function approximates a 0-1 step function and deviates significantly from the assumed sigmoid function. Importantly, JERM achieves the highest average ranks for S3 and S4 and lower label frequencies (c=0.3, 0.5). This points to the robustness of our method when the propensity score does not follow the logistic model.
Additionally, for the S4 scheme, JERM performs significantly better than LBE (3-5 datasets) or comparably to it (15-17 datasets), which may indicate that the proposed method is more resistant to deviations from the assumptions than LBE.

§.§.§ Performance for different labeling frequencies

The analysis of the situation of low label frequency c is particularly interesting, because in this situation learning is usually based on a small (or even extremely small) number of labelled instances. Experiments show that for low label frequency, the proposed method is more effective than the other competitors. The JERM method turns out to be the winner (in terms of average ranks) for S2-S4 and is the second best for S1. This is confirmed by more detailed analyses carried out on selected datasets (Segment, Banknote, CIFAR10, FashionMNIST), for which we studied the dependence between balanced accuracy and the label frequency c (Figures <ref> and <ref>). We observe high accuracy of the proposed method even for small label frequency, such as c=0.1. In the case of some datasets and scenarios (e.g. the Segment dataset and the S4 scheme), the advantage is very pronounced. As expected, the performance of all methods increases with c, approaching the effectiveness of the ORACLE method. For image datasets, the shapes of the accuracy curves corresponding to the JERM method and LBE are similar, which is understandable because both methods are based on the same assumption (<ref>). Importantly, however, the accuracy curve for the proposed method usually dominates the accuracy curve for LBE.

§.§.§ Computational issues

Finally, we briefly discuss computational issues. All considered methods based on alternating model fitting for the posterior probability and the propensity score function (SAR-EM, LBE, TM) are quite computationally expensive. In the case of the proposed method, the computational cost is additionally increased by the use of the MM algorithm in optimization and by the determination of spies. Despite this, however, the computation times are comparable to those of the most closely related LBE method. For example, for the CIFAR10 data and scheme S1, the computation time for LBE is 133.01 secs, while the computation time for the proposed method is slightly lower, at 120.23 secs. In turn, for the MNIST data and scheme S1, the computation time for LBE is 164.15 secs and is slightly smaller than the computation time for JERM, which is 178.00 secs.

§ CONCLUSIONS

In this work, we proposed a novel PU learning method, JERM, based on jointly modeling the posterior probability and the propensity score using sigmoid functions which depend on a linear combination of features. The parameters of the above functions are estimated by optimizing the joint risk function for the observed label indicator. The risk function minimizer was analyzed theoretically. In particular, we established risk consistency for the empirical risk minimizer. We also proposed an iterative method for optimizing the joint empirical risk. The algorithm consists of alternately determining estimates of the posterior probability and the propensity score function. An innovative element of the work is the use of the spies technique to estimate the set of positive observations, which in turn enables the determination of the propensity score estimator. Importantly, in future research, the proposed propensity score estimator can be combined with other PU methods that require knowledge of the propensity scores.
The results of experiments conducted on 20 real datasets indicate that the proposed method works comparably to or better than state-of-the-art methods based on propensity score estimation. The advantage of our method is especially noticeable when the SCAR assumption is not met and the labeling frequency is low. Moreover, we have shown that the method is robust with respect to the form of labeling. An interesting future research direction is the investigation of the effectiveness of selecting active predictors under the considered scenario, by augmenting the joint empirical risk considered here with sparsity-inducing regularizers.

§

§.§ Results for labeling strategy S1
§.§ Results for labeling strategy S2
§.§ Results for labeling strategy S3
§.§ Results for labeling strategy S4

§

Let z_1,…,z_n be fixed vectors from 𝒵. Besides, let ℱ be a family of K-dimensional functions on 𝒵. We also consider Lipschitz functions h_i:ℝ^K→ℝ, i=1,…,n, with a Lipschitz constant L>0. If there exists f̅∈ℱ such that h_i(f̅(z_i))=0 for i=1,…,n, then

sup_f∈ℱ |∑_i=1^n ε_i h_i(f(z_i))| ≤ 2√(2) L sup_f∈ℱ ∑_i=1^n ∑_k=1^K ε_i,k f_k(z_i),

where f=(f_1,f_2,…,f_K) for each f∈ℱ and {ε_i}_i, {ε_i,k}_i,k are independent Rademacher sequences.

The key step in the proof is Corollary 1 in <cit.>, which does not require the assumption on the existence of f̅ to establish the analogue of the lemma above without absolute values on the left-hand side of (<ref>). We show how to strengthen Corollary 1 in <cit.> to (<ref>). The price for this improvement is the additional constant 2 on the right-hand side of the inequality. For all real t we have |t| = max(t,0) + max(-t,0). Therefore, we obtain

sup_f∈ℱ |∑_i=1^n ε_i h_i(f(z_i))| = sup_f∈ℱ [ max(∑_i=1^n ε_i h_i(f(z_i)), 0) + max(-∑_i=1^n ε_i h_i(f(z_i)), 0) ] ≤ sup_f∈ℱ max(∑_i=1^n ε_i h_i(f(z_i)), 0) + sup_f∈ℱ max(∑_i=1^n (-ε_i) h_i(f(z_i)), 0).

The variables ε_i and -ε_i have the same distribution, so (<ref>) equals

2 sup_f∈ℱ max(∑_i=1^n ε_i h_i(f(z_i)), 0) = 2 max( sup_f∈ℱ ∑_i=1^n ε_i h_i(f(z_i)), 0),

which is 2 sup_f∈ℱ ∑_i=1^n ε_i h_i(f(z_i)), because sup_f∈ℱ ∑_i=1^n ε_i h_i(f(z_i)) ≥ ∑_i=1^n ε_i h_i(f̅(z_i)) = 0. Finally, we apply Corollary 1 in <cit.>.
http://arxiv.org/abs/2312.16557v1
{ "authors": [ "Wojciech Rejchel", "Paweł Teisseyre", "Jan Mielniczuk" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20231227124512", "title": "Joint empirical risk minimization for instance-dependent positive-unlabeled data" }
Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, ul. Chopina 12/18, 87-100 Toruń, Poland [email protected]

Let f be an orientation-preserving homeomorphism of the 2-disc 𝔻^2 that fixes the boundary pointwise and leaves invariant a finite subset in the interior of 𝔻^2. We study the strong Nielsen equivalence of periodic points of such homeomorphisms f and we give a necessary and sufficient condition for two periodic points to be strong Nielsen equivalent in the context of braid theory. In addition, we present an application of our result to the trace formula given by Jiang–Zheng, deducing that the obtained forced periodic orbits belong to different strong Nielsen classes.

Keywords: Nielsen theory; Strong Nielsen equivalence; Periodic points; Braid groups.

Strong Nielsen equivalence on the punctured disc
Stavroula Makri
January 14, 2024
================================================

§ INTRODUCTION

A classical problem related to Nielsen theory deals with the question of determining the minimum number of fixed points among all maps homotopic to a given map f from a compact space to itself. In particular, let X be a compact connected polyhedron, f:X→X a continuous self-map, and let Fix(f)={x∈X | f(x)=x} be the set of fixed points of f. One is interested in studying the so-called minimal number of the map f, denoted by MF[f] and defined as MF[f]=min{#Fix(g) | g∼f}, where ∼ means homotopic. We define an equivalence relation on the set of fixed points of f in the following way. Two fixed points belong to the same fixed point class if there exists a path γ joining them such that f(γ)∼γ, keeping the endpoints fixed during the homotopy. The equivalence classes with respect to this relation are called the Nielsen classes of f. The Nielsen number N(f) is defined as the number of Nielsen classes having non-zero fixed-point index sum. The number N(f) is a homotopy invariant which gives a lower bound for MF[f], that is N(f)≤MF[f]. To compute N(f) one needs to decide whether two fixed points belong to the same fixed point class. This geometric problem was described in an algebraic context by Reidemeister in the following way. One can associate a Reidemeister class to each fixed point of f. Then two fixed points belong to the same fixed point class if and only if their Reidemeister classes coincide. Let ϕ:G→G be an endomorphism of a group G. Then u,v∈G are said to be Reidemeister equivalent or ϕ-conjugate if there exists w∈G such that v=ϕ(w)·u·w^-1. This equivalence relation gives rise to the Reidemeister classes. Briefly, one interprets ϕ as the endomorphism f_π induced by f on the fundamental group π_1(X) of X. In <cit.>, Guaschi uses braid group theory to give a necessary and sufficient condition, in terms of a conjugacy problem in the braid group, for distinguishing Reidemeister classes, or equivalently Nielsen classes, of f:(𝔻^2, A)→(𝔻^2, A), orientation-preserving homeomorphisms of the 2-disc 𝔻^2 relative to some n-point invariant set A⊂Int(𝔻^2), that is f(A)=A, that fix the boundary pointwise. Due to Alexander's trick, any orientation-preserving homeomorphism of 𝔻^2 that fixes the boundary pointwise is isotopic to the identity map. Let B_n denote the Artin braid group on n strings, B_n,1 the subgroup of B_n+1 whose induced permutation stabilizes (n+1), and let U_n+1 be the kernel of the homomorphism B_n,1→B_n defined geometrically by removing the (n+1)st string.
Given such a homeomorphism f, one can associate a braid β∈B_n with A by fixing an isotopy between the identity and f. Note that the strings of β appear naturally by following A under the isotopy. Thus, every fixed point x∈Int(𝔻^2)∖A defines a braid β_x in B_n,1⊂B_n+1, related to A∪{x}, whose induced permutation stabilizes (n+1). Given fixed points x,y∈Int(𝔻^2)∖A, one can associate to each point an element u,v, respectively, of the free group 𝔽_n, since the braids that correspond to x,y belong to U_n+1 and U_n+1≅𝔽_n. In Theorem 1 of <cit.>, Guaschi showed that the elements u,v belong to the same Reidemeister class if and only if the associated braids β_u, β_v are conjugate in B_n,1 via an element of U_n+1. Equivalently, x,y are in the same Nielsen class if and only if β_u, β_v are conjugate in B_n,1 via an element of U_n+1. Consequently, any braid conjugacy invariant, such as linking number properties, Thurston type, application of Garside's algorithm and link invariants, may be used to show that β_x and β_y are not conjugate in B_n+1 and thus that the two elements u,v∈𝔽_n are not Reidemeister equivalent.

This work is inspired by <cit.> and aims to generalize this result to the case of periodic points of such homeomorphisms f. In particular, we find an analogous criterion to decide when two periodic points of a homeomorphism of the 2-disc relative to a finite set are strong Nielsen equivalent in the sense of Asimov–Franks <cit.>, by further assuming contractible isotopy. More precisely, let x,y be two periodic points of the same period of a homeomorphism f. We say that x,y are strong Nielsen equivalent if there exists a contractible isotopy f_t:f≃f and a path γ:[0,1]→M such that γ(0)=x, γ(1)=y, and for every t∈(0,1), γ(t) is a periodic point of the same period as x and y. An isotopy is said to be a deformation of another isotopy if the corresponding paths in the group of homeomorphisms are homotopic with fixed endpoints. A self-isotopy is called contractible if it is a deformation of the trivial isotopy. First, we show that two periodic points are strong Nielsen equivalent if and only if they correspond to the same class inside the corresponding mapping class group. This will be the main tool in finding an analogous criterion in the case of periodic points. Let us fix some notation before stating the result. Given a compact, connected orientable surface M, we consider homeomorphisms f:M→M that are isotopic to the identity. We denote by P_n(f) the set of all periodic points of period n of f and by o(x,f) the orbit of a periodic point x of f. For n∈ℕ, let X_n⊂Int(M) be a set of n distinct points. Given a periodic orbit o(x,f) of period n, choose a homeomorphism h:(M, o(x,f))→(M, X_n) isotopic to the identity. Then, let the strong Nielsen type of the orbit, denoted by snt(x,f), be the conjugacy class of [h∘f∘h^-1] in MCG(M, X_n), where [·] represents the isotopy class and MCG(M, X_n) the mapping class group of M relative to X_n. For the following proposition we assume that M is either a compact, connected orientable surface with negative Euler characteristic or the 2-disc 𝔻^2, where in the latter case we restrict our attention to homeomorphisms that fix the boundary pointwise; see Remark <ref>.

Let f:M→M be an orientation-preserving homeomorphism. Let x,y∈P_n(f). It holds that x,y are strong Nielsen equivalent, denoted by x ∼_SN y, if and only if snt(x,f)=snt(y,f).
Note that one should expect this result, as it has been stated in <cit.> by Boyland that one can show this equivalence between strong Nielsen classes and strong Nielsen types. This result is equivalent to the one where one replaces periodic points x ∼_SN y with periodic orbits o(x,f) ∼_SN o(y,f). In a more general context, to prove this theorem and thus to obtain this equivalence, one needs to also prove that the self-isotopy is contractible, something that already holds in our case. Note that one can prove this result without asking for a contractible self-isotopy. However, the absence of a contractible isotopy does not allow us to have for free the equivalent result for periodic orbits.

We will make use of this theorem in the case of the 2-disc. More precisely, given an orientation-preserving homeomorphism f:(𝔻^2, A)→(𝔻^2, A) that fixes the boundary pointwise, we prove that the problem of distinguishing strong Nielsen classes is equivalent to studying the conjugacy problem in the braid group. In the case of the 2-disc it is well known that B_n≅MCG(𝔻^2, A). One can associate to a periodic point x∈P_n(f) a braid element β_x∈B_n and, using the well-known decomposition B_n,m≅B_m(𝔻^2∖{n points})⋊B_n, we prove the following result, which provides an algebraic characterization of strong Nielsen equivalence classes. Let C_n,m:=B_m(𝔻^2∖{n points}) and let ϕ_β be an endomorphism of C_n,m which describes the action of B_n on C_n,m.

Let f:(𝔻^2, A, ∂𝔻^2)→(𝔻^2, A, ∂𝔻^2), where A⊂Int(𝔻^2) is an f-invariant n-point set. Let x,y∈Int(𝔻^2)∖A be such that x,y∈P_m(f). The following are equivalent:
* x ∼_SN y.
* β_x=c·β_y·c^-1, for β_x, β_y∈B_n,m⊂B_n+m and c∈C_n,m.
* o_x=ϕ_β(c)·o_y·c^-1, for o_x, o_y, c∈C_n,m and β∈B_n.

Roughly speaking, the braids β_x, β_y correspond to the invariant sets A∪o(x,f) and A∪o(y,f) respectively, the braids o_x, o_y correspond to the invariant sets o(x,f) and o(y,f) respectively, while β corresponds to the invariant set A.

Application: We will use this result to show that the forced periodic orbits one obtains from the trace formula by Jiang–Zheng in <cit.> belong to different strong Nielsen classes.

The structure of the paper is as follows. In Section <ref>, we give definitions and basic results from Nielsen and braid theory. In Section <ref>, we explain how one can associate a braid element to the orbit of a periodic point, providing also the theory of braids that we will need, and we conclude by proving Proposition <ref> and Theorem <ref>. We make some important remarks in Section <ref>, comparing periodic Nielsen points with strong Nielsen periodic points. Moreover, we explain how our result recovers the result by Guaschi in the case of fixed points, which implies that Theorem <ref> is a generalization of Theorem 1 in <cit.> to the case of periodic points. The last section, Section <ref>, is devoted to an application of our result to the Lefschetz trace formula given by Jiang–Zheng in <cit.>. We show that the forced braids that they obtain, which could correspond to forced periodic orbits, all belong to different strong Nielsen classes. Thus, given a periodic point on the 2-disc, combining their result with ours, one obtains an algorithm whose output consists of forced periodic orbits which are not strong Nielsen equivalent.

§ PRELIMINARIES

In what follows, we assume that M is a compact, connected, orientable surface and that any map f:M→M is an orientation-preserving homeomorphism.
In this section we give the necessary definitions and theory on periodic points, braid groups and mapping class groups. We begin by fixing some notation and giving two equivalent definitions of strong Nielsen equivalence of periodic points. For further details on the theory we direct the reader to <cit.> and <cit.>. A periodic point x is a point for which f^n(x)=x for some n>0. The least such n is called the period of the periodic point. Let P_n(f) be the set of all periodic points with period n and let o(x,f) be the orbit of a periodic point x. Strong Nielsen equivalence is an equivalence relation on periodic points of a surface homeomorphism. There are two ways to define it: one that uses paths in the surface and another that uses the suspension flow. We recall a couple of notions that we will need for the following definition. An isotopy f_t:f_0≃f_1 is said to be a deformation of another isotopy h_t:f_0≃f_1 if the corresponding paths in Homeo(M) are homotopic with fixed endpoints. A self-isotopy f_t:f≃f is called contractible if it is a deformation of the trivial isotopy, which means that the corresponding closed loop in Homeo(M) is null-homotopic.

Let x,y∈P_n(f). We say that x,y are strong Nielsen equivalent, denoted x ∼_SN y, if there exists a contractible isotopy f_t:f≃f and a path γ:[0,1]→M such that γ(0)=x, γ(1)=y, and for all t∈(0,1), γ(t)∈P_n(f_t).

Note that strong Nielsen equivalence on periodic points is a stronger equivalence relation than periodic Nielsen equivalence. Recall that, given x,y∈P_n(f), x is periodic Nielsen equivalent to y if there exists a path γ:[0,1]→M with γ(0)=x, γ(1)=y and f^n(γ)∼γ with fixed endpoints. This definition gives an equivalence relation on periodic points, while the following one, which uses the suspension flow, defines an equivalence relation on periodic orbits. We recall that the mapping torus M_f of a homeomorphism f:M→M is the quotient space M×[0,1]/∼, with (x,1)∼(f(x),0). To obtain the suspension flow ψ_t on M_f, one takes the unit speed flow in the [0,1] direction on M×[0,1] and then projects it to M_f. Let p:M×[0,1]→M_f be the projection and x∈M; then γ_x denotes the orbit of p(x,0) under ψ_t. When x is a periodic point, γ_x can be viewed as a simple closed curve in M_f.

Let x,y∈P_n(f). The periodic orbits o(x,f) and o(y,f) are strong Nielsen equivalent, denoted o(x,f) ∼_SN o(y,f), if γ_x is freely isotopic to γ_y in M_f.

In terms of the suspension flow, two periodic orbits o(x,f) and o(y,f) are periodic Nielsen equivalent if γ_x is freely homotopic to γ_y in M_f. These two definitions of strong Nielsen equivalence are connected due to the following result, part of which is in <cit.>. Two periodic orbits are strong Nielsen equivalent if and only if there are points from each of the orbits that are strong Nielsen equivalent. Note that the classical definition of the Nielsen class is given only for n=1, that is for fixed points, and it requires only that f(γ(t))∼γ(t) relative to the endpoints. There is no difference between the notions of strong and periodic Nielsen equivalence for fixed points; see (<cit.>, Theorem 2.13). The requirement of a contractible self-isotopy in the definition of strong Nielsen equivalence using paths is necessary to ensure that the definition is equivalent to the one using the suspension flow. Additionally, one would like strong Nielsen equivalence to imply periodic Nielsen equivalence for periodic points, which holds when we have a contractible self-isotopy.
For further details we refer the reader to (<cit.>, 2.4. Remarks). The reason we assume that M has negative Euler characteristic, or that when M=𝔻^2 we restrict our attention to the class of homeomorphisms that preserve the boundary pointwise, is that in both of these cases all self-isotopies of M are contractible; thus we directly have that the two definitions of strong Nielsen equivalence are equivalent. However, one can define strong Nielsen equivalence in a more general context.

We now review the relation between braids and mapping classes, as well as the relation between periodic orbits and braids. Let us focus on the case when M is the 2-disc 𝔻^2, since this is the main interest of this paper. Let f_t:f_0≃f_1 denote an isotopy between two homeomorphisms f_0, f_1 of the 2-disc. The collection of isotopy classes of orientation-preserving homeomorphisms of 𝔻^2 that fix the boundary ∂𝔻^2 pointwise, with the operation of composition, is called the mapping class group of 𝔻^2 and is denoted by MCG(𝔻^2, ∂𝔻^2). Recall that, due to Alexander's trick, any orientation-preserving homeomorphism of 𝔻^2 that fixes the boundary pointwise is isotopic to the identity map. Thus, MCG(𝔻^2, ∂𝔻^2) is trivial. Let A⊂Int(𝔻^2) be an n-point set. If in addition we consider isotopy classes relative to the set A, which means that both all homeomorphisms and all isotopies must leave A invariant, then this mapping class group is denoted by MCG(𝔻^2, A, ∂𝔻^2). It is well known that the group MCG(𝔻^2, A, ∂𝔻^2) can be identified with Artin's braid group B_n; for more details see <cit.>. One can see that braids and mapping classes are naturally related in the following way. Given an n-point set A⊂Int(𝔻^2) and a braid β∈B_n, sliding 𝔻^2 down along β gives an orientation-preserving homeomorphism f:𝔻^2→𝔻^2 satisfying f(A)=A. Note that the points in A play the role of the endpoints of the n-braid β∈B_n. On the other hand, given f:𝔻^2→𝔻^2 with f(A)=A, one can associate a braid β∈B_n with A by fixing an isotopy f_t:id≃f. The strings of the braid β appear naturally by following A under the isotopy f_t. One can now see how periodic orbits are associated with braids if we consider A to be the orbit of a periodic point x∈Int(𝔻^2) with period n, which is indeed an n-point invariant set. We will investigate the relation between braids and periodic orbits further in Section <ref>.

For completeness, let us conclude this section by reviewing the Nielsen–Thurston classification (<cit.>, <cit.>) and showing its connection with braids. For every homeomorphism f:(M,A)→(M, A) of a compact surface M, with M∖A having negative Euler characteristic, there exists a homeomorphism ϕ, isotopic to f relative to A, the Thurston representative of the isotopy class, that satisfies one of the following:
* ϕ is finite order, i.e. there exists k∈ℕ such that ϕ^k=id.
* ϕ is pseudo-Anosov, i.e. it preserves a transverse pair of measured singular foliations, expanding the measure uniformly along the leaves of one foliation and contracting it uniformly, by the same factor, along the leaves of the other.
* ϕ is reducible, i.e. there exists a collection of pairwise disjoint simple closed curves γ={γ_1,…,γ_k} in Int(M)∖A, called reducing curves for ϕ, such that ϕ(γ)=γ.
Moreover, γ has a ϕ-invariant open tubular neighborhood U, which does not intersect the set A, such that each component of M∖(U∪A) has negative Euler characteristic and the restriction of an appropriate iterate of ϕ to each component of M∖U is either finite order or pseudo-Anosov relative to (M∖U)∩A.

Combining the fact that MCG(𝔻^2, A, ∂𝔻^2)≅B_n with the Nielsen–Thurston classification, it follows that braids fall naturally into three types: periodic, pseudo-Anosov and reducible.

§ MAIN RESULTS

In this section we prove Proposition <ref> and Theorem <ref>. Let f:M→M be an orientation-preserving homeomorphism, isotopic to the identity, of a compact connected orientable surface M, and let X_n⊂Int(M) be an n-point set. Given a periodic orbit o(x,f) of a point x of period n, choose a homeomorphism h_x:(M, o(x,f))→(M, X_n), such that h_x(o(x,f))=X_n, isotopic to the identity. Then, let the strong Nielsen type of the orbit o(x,f), denoted by snt(x,f), be the conjugacy class of [h_x∘f∘h_x^-1] in MCG(M, X_n). By [h_x∘f∘h_x^-1] we denote the isotopy class of h_x∘f∘h_x^-1. Note that we consider the conjugacy class of [h_x∘f∘h_x^-1] in MCG(M, X_n) in order to have independence of the choice of the homeomorphism h_x. Actually, the strong Nielsen type of an orbit is the isotopy class of f relative to the orbit; however, we use the homeomorphism h_x in order to send the orbits to a common model. For homeomorphisms of the disc, due to the identification of MCG(𝔻^2, X_n, ∂𝔻^2) with the braid group on n strings B_n, the strong Nielsen type of an orbit is commonly called its braid type. Thus, in this case we use the notation snt(x,f)=bt(x,f).

The first implication follows from Lemma 8 of <cit.>, assuming f=g. To prove the inverse implication, let x,y∈P_n(f). Assume that snt(x,f)=snt(y,f), that is, [h_x∘f∘h_x^-1]=[h_y∘f∘h_y^-1]∈MCG(M,X_n). Thus, up to conjugation in MCG(M, X_n), there exists an isotopy from h_x∘f∘h_x^-1 to h_y∘f∘h_y^-1. This implies that there exists a self-isotopy f_t:f≃f and, moreover, for t∈[0,1] one can find a sequence of n-point sets K_t⊂Int(M) and homeomorphisms h_K_t:(M, K_t)→(M, X_n), isotopic to the identity, such that ϕ_t:=h_K_t∘f_t∘h_K_t^-1:(M, X_n)→(M, X_n) gives an isotopy from h_x∘f∘h_x^-1 to h_y∘f∘h_y^-1. Note that K_0=o(x,f), K_1=o(y,f) and that h_K_0=h_x, h_K_1=h_y. Since we work up to conjugation in MCG(M, X_n), we can suppose without loss of generality that both h_x(x)=α∈X_n and h_y(y)=α∈X_n. It holds that f^n(h_x^-1(α))=f^n(x)=x and f^n(h_y^-1(α))=f^n(y)=y, that is, f_0^n(h_K_0^-1(α))=h_K_0^-1(α) and f_1^n(h_K_1^-1(α))=h_K_1^-1(α). Note that n is the smallest integer such that ϕ_0^n(X_n)=X_n and ϕ_1^n(X_n)=X_n pointwise. As a result, for all t∈[0,1], f_t^n(h_K_t^-1(α))=h_K_t^-1(α). This is the case because all the homeomorphisms ϕ_t=h_K_t∘f_t∘h_K_t^-1 belong to the same isotopy class of MCG(M, X_n); that is, each ϕ_t behaves on the set X_n in the same manner, where X_n can be seen as a set of marked points. We are now ready to define the path γ on M connecting x and y. Define γ:[0,1]→M as γ(t)=h_K_t^-1(α)∈M. Thus, γ(0)=h_K_0^-1(α)=h_x^-1(α)=x and γ(1)=h_K_1^-1(α)=h_y^-1(α)=y. Moreover, for all t∈[0,1], f_t^n(γ(t))=f_t^n(h_K_t^-1(α))=h_K_t^-1(α)=γ(t). Therefore, for all t∈[0,1], γ(t)∈P_n(f_t), which concludes the proof.

Let A⊂Int(𝔻^2), and let f:(𝔻^2, A, ∂𝔻^2) be an orientation-preserving homeomorphism that preserves the boundary pointwise and satisfies f(A)=A.
Fix an isotopy f_t:id≃f fixing ∂𝔻^2 pointwise during the isotopy, and let β∈B_n represent the braid that corresponds to A following the isotopy f_t. Let x,y∈Int(𝔻^2)∖A be such that x,y∈P_m(f), and let o(x,f), o(y,f) be the orbits of x and y respectively. We will extend the braid β∈B_n to β_x∈B_n+m and β_y∈B_n+m by considering additionally the motion, under the isotopy f_t, of the sets o(x,f) and o(y,f) respectively, in the following way. Let B_n,m be the subgroup of B_n+m whose elements' strands are divided into two blocks, one of n strands and the other of m strands, possibly linked. We consider the short exact sequence

1 → B_m(𝔻^2∖{n points}) → B_n,m ⟶^p B_n → 1,

where one can consider p as forgetting the last m strands. It is well known that this short exact sequence splits and thus we have B_n,m≅B_m(𝔻^2∖{n points})⋊B_n. One can think of the section ι:B_n→B_n,m of p as adding m vertical strands at the end of a braid β∈B_n. As a result, an element b∈B_n,m can be uniquely written as ι(β)·γ, for γ∈B_m(𝔻^2∖{n points}) and ι(β) the image of β∈B_n under ι. Moreover, abusing notation, we denote by γ the element γ of B_m(𝔻^2∖{n points}) seen as an element of B_n,m. Thus, β_x∈B_n,m can be considered as an x-extension of β and is nothing more than the element β_x=β·o_x, where β∈B_n corresponds to the invariant set A, under f_t, and the braid o_x∈B_m(𝔻^2∖{n points}) corresponds to the orbit o(x,f)⊂Int(𝔻^2)∖A, under f_t. Similarly, β_y=β·o_y, where the braid o_y∈B_m(𝔻^2∖{n points}) corresponds to the orbit o(y,f)⊂Int(𝔻^2)∖A, under f_t. Note that for simplicity we will write β instead of ι(β). Before proving Theorem <ref> let us introduce some further notation. Let C_n,m:=B_m(𝔻^2∖{n points}). Recall that B_n,m≅C_n,m⋊B_n. For β∈B_n define ϕ_β:C_n,m→C_n,m as follows:

ϕ_β(γ)=β^-1·γ·β, for any γ∈C_n,m,

that is, the action of B_n on C_n,m. Note that ϕ_β(γ)∈C_n,m, since C_n,m is a normal subgroup of B_n,m.

Let f:(𝔻^2, A, ∂𝔻^2)→(𝔻^2, A, ∂𝔻^2) be an orientation-preserving homeomorphism, x,y∈Int(𝔻^2)∖A such that x,y∈P_m(f), and X_m⊂Int(𝔻^2) an m-point subset such that A∩X_m=∅. Suppose that x ∼_SN y. By Proposition <ref> this is equivalent to the equality [h_x∘f∘h_x^-1]=[h_y∘f∘h_y^-1]∈MCG(𝔻^2, A∪X_m, ∂𝔻^2). Recall that h_x:(𝔻^2, A∪o(x,f), ∂𝔻^2)→(𝔻^2, A∪X_m, ∂𝔻^2) and h_y:(𝔻^2, A∪o(y,f), ∂𝔻^2)→(𝔻^2, A∪X_m, ∂𝔻^2) are homeomorphisms isotopic to the identity, where h_x(A)=A, h_y(A)=A and h_x(o(x,f))=X_m, h_y(o(y,f))=X_m. Equivalently, under the isotopy f_t, the invariant sets A∪o(x,f) and A∪o(y,f) encode, up to conjugation, the same braid in B_n,m≅C_n,m⋊B_n. The conjugation has to be by an element of C_n,m⊂B_n,m, since the braid corresponding to B_n is fixed for both braids, as it encodes the motion of the f-invariant set A. Let β_x and β_y in B_n,m⊂B_n+m represent the elements [h_x∘f∘h_x^-1] and [h_y∘f∘h_y^-1] of MCG(𝔻^2, A∪X_m, ∂𝔻^2) respectively. Thus, β_x=c·β_y·c^-1, for β_x, β_y∈B_n,m and some c∈C_n,m. This proves the first equivalence of the theorem. Suppose that β_x=c·β_y·c^-1, for β_x, β_y∈B_n,m and c∈C_n,m. We already saw that β_x=β·o_x, where β∈B_n corresponds to the invariant set A under f_t, and o_x∈C_n,m corresponds to the orbit o(x,f)⊂Int(𝔻^2)∖A under f_t, and similarly β_y=β·o_y. Thus, β·o_x=c·(β·o_y)·c^-1=β·ϕ_β(c)·o_y·c^-1. The second equality follows from the definition of the endomorphism ϕ_β. Thus, o_x=ϕ_β(c)·o_y·c^-1, for some c∈C_n,m. On the other hand, suppose that o_x=ϕ_β(c)·o_y·c^-1, for some c∈C_n,m.
It follows that β·o_x=β·ϕ_β(c)·o_y·c^-1 and, from the definition of the endomorphism ϕ_β, we obtain that β·o_x=c·β·o_y·c^-1, which implies that β_x=c·β_y·c^-1. This proves the second equivalence and concludes the proof of the theorem.

§ REMARKS

In this section we make some important remarks regarding Theorem <ref>. We recall the following result. Let ϕ be an automorphism of 𝔽_n induced by a braid β∈B_n. Then u,v∈𝔽_n are ϕ-conjugate if and only if β_u and β_v are conjugate in B_n,1 via an element of 𝔽_n.

We know that the notions of strong and periodic Nielsen equivalence for fixed points coincide; see Remark <ref>. Thus, applying Theorem <ref> for m=1, we deduce that the fixed points x,y are Nielsen equivalent if and only if o_x=ϕ_β(c)·o_y·c^-1, for some c∈C_n,1. Notice that C_n,1≅𝔽_n, where 𝔽_n is the free group of rank n. Moreover, ϕ_β turns out to be an endomorphism of 𝔽_n. It follows that this equivalence can be seen as the equivalence between Nielsen classes and Reidemeister classes. In addition, we showed that o_x=ϕ_β(c)·o_y·c^-1, for c∈𝔽_n, if and only if β_x and β_y are conjugate in B_n,1 via an element of 𝔽_n. This result agrees with Theorem <ref>, which characterizes Reidemeister classes of fixed points using braids. With this remark we justify how Theorem <ref> generalizes Theorem <ref> to the case of periodic points.

Let x,y∈P_m(f). We would like to compare periodic Nielsen equivalence of x,y with strong Nielsen equivalence of x,y in terms of braids. We notice that the definition of strong Nielsen equivalence using paths takes into account the entire orbit of γ:[0,1]→𝔻^2∖A, while in the case of periodic Nielsen equivalence we take into account just its mth iterate. In terms of braids, one can see that for strong Nielsen equivalence we consider braids in B_n,m, while for periodic Nielsen equivalence we consider braids in B_n,1. For periodic Nielsen equivalence we consider braids in B_n,1 since we can use Theorem <ref>, seeing the periodic points as fixed points of the map f^m. As a result, from the m strands of a braid in B_n,m we get information about the motion of every point in the m-periodic orbit, while from the last strand of a braid in B_n,1 we get information only about the motion of the periodic point under f^m.

§ BRAID FORCING

The main reference for what follows is <cit.>. Let f be any orientation-preserving homeomorphism of the 2-disc that fixes the boundary pointwise. The aim of this section is to show an application of Theorem <ref> to the problem of finding periodic orbits forced by a given one. In particular, given f, we show that the periodic orbits forced by a given one, obtained by the trace formula of Jiang–Zheng <cit.>, are all in different strong Nielsen classes. We first recall the notion of braid types and provide important definitions that we will need, and then we present the trace formula of Jiang–Zheng. We conclude with an algorithm presented in <cit.> for finding periodic orbits of disc homeomorphisms, which, due to our result, turn out to belong to different strong Nielsen classes. In analogy with the role of the Sharkovskii order on periods for maps of the interval <cit.>, Boyland <cit.> and Matsuoka <cit.> introduced the notion of braid types, defined in Section <ref>, for characterizing the forcing relation in dimension 2. By forcing relation we mean that a given periodic orbit implies the existence of other orbits. We recall that an n-braid type is in fact a conjugacy class in the braid group B_n.
A braid β' is an extension of β if β' is a union of β and another braid γ. The braids β and γ are disjoint but possibly linked. As described in Section <ref>, note that any braid b∈B_n,m is an extension of a braid β∈B_n by a braid γ∈B_m(𝔻^2∖{n points}). We denote by β_P∈B_n the braid that corresponds to an f-invariant subset P⊂Int(𝔻^2) under the isotopy f_t:f≃id. Moreover, by [β] we denote the conjugacy class, in the group specified by the context, of a braid β. A braid β forces a braid γ if, for any f and any isotopy f_t:f≃id, the existence of an f-invariant set P with [β_P]=[β] guarantees the existence of an f-invariant set Q with [β_Q]=[γ]. An extension β' is forced by β if, for any f and any isotopy f_t:f≃id, the existence of an f-invariant set P with [β_P]=[β] guarantees the existence of an f-invariant set Q⊂𝔻^2∖P with [β_P∪Q]=[β']. We are now ready to state the theorem by Jiang–Zheng <cit.>, in which they deduce a trace formula for the computation of the forcing order.

Let β'∈B_n,m⊂B_n+m be an extension of β∈B_n. Then β forces β' if and only if β' is neither collapsible nor peripheral relative to β, and the conjugacy class [β'] has a non-zero coefficient in tr_B_n+m ζ_n,m(β).

By ζ_n,m they denote a matrix representation of B_n over a free ℤB_n+m-module, and the trace tr_B_n+m is meant to take values in the free Abelian group generated by the conjugacy classes in B_n+m. Thus, to obtain the (n+m)-strand forced extensions of a given braid in B_n we can apply the following steps. Compute the trace tr_B_n+m, merge conjugate terms in the sum by solving the conjugacy problem in B_n+m, then rule out certain unwanted terms, which correspond to collapsible and peripheral extensions; the non-zero terms remaining after cancellation are exactly the (n+m)-strand forced extensions of the given n-braid. Roughly speaking, collapsible and peripheral extensions relative to β are those braids some of whose strands may be merged or moved to infinity while keeping β untouched. For more details we refer the reader to <cit.>. In <cit.>, using Theorem <ref>, they provide an algorithm for solving the problem of finding periodic orbits linked to a given one, for orientation-preserving 2-disc homeomorphisms. We state their result in the following theorem.

Given a set P of n points in Int(𝔻^2), a cyclic braid β∈B_n, and an integer m>0, there is an algorithm to decide whether a homeomorphism f:𝔻^2→𝔻^2, which fixes the boundary pointwise and has P as an n-periodic orbit induced by β, has m-periodic orbits linked to P.

We briefly describe the idea of the algorithm. Given a cyclic braid β∈B_n, due to the identification B_n≅MCG(𝔻^2, A), we obtain an orientation-preserving homeomorphism of the 2-disc with A=P the given n-periodic orbit. Then we can apply Theorem <ref> to determine forced extensions β'∈B_n,m of β by computing the trace tr_B_n+m ζ_n,m(β).

Application: We recall that in Theorem <ref> we deduced that two m-periodic points of a homeomorphism of the 2-disc, which leaves invariant an n-point set, are strong Nielsen equivalent if and only if their corresponding braids are conjugate in B_n+m, that is, they have the same braid type. Now, applying Theorem <ref>, we conclude that the forced periodic orbits one obtains from Theorem <ref> are not strong Nielsen equivalent. Thus, two forced extensions β'_1, β'_2∈B_n,m of β∈B_n that correspond to two different forced periodic orbits of a given orbit cannot be strong Nielsen equivalent.
They cannot be strong Nielsen equivalent because, if they were, they would have the same braid type and thus would appear as a single conjugacy class in the trace, whereas two different forced periodic extensions correspond to two different conjugacy classes.

We conclude this section with the following remark. For m=1, using Theorem <ref>, we obtain forced periodic points of order 1, i.e. forced fixed points. As we mentioned above, all these distinct forced fixed points belong to pairwise distinct strong Nielsen classes. Thus, from Remark <ref>, it follows that the trace formula detects fixed points that all belong to different Nielsen classes. This should not be a surprising result, since the trace formula defined in <cit.> is actually a generalized version of the generalized Lefschetz number.

§ ACKNOWLEDGEMENT

The author is grateful to the University Center of Excellence "Dynamics, Mathematical Analysis and Artificial Intelligence" at the Nicolaus Copernicus University for the provided facilities, the great hospitality and the financial support. In addition, the author would like to thank Philip Boyland for his generous help and our helpful discussions.
http://arxiv.org/abs/2312.16072v1
{ "authors": [ "Stavroula Makri" ], "categories": [ "math.DS", "math.GR", "math.GT", "37E30, 20F36, 37C25" ], "primary_category": "math.DS", "published": "20231226144507", "title": "Strong Nielsen equivalence on the punctured disc" }
[email protected]@[email protected] Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore 560 012, India

The Hamiltonian nature of the precessional dynamics of the classical Heisenberg model leads to reciprocal interactions amongst the spins. In this work, we study the dynamics of a nonequilibrium classical spin chain in which the neighbours interact through a purely non-reciprocal exchange coupling [EPL 60, 418 (2002), https://iopscience.iop.org/article/10.1209/epl/i2002-00280-2] which preserves rotational symmetry. The resultant dynamics conserves neither magnetization nor energy. We uncover other local conservation laws in their place in the extreme case of a strictly antisymmetric coupling. We show numerically that the model undergoes an analogue of thermalization. We present results on the presence of conserved quantities, their diffusive spreading and a hydrodynamic picture, and the nature of the decorrelation front upon adding an initial perturbation to the system.

Emergent hydrodynamics in a non-reciprocal classical isotropic magnet
Sriram Ramaswamy
=====================================================================

Introduction: Effective interactions that evade Newton's 3rd Law are ubiquitous in living or driven matter <cit.>. The importance of such non-reciprocity (hereafter NR) has long been appreciated in the context of learning and associative memory <cit.>, and the subject is currently experiencing a resurgence of interest <cit.>. The breaking of reciprocity leads to a host of highly original effects including oscillations and travelling waves without inertia, advection without a velocity, pursuit-and-capture behaviour, and exceptional-point phase transitions <cit.>. An early study <cit.> explored criticality, chaos and control in the reversible precessional dynamics of a lattice of classical Heisenberg spins, with distinct exchange couplings for right and left neighbours. It was argued <cit.> that this manifestly non-reciprocal and non-Hamiltonian dynamics should arise generically in the presence of a sustained energy throughput, if the underlying lattice was non-centrosymmetric. The canonical dynamics of classical Heisenberg spins <cit.> {𝐒_i} on a lattice consists of Larmor precession:

d𝐒_i/dt = γ 𝐒_i × 𝐡_i,

where γ is the gyromagnetic ratio and the local magnetic field 𝐡_i = ∂H/∂𝐒_i arises from interactions with neighboring spins through a Hamiltonian H = -∑_i,j J_ij 𝐒_i·𝐒_j. The reciprocal nature of this dynamics is clear: the torque on spin i due to spin j is precisely the negative of that on j due to i. The model of <cit.> amounts to retaining precessional dynamics, with 𝐡_i = ∑_j J_ij 𝐒_j, but allowing a non-symmetric J_ij. The dynamics still preserves spin-rotational symmetry but cannot be derived from a Hamiltonian. In this Letter, we study a chain of Heisenberg spins with purely antisymmetric couplings, J_ij = -J_ji. Despite the absence of a Hamiltonian description for this maximally non-reciprocal dynamics, we find the system displays an analogue of thermalization and smooth fluctuating hydrodynamic behaviour of emergent conserved fields, namely the three components of the staggered magnetization and a scalar which we call the pseudo-energy. These are distinct from quasiconserved quantities as discovered in a class of periodically driven systems <cit.>.
We develop a self-consistent treatment which leads to diffusion for these slow fields and relaxational dynamics for the regular magnetization, which we corroborate by direct numerical simulation of the microscopic equations of motion. Finally, we numerically calculate the decorrelator for our system and show that it displays a ballistic spreading of chaos like its Hamiltonian counterpart, thereby providing further evidence for thermalization. The decorrelator or Out-of-Time-Ordered Correlator (OTOC) has proven to be a useful tool in the study of chaos spreading in quantum non-integrable systems and also in classical ones with elementary degrees of freedom, like spins, that do not have a symplectic Poisson bracket structure <cit.>. The decorrelator of a classical one-dimensional system of Heisenberg spins described by a Hamiltonian with only nearest-neighbour exchange (henceforth referred to as the Heisenberg model) was found to exhibit a ballistic spreading of chaos with a butterfly velocity even while the magnetization and energy densities displayed diffusion at high temperature, consistent with the expectation of thermalization in this system <cit.>.

Model: We study a system of classical spins 𝐒_i of unit magnitude on the sites i of a one-dimensional lattice of L sites with periodic boundary conditions. The equation of motion of the spins is

𝐒̇_i = 𝐒_i × (𝐒_i+1 - 𝐒_i-1).

Although the dynamics is non-Hamiltonian, it obeys a Liouville theorem <cit.>, i.e., the flow in spin-space is divergence-free. Eqn. (<ref>) conserves neither the magnetization 𝐌 = ∑_i 𝐒_i nor the Heisenberg Hamiltonian. However, the dynamics conserves the staggered magnetization 𝐍 = ∑_i (-1)^i 𝐒_i and the quantity ℰ = ∑_i (-1)^i 𝐒_i·𝐒_i+1, which we refer to as the pseudo-energy. These are analogues of the magnetization and energy that are conserved in the Heisenberg model. We numerically calculate the two-point correlators C_N(x,t) = ⟨𝐍(x+x_0,t+t_0)·𝐍(x_0,t_0)⟩ and C_ℰ(x,t) = ⟨ℰ(x+x_0,t+t_0)ℰ(x_0,t_0)⟩ in the statistical steady state, with the samples drawn from an ensemble with equally weighted microstates. The results are shown in Fig. (<ref>), where the good data collapse shows that correlations of 𝐍 and of ℰ spread diffusively.

Hydrodynamic description: To arrive at the continuity equations for the two conserved quantities, we rewrite the spins at the even and odd lattice indices as linear combinations of the local magnetization and staggered magnetization:

𝐒_2i = 𝐌_i + 𝐍_i,   𝐒_2i+1 = 𝐌_i - 𝐍_i.

The pseudo-energy can be similarly rewritten as

ℰ_i = 1/2 𝐒_2i·(𝐒_2i-1 - 𝐒_2i+1) = -1/2 (𝐌_i+𝐍_i)·(𝐍_i-1 - 𝐌_i-1 - 𝐍_i + 𝐌_i).

The spin dynamics (<ref>) can be written in terms of 𝐌_i and 𝐍_i and coarse-grained, retaining only the most relevant lower-order gradient terms. In a naïve continuum limit, in which coarse-grained expressions for products of variables are replaced by the products of the individual coarse-grained quantities, we obtain the following hydrodynamic equations for 𝐌 and 𝐍 <cit.>:

∂_t 𝐌(x,t) = 𝐌×∂_x 𝐌 - 𝐍×∂_x 𝐍,
∂_t 𝐍(x,t) = ∂_x (𝐍×𝐌).

Further, the staggered energy in the continuum limit is ℰ(x,t) = -1/2 (𝐌+𝐍)·∂_x(𝐌-𝐍), with the dynamical equation <cit.>

∂_t ℰ = ∂_x ( (𝐌×𝐍)·∂_x(𝐌+𝐍) ).

Eqns. (<ref>) and (<ref>) have the form of continuity equations, as expected, for the conserved quantities 𝐍 and ℰ, with the currents j_𝐍 = 𝐌×𝐍 and j_ℰ = -(𝐌×𝐍)·∂_x(𝐌+𝐍) respectively. Equations (<ref> - <ref>) have been obtained for the hydrodynamic (i.e. low wavenumber) modes of 𝐌 and 𝐍.
The conservation law for 𝐍 suggests that it is a slow variable evolving against the backdrop of the decaying non-conserved variable 𝐌. We expect that the effect of the large-wavenumber (fast) modes on these hydrodynamic modes is to provide a noise for each, which, in conjunction with the non-linearities in (<ref>), (<ref>), gives rise to diffusion for the conserved mode 𝐍 and relaxation for the non-conserved mode 𝐌. We thus write down the following effective hydrodynamic equations for 𝐌 and 𝐍:

∂_t 𝐌 = -𝐌/τ + u_M (𝐌×∂_x 𝐌) - u_N (𝐍×∂_x 𝐍) + ξ

and

∂_t 𝐍 = D ∂_x^2 𝐍 - u_MN ∂_x (𝐌×𝐍) + ζ,

where ξ and ζ are a non-conserving and a conserving noise, consistent with 𝐌 being non-conserved and 𝐍 conserved respectively. Further,

⟨ξ^α(x,t) ξ^β(x',t')⟩ = A_M δ^αβ δ(x-x') δ(t-t')

and

⟨ζ^α(x,t) ζ^β(x',t')⟩ = -A_N δ^αβ ∇^2 δ(x-x') δ(t-t').

In the above equations, in addition to the relaxation time τ, the diffusion constant D and the noises ξ and ζ, we assume there are effective non-linear couplings u_M, u_N and u_MN. The detailed derivation of the above equations by integrating out the fast modes is an involved exercise, which we defer to future work. Here, we motivate the form of the above equations from the structure of Eqns. (<ref>, <ref>), with the observation that the parameters u_M, u_N, u_MN, τ and D, along with the noise strengths A_M and A_N, are not independent and satisfy relations among themselves. These relations allow us to argue for the existence of diffusion of 𝐍 and relaxation of 𝐌, while calculating the diffusion constant D and the relaxation time τ self-consistently. We proceed by adapting the formalism of <cit.> to our system. Eqns. (<ref>, <ref>) allow us to calculate the propagators for the fields 𝐌 and 𝐍 perturbatively in the couplings u_M, u_N and u_MN. The self-energy Σ_M for the field 𝐌 is then set equal to -1/τ and the self-energy Σ_N for 𝐍 to -Dq^2 (where q is the wavenumber) to obtain self-consistent equations for the relaxation time and the diffusion constant. Fig. (<ref>) shows the diagrams that contribute to the self-energies up to one loop. These yield the following equations <cit.>:

1/τ = u_M^2 A_M τ^2 Λ^3 + u_N u_MN A_N Λ / D^2,
D = u_MN^2 A_M √(τ^3/D) + u_N u_MN A_N √(τ/D^3),

where we have absorbed the numerical prefactors arising from the integrals in the calculation of the diagrams in Fig. (<ref>) into the definitions of the effective parameters u_M, u_N, u_MN, A_M and A_N. This allows us to obtain the terms on the right-hand side of Eqn. (<ref>) simply from power counting of the diagrams in terms of the ultraviolet cutoff Λ <cit.>. Note that the effective parameters in Eqns. (<ref>) and (<ref>) are, in principle, dependent on the values of 𝐍 and ℰ, since these are the quantities conserved by the dynamics. To numerically demonstrate the presence of diffusion, we perform a calculation analogous to that performed for the magnetization and energy of the classical Heisenberg model in <cit.>. We calculate the two-point correlator C_N(x,t) of 𝐍 by sampling over all possible initial states with equal weight. This is an unbiased calculation over all microstates and is simpler to perform than one with fixed values of the conserved quantities. For the Heisenberg model, in the thermodynamic limit, it corresponds to zero values of both the energy and the magnetization, by the equivalence of ensembles. For our model, it presumably corresponds to zero values of both 𝐍 and ℰ in the thermodynamic limit.

Numerical results: Each individual configuration of our system is evolved according to Eqn. (<ref>), with Δt = 0.001-0.002, using an RK4 integrator.
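A minimal numpy sketch of this evolution, together with a direct check of the conservation of 𝐍 and ℰ, is given below. This is our illustration rather than the code used for the figures; the re-normalization of the spin lengths after every step is a stabilization we add, which the text does not specify.

import numpy as np

def torque(S):
    # dS_i/dt = S_i x (S_{i+1} - S_{i-1}), with periodic boundaries.
    return np.cross(S, np.roll(S, -1, axis=0) - np.roll(S, 1, axis=0))

def rk4_step(S, dt):
    k1 = torque(S)
    k2 = torque(S + 0.5 * dt * k1)
    k3 = torque(S + 0.5 * dt * k2)
    k4 = torque(S + dt * k3)
    S = S + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return S / np.linalg.norm(S, axis=1, keepdims=True)  # keep |S_i| = 1

def staggered_magnetization(S):
    signs = (-1.0) ** np.arange(len(S))
    return (signs[:, None] * S).sum(axis=0)

def pseudo_energy(S):
    signs = (-1.0) ** np.arange(len(S))
    return float((signs * (S * np.roll(S, -1, axis=0)).sum(axis=1)).sum())

rng = np.random.default_rng(1)
L, dt = 512, 0.001                      # L even, as required by the staggering
S = rng.normal(size=(L, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # uniformly random unit spins

N0, E0 = staggered_magnetization(S), pseudo_energy(S)
for _ in range(10_000):
    S = rk4_step(S, dt)
print(np.abs(staggered_magnetization(S) - N0).max(), abs(pseudo_energy(S) - E0))
# Both drifts stay at the level of the integrator error.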
The system size for the correlator calculation is L = 512, with 5000 initial conditions sampled from an ensemble in which all microstates are equally weighted. The correlator C(x,t) is calculated by sliding over the reference point x_0, t_0, averaging over 500 consecutive snapshots for each fixed time t, for each initial state. Within hydrodynamic theory, the two-point correlator of a conserved quantity takes the scaling form C(x,t) ∼ t^-α f(t^-α x), with α the scaling exponent and f a universal function. The numerical results for C_N(x,t) and C_ℰ(x,t) in Fig. (<ref>) show that the two-point correlators of both conserved quantities decay in time in a manner consistent with diffusion (α = 1/2). The insets show a scaling collapse of both correlators to the diffusive form t^-1/2 f(t^-1/2 x), where the exact function f is different for the two cases. The magnetization, starting from different initial states, is numerically found to decay rapidly to zero, as expected from the relaxational dynamics for 𝐌 <cit.>. In particular, for initial conditions in which the magnetization has a modulation at a wavevector k about a uniformly magnetized state, the relaxation time is finite in the limit k→0, as expected from the finite value of τ we have assumed in our theory.

Decorrelator and chaos propagation: We now turn to a different characterization of the dynamics of our system, namely the spreading of chaos. Chaotic dynamics is considered to be an important indicator of the presence of thermalization in Hamiltonian systems. Chaos spreading in the classical Heisenberg chain has been quantified through a numerical calculation of the decorrelator, which shows a ballistic spreading of chaos with a butterfly velocity <cit.>. Further, a calculation of the decorrelator allows one to obtain a Lyapunov exponent even for spin systems, whose elementary degrees of freedom do not have a symplectic Poisson bracket structure. We perform a similar calculation of the decorrelator for our system. The decorrelator is defined as

D(x_i, t) = 1 - ⟨𝐒_i^a(t)·𝐒_i^b(t)⟩,

where a and b are two different initial conditions which differ only at a single site of the lattice (i=0), as 𝐒^b_0(0) = 𝐒^a_0(0) + δ𝐒_0, where

δ𝐒_0 = ε[𝐧̂×𝐒^a_0(0)],   𝐧̂ = [𝐳̂×𝐒^a_0(0)] / |𝐳̂×𝐒^a_0(0)|,

𝐳̂ is the unit vector along an arbitrarily defined z-axis and ε is the strength of the perturbation (i.e. a measure of how different the two initial conditions are). As in the calculation of the two-point correlators, all possible initial conditions are equally weighted in performing the average ⟨…⟩ in Eqn. (<ref>). We used L = 2048 and Δt = 0.001 in the numerical evolution of (<ref>), averaged over 10000 uniformly drawn initial configurations, to evaluate the decorrelator. The spin arrays were stored in quadruple numerical precision. A colormap of the decorrelator obtained is shown in Fig. (<ref>). It can be clearly seen that the disturbance of strength ε introduced at site i=0 spreads ballistically, with a clearly defined butterfly velocity v_B. We obtain v_B = 1.35(±0.02), which is different from the value obtained for the classical Heisenberg model <cit.>, illustrating the difference in the microscopic dynamics of the two models. In the vicinity of the ballistically propagating front, the decorrelator can be approximated as <cit.>

log[D(x,t)/ε^2] = 2κt[1 - (x/v_B t)^2],

where κ is the Lyapunov exponent of the system. A fit to this form in the vicinity of the front is also shown in Fig. (<ref>), from which one can obtain a Lyapunov exponent as well, equal to 0.49(±0.02). This value falls within the error range of the value calculated for the Heisenberg model <cit.>.
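A sketch of this measurement for a single initial condition follows; it reuses rk4_step from the previous snippet, and the perturbation strength, the double precision and the re-normalization of the perturbed spin (the perturbation is transverse, so this is an O(ε^2) correction) are our illustrative choices, not the values used for the figures.

import numpy as np

def decorrelator_single(L=2048, n_steps=20_000, dt=0.001, eps=1e-6, seed=2):
    """One-sample decorrelator 1 - S^a_i(t).S^b_i(t); average the output
    over many seeds to estimate D(x_i, t)."""
    rng = np.random.default_rng(seed)
    Sa = rng.normal(size=(L, 3))
    Sa /= np.linalg.norm(Sa, axis=1, keepdims=True)
    Sb = Sa.copy()

    # Perturb site 0: delta S_0 = eps (n_hat x S^a_0(0)),
    # with n_hat = (z_hat x S^a_0(0)) / |z_hat x S^a_0(0)|.
    n_hat = np.cross([0.0, 0.0, 1.0], Sa[0])
    n_hat /= np.linalg.norm(n_hat)
    Sb[0] = Sa[0] + eps * np.cross(n_hat, Sa[0])
    Sb[0] /= np.linalg.norm(Sb[0])

    D = np.empty((n_steps, L))
    for t in range(n_steps):
        Sa, Sb = rk4_step(Sa, dt), rk4_step(Sb, dt)
        D[t] = 1.0 - (Sa * Sb).sum(axis=1)
    return D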
Discussion: To summarize, we have studied the dynamics of a classical spin chain with non-reciprocal couplings, which cannot be derived from a Hamiltonian employing standard Poisson bracket relations. We find that the dynamics conserves the staggered magnetization and a pseudo-energy, which are analogous to the conserved magnetization and energy of the classical Heisenberg model. We have developed a hydrodynamic description of the model which self-consistently produces diffusion for the staggered magnetization and relaxation of the magnetization, for which we have also provided numerical evidence by simulating the microscopic dynamics. We have also shown that the model we study displays a ballistic spreading of chaos surprisingly similar to that seen in the Heisenberg model, and we have numerically obtained the butterfly velocity and Lyapunov exponent. Our primary conclusion is that there is clearly an analogue of thermalization in the model we have studied, despite the absence of a Hamiltonian. The emergence of diffusion in conserved quantities from an underlying precessional dynamics, and the ballistic spreading of chaos in strict analogy with the Heisenberg model (for which these phenomena are presented as evidence of thermalization), form the basis of our conclusion. Our findings open up the possibility of expanding the notion of thermalization to encompass the behaviours observed in non-reciprocal systems such as the one we study. An investigation of the nature of the long-time steady state and a statistical description of it in terms of suitable ensembles is a fertile direction for future work.

Acknowledgements: SR acknowledges the SERB, India, for support in the form of a J C Bose National Fellowship. SM acknowledges support from the DST, Govt. of India. NB acknowledges support from UGC India.

[Das et al. (2002)] J. Das, M. Rao, and S. Ramaswamy, EPL (Europhysics Letters) 60, 418 (2002). doi:10.1209/epl/i2002-00280-2
[Ivlev et al. (2015)] A. V. Ivlev, J. Bartnick, M. Heinen, C.-R. Du, V. Nosenko, and H. Löwen, Physical Review X 5, 011035 (2015). https://journals.aps.org/prx/abstract/10.1103/PhysRevX.5.011035
[Dadhichi et al. (2020)] L. P. Dadhichi, J. Kethapelli, R. Chajwa, S. Ramaswamy, and A. Maitra, Physical Review E 101, 052601 (2020). doi:10.1103/PhysRevE.101.052601
[Saha et al. (2020)] S. Saha, J. Agudo-Canalejo, and R. Golestanian, Phys. Rev. X 10, 041009 (2020). doi:10.1103/PhysRevX.10.041009
[You et al. (2020)] Z. You, A. Baskaran, and M. C. Marchetti, Proceedings of the National Academy of Sciences 117, 19767 (2020). https://www.pnas.org/doi/10.1073/pnas.2010318117
[6] T. Frohoff-Hülsmann, J. Wrembel, and U. Thiele, Phys. Rev. E 103, 042602 (2021).
[7] N. Uchida and R. Golestanian, Phys. Rev. Lett. 104, 178103 (2010).
[8] G. Parisi, J. Phys. A: Math. Gen. 19, L675 (1986).
[9] G. Hennequin, T. P. Vogels, and W. Gerstner, Phys. Rev. E 86, 011909 (2012).
[10] M. Fruchart, R. Hanai, P. B. Littlewood, and V. Vitelli, Nature 592, 363 (2021).
[11] M. J. Bowick, N. Fakhri, M. C. Marchetti, and S. Ramaswamy, Phys. Rev. X 12, 010501 (2022).
[12] R. Bachelard, N. Piovella, and S. Gupta, Phys. Rev. E 99, 010104 (2019).
[13] H. Hong and S. H. Strogatz, Phys. Rev. Lett. 106, 054102 (2011).
[14] L. Barberis and F. Peruani, Phys. Rev. Lett. 117, 248001 (2016).
[15] L. L. Bonilla and C. Trenado, Phys. Rev. E 99, 012612 (2019).
[16] R. Hanai, A. Edelman, Y. Ohashi, and P. B. Littlewood, Phys. Rev. Lett. 122, 185301 (2019).
[17] R. Hanai and P. B. Littlewood, Phys. Rev. Res. 2, 033018 (2020).
[18] A. Galda and V. M. Vinokur, Sci. Rep. 9 (2019), doi:10.1038/s41598-019-53455-0.
[19] S.-k. Ma and G. F. Mazenko, Phys. Rev. B 11, 4077 (1975).
[20] A. J. McRoberts, H. Zhao, R. Moessner, and M. Bukov, Phys. Rev. Res. 5, 043008 (2023).
[21] V. B. Bulchandani, D. A. Huse, and S. Gopalakrishnan, Phys. Rev. B 105, 214308 (2022).
[22] V. Khemani, A. Vishwanath, and D. A. Huse, Phys. Rev. X 8, 031057 (2018).
[23] J. Maldacena, S. H. Shenker, and D. Stanford, A bound on chaos, JHEP 08 (2016) 106 [arXiv:1503.01409].
[24] A. Das, S. Chakrabarty, A. Dhar, A. Kundu, D. A. Huse, R. Moessner, S. S. Ray, and S. Bhattacharjee, Phys. Rev. Lett. 121, 024101 (2018).
[25] A. J. McRoberts, T. Bilitewski, M. Haque, and R. Moessner, Phys. Rev. B 105, L100403 (2022).
[26] See supplemental information.
[27] S.-k. Ma and G. F. Mazenko, Phys. Rev. B 11, 4077 (1975).

Supplemental information

§.§ Derivation of the hydrodynamic equations

The equations of motion for the even and odd spins are

Ṡ_2i = S_2i × (S_2i+1 − S_2i−1), Ṡ_2i+1 = S_2i+1 × (S_2i+2 − S_2i).

We define the local magnetization M_i and the local staggered magnetization N_i on the bond connecting spins S_2i and S_2i+1 as

M_i = (S_2i + S_2i+1)/2, N_i = (S_2i − S_2i+1)/2.

Using Eqns. <ref> and <ref>, we can write down the equations of motion for M_i and N_i,

Ṁ_i = M_i × ΔM_i − N_i × ΔN_i, Ṅ_i = N_i × ΔM_i − M_i × ΔN_i,

where ΔM_i = M_i − M_{i−1} and ΔN_i = N_i − N_{i−1}. Taking the continuum limit of Eqn. <ref> and retaining only the first-order terms, we obtain the hydrodynamic equations for M and N,

∂_t M = M × ∂_x M − N × ∂_x N, ∂_t N = ∂_x (N × M).

The local pseudo-energy is

ℰ_i = (1/2)[S_2i · (S_2i−1 − S_2i+1)],

from which we obtain ∂_t ℰ_i = J^E_i − J^E_{i+1}, where J^E_i = (1/2) S_2i−1 · (S_2i × S_2i−2). Writing the pseudo-energy in terms of M_i and N_i and taking the continuum limit,

ℰ = −(1/2)(M + N) · ∂_x (M − N),

and using the following equations based on Eqn. <ref>,

∂_t (M + N) = (M + N) × ∂_x (M − N), ∂_t (M − N) = (M − N) × ∂_x (M + N),

we obtain

∂_t ℰ = −(1/2) ∂_t(M + N) · ∂_x (M − N) − (1/2)(M + N) · ∂_x (∂_t(M − N))
      = −(1/2)(M + N) · ∂_x ((M − N) × ∂_x (M + N))
      = −(1/2) ∂_x ( (M + N) · ((M − N) × ∂_x (M + N)) )
      = ∂_x ( (M × N) · ∂_x (M + N) ),

where we observe that scalar triple products with a repeated vector, such as ∂_x(M + N) · ((M − N) × ∂_x(M + N)), vanish. The final equation above gives us Eqn. (4) of the main paper.

§.§ The evolution of the magnetization and staggered magnetization

Before we proceed, we show that the diffusive behaviour of the N-dynamics can be obtained, to a first-order approximation, from the steady-state description of M (∂_t M = 0):

M ≈ −τ (N × ∂_x N),
∂_t N = ∂_x (N × M) + ζ ⇒ ∂_t N ≈ −τ ∂_x (N × (N × ∂_x N)) + ζ = τ ∂_x (N^2 (1 − N̂ N̂·) ∂_x N) + ζ

(the vector identity used in the last step is checked symbolically below).
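A one-line symbolic check of the double-cross-product reduction used above (a sketch of ours, where N stands for the staggered magnetization and dN plays the role of ∂_x N):

import sympy as sp

N = sp.Matrix(sp.symbols('N1 N2 N3'))
dN = sp.Matrix(sp.symbols('dN1 dN2 dN3'))   # stands for the gradient of N
# -N x (N x dN) should equal N^2 (1 - N̂ N̂·) dN = (N·N) dN - (N·dN) N
lhs = -N.cross(N.cross(dN))
rhs = N.dot(N)*dN - N.dot(dN)*N
print(sp.simplify(lhs - rhs))   # prints the zero vector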
The interesting behaviour of the magnetization is captured in the transient state. We evaluate the non-reciprocal dynamics for initial states with a non-zero magnetization: a small periodic modulation of the spins is added to a perfectly aligned state. It can be seen from Fig. <ref> that the magnetization decays fairly rapidly to zero for different values of the modulation wavenumber k. Further, the relaxation of the magnetization in a finite time in the limit k → 0 is consistent with our assumption of a finite τ arising from the hydrodynamics. This can be seen in the form of a fairly rapid relaxation of the magnetization for the lowest value of k (= 1) possible in our numerics. A description of the structure (bounces) in the early-time behavior of the magnetization is beyond the scope of the hydrodynamic framework.

§.§ Perturbation calculation for self-energy terms

We write the M-dynamics in Fourier space, keeping the relaxation term but leaving out the non-linear contributions at zeroth order:

(−iω + 1/τ) M^α = ξ^α, G_{M0}^{αβ} = (−iω + 1/τ)^{−1} δ^{αβ}.

The non-linear terms are introduced as perturbations, which contribute to the relaxation of M through a self-energy term:

(−iω + 1/τ) M(q, ω)^α = iq (u_M ε^{αβγ} M^β M^γ − u_N ε^{αβγ} N^β N^γ) + ξ^α.

Similarly, writing the N-dynamics in Fourier space at zeroth order,

(−iω + Dq^2) N^α = ζ^α, G_{N0}^{αβ} = (−iω + Dq^2)^{−1} δ^{αβ},

and then adding the perturbation,

(−iω + Dq^2) N(q, ω)^α = iq (u_{MN} ε^{αβγ} N^β M^γ) + ζ^α.

We write the magnetization correlation function from the non-conserving noise,

⟨M^α(q,ω) M^β(q',ω')⟩ = (4π^2 A_M)/(ω^2 + τ^{−2}) δ^{αβ} δ(q+q') δ(ω+ω').

The staggered magnetization correlation function is similarly

⟨N^α(q,ω) N^β(q',ω')⟩ = (4π^2 A_N q^2)/(ω^2 + (Dq^2)^2) δ^{αβ} δ(q+q') δ(ω+ω').

Evaluating the propagator involves adding corrections from the higher-order terms in the perturbation,

G(q,ω) = G_0(q,ω) + G_0(q,ω) Σ G(q,ω).

The propagators for M and N are of the form G(q,ω) = 1/(−iω + Σ(q,ω)), where Σ(q,ω) is the self-energy. The self-energy for M obeys Σ_M(q=0, ω=0) = −1/τ, since M is not a conserved field of the dynamics and is thus expected to relax. The self-energy for N obeys Σ_N(q → 0, ω=0) = −Dq^2, as N is conserved and thus expected to display diffusion. The structure of the propagator G_{N0}(q,ω) dictates that each power of ω goes with −2 powers of q. A similar correspondence must hold for G_N(q,ω) as well if the perturbation series is to converge in the q, ω → 0 limit. We can obtain τ and D in terms of the other parameters in Eqns. <ref>–<ref> self-consistently by calculating the above-mentioned self-energies. This calculation to one loop is described below. The relevant diagrams are shown in Fig. <ref>. It can be seen that each diagram is given by sums involving integrals of the form

u_1 u_2 ∫∫ (α_1 q + β_1 p)(α_2 q + β_2 p) f(p) / [ (ν^2 + g(p)^2)(−iν + h(p)) ] dν dp,

where f(p) ∈ {A_M, A_N p^2}, g(p) ∈ {Dp^2, τ^{−1}}, h(p) ∈ {Dp^2, τ^{−1}}, and u_1, u_2 ∈ {u_M, u_N, u_{MN}}; α_1, β_1, α_2 and β_2 are numerical constants, and p and ν are the internal wavenumber and frequency. The factors in the denominator of the integrand come from the propagators, the factor f(p) from the noise correlator, and the other two factors of the form αq + βp come from the two vertices in each diagram, since each is linear in the momenta of the associated propagators. The different integrals that contribute to a particular diagram are obtained from the different ways of dividing the external wavenumber q and frequency ω into the wavenumbers p and q−p and frequencies ν and ω−ν of the propagators in the loop, in addition to choosing the different components of the vector fields M and N in the loop.
We are interested only in the leading-order dependence on the ultraviolet cutoff, and so have replaced the propagators in the loop with their low-wavenumber and low-frequency forms and have taken the external frequency ω to be zero. As mentioned in the main text, numerical pre-factors from the above integrals can be absorbed into the definitions of the coupling strengths u_M, u_N and u_{MN} and the noise strengths A_M and A_N. Thus, the contribution of each diagram can be obtained from a simple power counting of the contributing integrals. Performing the integrals over the frequency ν for each diagram and adding, we obtain

Σ(q, ω=0) ∼ −u_1 u_2 ∫_{−Λ}^{Λ} (E q^2 + F p^2) f(p) / ( g(p) [g(p) + h(p)] ) dp

for each of the diagrams in Fig. <ref>, where Λ is the ultraviolet cutoff for the wavenumber and E and F are numerical constants. Since f(p), g(p) and h(p) are all even functions of p, there are no terms linear in p in the numerator of the integrand. The diagrams in the top row of Fig. <ref> contribute to Σ_M. For these, we set q = 0 in Eqn. <ref>. For the diagram on the left, u_1 = u_2 = u_M, f(p) = A_M, g(p) = h(p) = τ^{−1}, producing a contribution

−u_M^2 A_M τ^2 ∫_{−Λ}^{Λ} p^2 dp ∼ −u_M^2 A_M τ^2 Λ^3.

For the diagram on the right, u_1 = u_N, u_2 = u_{MN}, f(p) = A_N p^2, g(p) = Dp^2, h(p) = τ^{−1}, which gives

−(u_N u_{MN} A_N / D) ∫_{−Λ}^{Λ} p^2/(Dp^2 + τ^{−1}) dp ∼ −u_N u_{MN} A_N Λ / D^2.

Thus, we obtain the first of the two self-consistent equations,

1/τ = u_M^2 A_M τ^2 Λ^3 + u_N u_{MN} A_N Λ / D^2.

The diagrams in the bottom row of Fig. <ref> contribute to Σ_N. For these, it turns out that F = 0, and so they give contributions ∼ q^2, as expected from the diffusive behavior of N. For the diagram on the left, u_1 = u_2 = u_{MN}, f(p) = A_M, g(p) = τ^{−1} and h(p) = Dp^2, yielding

−q^2 u_{MN}^2 A_M τ ∫_{−Λ}^{Λ} 1/(Dp^2 + τ^{−1}) dp ∼ −q^2 u_{MN}^2 A_M √(τ^3/D).

Finally, for the diagram on the right, u_1 = u_{MN}, u_2 = u_N, f(p) = A_N p^2, g(p) = Dp^2, h(p) = τ^{−1}, giving

−q^2 (u_{MN} u_N A_N / D) ∫_{−Λ}^{Λ} 1/(Dp^2 + τ^{−1}) dp ∼ −q^2 u_N u_{MN} A_N √(τ/D^3).

From these we obtain the other self-consistent equation,

D = u_{MN}^2 A_M √(τ^3/D) + u_N u_{MN} A_N √(τ/D^3).
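The pair of self-consistent equations above is straightforward to solve numerically. A minimal sketch of ours follows; the O(1) values chosen for the couplings, noise strengths and cutoff are purely illustrative, and fsolve should converge for these values.

import numpy as np
from scipy.optimize import fsolve

# illustrative parameters; numerical prefactors are absorbed as described in the text
uM = uN = uMN = AM = AN = Lam = 1.0

def self_consistency(x):
    tau, D = x
    eq1 = 1.0/tau - (uM**2*AM*tau**2*Lam**3 + uN*uMN*AN*Lam/D**2)
    eq2 = D - (uMN**2*AM*np.sqrt(tau**3/D) + uN*uMN*AN*np.sqrt(tau/D**3))
    return [eq1, eq2]

tau, D = fsolve(self_consistency, x0=[0.5, 1.5])
print("self-consistent tau =", tau, " D =", D)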
§ COMPARISON WITH THE HEISENBERG DYNAMICS

To test the accuracy of our numerical code, we first ran the simulation for the Heisenberg dynamics, which originates from a Hamiltonian, and obtained results for the conserved quantities, namely the standard magnetization and energy density. We confirmed that the conserved quantities show diffusive behaviour (Fig. <ref>), thus setting up a standard against which to corroborate our results. We also calculated the numerical values of the butterfly velocity and Lyapunov exponent for this case, and found them to be within the expected error range of our simulation parameters, v_B = 1.66(±0.02), κ = 0.49(±0.02) (Fig. <ref>), with ε = 10^{−3}, L = 2048, Δt = 0.001–0.002. Finding κ from the expression log(D(x,t)/ε^2) = 2κt(1 − (x/v_B t)^2) requires knowledge of D(0,t), but the perturbation strength (10^{−3}) is not small enough for the numerics to find an appreciable jump in log(D(x,t)/ε^2) from t = 0 itself, as evident from the inset of the plot. Finally, we also find the power law associated with the broadening of the decorrelator front (Fig. <ref>). The approach here involves comparing the decorrelator to a threshold value, D_0 = 100ε^2, and marking the time at which the single-configuration arrival front exceeds this threshold, D(x,t) ≥ D_0. Collecting this data for several samples (∼10^4), we see that the arrival front of the decorrelator broadens with time. The distribution of the arrival times about the mean arrival time (the slope of which with respect to position is just the inverse arrival velocity) shows a collapse when fit to a 1/3 power law.

§.§ Liouville's theorem holds for the generalized nearest-neighbour precessional dynamics

The canonical dynamics of SO(3) spins on a lattice is governed by

dS_i^α/dt = ∑_{βγ} ε_{αβγ} S_i^β ∂H/∂S_i^γ,

so that

∑_α ∂Ṡ_i^α/∂S_i^α = ∑_{αβγ} ε_{αβγ} ( (∂S_i^β/∂S_i^α) ∂H/∂S_i^γ + S_i^β ∂^2 H/∂S_i^α ∂S_i^γ ).

The terms in the parenthesis, being symmetric in αβ and αγ respectively, are eliminated by ε_{αβγ}. Thus the velocity in S-space is divergence-free. This argument holds true even for the generalized dynamics

Ṡ_i^α = ε_{αβγ} S_i^β (S_{i+1}^γ ± S_{i−1}^γ):

∑_α ∂Ṡ_i^α/∂S_i^α = ∑_{αβγ} ε_{αβγ} (∂S_i^β/∂S_i^α)(S_{i+1}^γ ± S_{i−1}^γ).

The above term vanishes since ∂S_i^β/∂S_i^α = δ_{αβ} is symmetric while ε_{αβγ} is antisymmetric, and ∂S_j^α/∂S_i^β = 0 for any α, β with i ≠ j. A similar argument, starting with the Landau-Lifshitz equation, was presented in <cit.> in work on non-reciprocal spin models.

References:
[1] R. Hanai, Non-reciprocal frustration: time crystalline order-by-disorder phenomenon and a spin-glass-like state, arXiv:2208.08577 (2022).
http://arxiv.org/abs/2312.16500v1
{ "authors": [ "Nisarg Bhatt", "Subroto Mukerjee", "Sriram Ramaswamy" ], "categories": [ "cond-mat.stat-mech" ], "primary_category": "cond-mat.stat-mech", "published": "20231227101332", "title": "Emergent hydrodynamics in a non-reciprocal classical isotropic magnet" }
http://arxiv.org/abs/2312.16025v1
{ "authors": [ "Minki Hhan", "Tomoyuki Morimae", "Takashi Yamakawa" ], "categories": [ "quant-ph", "cs.CC", "cs.CR" ], "primary_category": "quant-ph", "published": "20231226122710", "title": "A Note on Output Length of One-Way State Generators" }
On a nonlocal de Sitter gravity

Ivan Dimitrijević, Branko Dragovich, Zoran Rakić and Jelena Stanković

Ivan Dimitrijević: Faculty of Mathematics, University of Belgrade, Studentski trg 16, p.o. box 550, 11000 Belgrade, Serbia, [email protected]
Branko Dragovich: Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia, [email protected]; Mathematical Institute of the Serbian Academy of Sciences and Arts, Kneza Mihaila 36, 11000 Belgrade, Serbia
Zoran Rakić: Faculty of Mathematics, University of Belgrade, Studentski trg 16, p.o. box 550, 11000 Belgrade, Serbia, [email protected]
Jelena Stanković: Faculty of Education, University of Belgrade, Kraljice Natalije 43, Belgrade, Serbia, [email protected]
In a nonlocal modification, the Einstein-Hilbert action is extended by a term that contains all higher-order degrees of the d'Alembert-Beltrami operator □ = ∇_μ ∇^μ = (1/√(−g)) ∂_μ (√(−g) g^{μν} ∂_ν), usually in the form of an analytic expression F(□) = ∑_{n=0}^{+∞} f_n □^n, see <cit.>. There are also nonlocal gravity models which contain some degrees of □^{−1}, e.g. see references <cit.>. One interesting class of nonlocal de Sitter models has the form

S = (1/16πG) ∫ d^4x √(−g) ( R − 2Λ + P(R) ℱ(□) Q(R) ),

where Λ is the cosmological constant, P(R) and Q(R) are some differentiable functions of the scalar curvature R, and ℱ(□) is an analytic function of □, see <cit.> and references therein. To better explore pure nonlocal effects, the matter term is intentionally omitted in (<ref>). This paper is a brief overview of highlights of the nonlocal de Sitter gravity model (<ref>), where

P(R) = Q(R) = √(R − 2Λ), ℱ(□) = ∑_{n=1}^{+∞} ( f_n □^n + f_{−n} □^{−n} ).

In particular, we will present several relevant exact vacuum cosmological solutions and some aspects of the corresponding Schwarzschild-de Sitter metric. The first step in finding some exact cosmological solutions is solving the equation

□ √(R − 2Λ) = q √(R − 2Λ),

where q = ηΛ (η ∈ ℝ) is an eigenvalue and √(R − 2Λ) is an eigenfunction of the operator □. One of these solutions mimics effects that are usually assigned to dark matter and dark energy. Some other solutions are examples of nonsingular bounce ones in flat, closed and open universes. There are also singular and cyclic solutions. All these cosmological solutions are a result of nonlocality and do not exist in the local de Sitter case.

§ NONLOCAL DE SITTER GRAVITY: √(dS) MODEL

§.§ Action

Our nonlocal de Sitter gravity model (introduced in <cit.>) is given by the action

S = (1/16πG) ∫ d^4x √(−g) √(R − 2Λ) F(□) √(R − 2Λ),

where F(□) is the following formal expansion in terms of the d'Alembertian:

F(□) = 1 + ℱ(□) = 1 + ℱ_+(□) + ℱ_−(□), ℱ_+(□) = ∑_{n=1}^{+∞} f_n □^n, ℱ_−(□) = ∑_{n=1}^{+∞} f_{−n} □^{−n}.

When F(□) = 1, i.e. ℱ(□) = 0, model (<ref>) becomes the local de Sitter one and coincides with the Einstein-Hilbert action with cosmological constant Λ:

S_0 = (1/16πG) ∫ d^4x √(−g) √(R − 2Λ) √(R − 2Λ) = (1/16πG) ∫ d^4x √(−g) (R − 2Λ).

It is worth pointing out that action (<ref>) can be obtained in a very simple and natural way from action (<ref>) by embedding the nonlocal operator (<ref>) within the product √(R − 2Λ) √(R − 2Λ). In this way, R and Λ enter the nonlocal version in the same form as in the local one, and the nonlocal operator F(□) is dimensionless. In order to differentiate (<ref>) from other non-local de Sitter models, we will often denote model (<ref>) as √(dS).
§.§ Equations of Motion

The equations of motion (EoM) for model (<ref>), when Q(R) = P(R), are given by (for more details, see <cit.>):

G_{μν} + Λ g_{μν} − (g_{μν}/2) P(R) ℱ(□) P(R) + R_{μν} W − K_{μν} W + (1/2) Ω_{μν} = 0,
W = 2 P'(R) ℱ(□) P(R), K_{μν} = ∇_μ ∇_ν − g_{μν} □,
Ω_{μν} = ∑_{n=1}^{+∞} f_n ∑_{ℓ=0}^{n−1} S_{μν}(□^ℓ P, □^{n−1−ℓ} P) − ∑_{n=1}^{+∞} f_{−n} ∑_{ℓ=0}^{n−1} S_{μν}(□^{−ℓ−1} P, □^{−n+ℓ} P),

where S_{μν}(A, B) is defined as

S_{μν}(A, B) = g_{μν} (∇^α A ∇_α B + A □ B) − 2 ∇_μ A ∇_ν B.

If P(R) is an eigenfunction of the corresponding d'Alembert-Beltrami operator □, and consequently also of its inverse □^{−1}, i.e., if for q ≠ 0

□ P(R) = q P(R), □^{−1} P(R) = q^{−1} P(R), ℱ(□) P(R) = ℱ(q) P(R)

holds, then

W = 2 ℱ(q) P' P, ℱ(q) = ∑_{n=1}^{+∞} f_n q^n + ∑_{n=1}^{+∞} f_{−n} q^{−n},
S_{μν}(□^ℓ P, □^{n−1−ℓ} P) = q^{n−1} S_{μν}(P, P),
S_{μν}(□^{−ℓ−1} P, □^{−n+ℓ} P) = q^{−n−1} S_{μν}(P, P),
S_{μν}(P, P) = g_{μν} (∇^α P ∇_α P + P □ P) − 2 ∇_μ P ∇_ν P,
Ω_{μν} = ℱ'(q) S_{μν}(P, P), ℱ'(q) = ∑_{n=1}^{+∞} n f_n q^{n−1} − ∑_{n=1}^{+∞} n f_{−n} q^{−n−1},

and we get

G_{μν} + Λ g_{μν} − (g_{μν}/2) ℱ(q) P^2 + 2 ℱ(q) R_{μν} P P' − 2 ℱ(q) K_{μν} P P' + (1/2) ℱ'(q) S_{μν}(P, P) = 0.

The last equation can be rewritten as

(G_{μν} + Λ g_{μν})(1 + 2 ℱ(q) P P') + ℱ(q) g_{μν} ( −(1/2) P^2 + P P'(R − 2Λ) ) − 2 ℱ(q) K_{μν} P P' + (1/2) ℱ'(q) S_{μν}(P, P) = 0.

If P(R) = √(R − 2Λ), then P(R) P'(R) = 1/2 and

□ √(R − 2Λ) = q √(R − 2Λ) = ηΛ √(R − 2Λ), ηΛ ≠ 0,

where q = ηΛ and q^{−1} = η^{−1} Λ^{−1} (η dimensionless) follows from the dimensionality of equalities (<ref>). Since P(R) = √(R − 2Λ), EoM (<ref>) simplify to

(G_{μν} + Λ g_{μν})(1 + ℱ(q)) + (1/2) ℱ'(q) S_{μν}(√(R−2Λ), √(R−2Λ)) = 0.

It is clear that EoM (<ref>) are satisfied if

ℱ(q) = −1 and ℱ'(q) = 0.

It is worth pointing out that not only is the nonlocal de Sitter model (<ref>) very simple and natural, but so are the corresponding EoM (<ref>), compared with all other models and their EoM that can be derived from (<ref>) with Λ ≠ 0. Let us remark that a nonlocal operator ℱ(□) which satisfies conditions (<ref>) in model (<ref>) can be taken in the rather general form <cit.>

ℱ(□) = α e (□/q) exp(−□/q) + β e (q/□) exp(−q/□), α + β = −1, q = ηΛ.

§ COSMOLOGICAL SOLUTIONS

§.§ Cosmological Solutions in Homogeneous and Isotropic Space

At the cosmological scale the universe is homogeneous and isotropic, with the Friedmann-Lemaître-Robertson-Walker (FLRW) metric (c = 1, k = 0, ±1)

ds^2 = −dt^2 + a^2(t) ( dr^2/(1 − k r^2) + r^2 dθ^2 + r^2 sin^2 θ dϕ^2 ).

For the FLRW metric (<ref>) we have the following expressions for the scalar curvature R and the operator □:

R(t) = 6 ( ä/a + (ȧ/a)^2 + k/a^2 ), □R = −∂^2 R/∂t^2 − 3H ∂R/∂t,

where H = (1/a)(da/dt) ≡ ȧ/a is the Hubble parameter. In the sequel of this subsection, we present some exact cosmological solutions with Λ ≠ 0. For some details, see <cit.> and references therein.

§.§.§ Solutions of the form a(t) = A t^n e^{γ t^2} (k = 0)

The next eight solutions are in flat space (k = 0); see <cit.>. There are two solutions of the form

a(t) = A t^n e^{γ t^2}, k = 0,

where n and γ are some definite real constants. The eigenvalue problem

□ √(R − 2Λ) = q √(R − 2Λ), q = ηΛ ≠ 0,

is satisfied in the following two cases:

1. n = 2/3, γ = Λ/14, q = −(3/7)Λ;
2. n = 0, γ = Λ/6, q = −Λ.

Using (<ref>) and (<ref>), we obtain the following two solutions in flat space:

a_1(t) = A t^{2/3} e^{(Λ/14) t^2}, k = 0, ℱ(−(3/7)Λ) = −1, ℱ'(−(3/7)Λ) = 0,
a_2(t) = A e^{(Λ/6) t^2}, k = 0, ℱ(−Λ) = −1, ℱ'(−Λ) = 0.

§.§.§ Solutions of the form a(t) = (α e^{λt} + β e^{−λt})^γ (k = 0)

We have the following two special solutions:

a_3(t) = A cosh^{2/3}(√(3Λ/8) t), k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0,
a_4(t) = A sinh^{2/3}(√(3Λ/8) t), k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0.
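Solutions of this kind are easy to verify symbolically. The following sympy sketch (ours, not from the paper) checks that the scale factor a_1(t) satisfies the eigenvalue equation □√(R − 2Λ) = q√(R − 2Λ) with q = −(3/7)Λ, using □u = −ü − 3Hu̇ for a scalar u(t) in a flat FLRW background:

import sympy as sp

t, Lam = sp.symbols('t Lambda', positive=True)
a = t**sp.Rational(2, 3)*sp.exp(Lam*t**2/14)   # scale factor a_1(t); the amplitude A cancels
H = sp.diff(a, t)/a
R = sp.simplify(6*(sp.diff(a, t, 2)/a + H**2)) # scalar curvature for flat FLRW (k = 0)
u = sp.sqrt(R - 2*Lam)                         # candidate eigenfunction
box_u = -sp.diff(u, t, 2) - 3*H*sp.diff(u, t)  # box operator acting on a scalar u(t)
print(sp.simplify(box_u + sp.Rational(3, 7)*Lam*u))  # prints 0, i.e. q = -(3/7)Λ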
§.§.§ Solutions of the form a(t) = (α sin λt + β cos λt)^γ (k = 0)

In this case, we obtained the following four solutions:

a_5(t) = A (1 + sin(√(−3Λ/2) t))^{1/3}, k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0,
a_6(t) = A (1 − sin(√(−3Λ/2) t))^{1/3}, k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0,
a_7(t) = A sin^{2/3}(√(−3Λ/8) t), k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0,
a_8(t) = A cos^{2/3}(√(−3Λ/8) t), k = 0, ℱ(3Λ/8) = −1, ℱ'(3Λ/8) = 0.

We have three types of vacuum solutions in the closed and open FLRW space, presented in the next two subsubsections.

§.§.§ Cosmological solution of the form a(t) = A e^{±√(Λ/6) t} (k = ±1)

In the paper <cit.>, we presented the following exact solution:

a_9(t) = A e^{±√(Λ/6) t}, k = ±1, ℱ(Λ/3) = −1, ℱ'(Λ/3) = 0, Λ > 0.

Note that this solution is different from the de Sitter one.

§.§.§ Solutions of the form a(t) = (α e^{λt} + β e^{−λt})^γ (k = ±1)

When α ≠ 0, β ≠ 0, R ≠ 2Λ, q ≠ 0 and k ≠ 0, we obtain

γ = 1/2, q = Λ/3, λ = ±√(2Λ/3), k ≠ 0.

The corresponding cosmological solutions are:

a_{10}(t) = A cosh^{1/2}(√(2Λ/3) t), k = ±1, ℱ(Λ/3) = −1, ℱ'(Λ/3) = 0,
a_{11}(t) = A sinh^{1/2}(√(2Λ/3) t), k = ±1, ℱ(Λ/3) = −1, ℱ'(Λ/3) = 0.

§.§ Cosmological Solutions in Anisotropic Space

We now present some solutions for the Bianchi type I anisotropic metric of the form (for details, we refer to <cit.>)

ds^2 = −dt^2 + a(t)^2 ( e^{2β_1(t)} dx^2 + e^{2β_2(t)} dy^2 + e^{2β_3(t)} dz^2 )

with the condition β_1(t) + β_2(t) + β_3(t) = 0. Let us introduce σ(t) as follows:

σ(t)^2 = β̇_1(t)^2 + β̇_2(t)^2 + β̇_3(t)^2.

One can obtain the following expressions:

R = R_{FLRW} + σ^2, □u(t) = □_{FLRW} u(t),

where the index FLRW denotes quantities corresponding to the FLRW metric with scale factor a(t) and k = 0. Solving the eigenvalue problem

□ √(R − 2Λ) = q √(R − 2Λ),

we obtain several solutions for the scale factor a(t) together with the function σ^2(t), with the corresponding conditions ℱ(q) = −1 and ℱ'(q) = 0. For flat space (k = 0) and constant σ^2 we have:

a_1(t) = A t^{2/3} e^{(Λ/14)(1−η) t^2}, q = −(3/7)Λ(1−η), σ^2 = 2Λη,
a_2(t) = A cosh^{2/3}((1−η)√(3Λ/8) t), q = (3Λ/8)(1−η)^2, σ^2 = 2Λη(2−η),
a_3(t) = A sinh^{2/3}((1−η)√(3Λ/8) t), q = (3Λ/8)(1−η)^2, σ^2 = 2Λη(2−η),
a_4(t) = A cos^{2/3}((1−η)√(−3Λ/8) t), q = (3Λ/8)(1−η)^2, σ^2 = 2Λη(2−η),
a_5(t) = A sin^{2/3}((1−η)√(−3Λ/8) t), q = (3Λ/8)(1−η)^2, σ^2 = 2Λη(2−η).

Note that the above anisotropic solutions (<ref>)–(<ref>) tend to the corresponding homogeneous and isotropic ones presented in the previous subsection when σ^2 → 0 (η → 0).

§ CONCLUDING REMARKS

From the Friedmann equations, it is useful to introduce the effective energy density ρ̄ and pressure p̄:

ρ̄(t) = (3/(8πG)) ( (ȧ^2 + k)/a^2 − Λ/3 ), p̄(t) = (1/(8πG)) ( Λ − 2ä/a − (ȧ^2 + k)/a^2 ),

and the equation of state

p̄(t) = w̄(t) ρ̄(t),

where w̄(t) is the corresponding effective state parameter. Using (<ref>) and (<ref>), one can compute ρ̄(t) and p̄(t) for each of the above cosmological solutions a(t). In this way, it is shown in <cit.> that the solution a(t) = A t^{2/3} e^{(Λ/14) t^2} mimics dark energy and dark matter. Recently, we started to investigate the Schwarzschild-de Sitter spacetime in √(dS) nonlocal gravity, with metric

ds^2 = −A(r) dt^2 + A^{−1}(r) dr^2 + r^2 dθ^2 + r^2 sin^2 θ dφ^2.

The corresponding eigenvalue problem

□ √(R(r) − 2Λ) = (1/r^2) ∂/∂r [ r^2 A(r) ∂/∂r √(R(r) − 2Λ) ] = q √(R(r) − 2Λ)

is rather nonlinear in A(r), since the unknown function A(r) is also contained in the scalar curvature R(r), i.e.,

R(r) = (1/r^2) ∂^2/∂r^2 [ r^2 (1 − A(r)) ].
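It is instructive to see why the classical Schwarzschild-de Sitter metric function cannot itself solve this eigenvalue problem for q ≠ 0. A quick sympy check (our illustration): for A(r) = 1 − 2M/r − Λr²/3 one gets R = 4Λ, so √(R − 2Λ) is a constant annihilated by the radial □, which is incompatible with q = ηΛ ≠ 0; hence A(r) must acquire nonlocal corrections.

import sympy as sp

r, M, Lam = sp.symbols('r M Lambda', positive=True)
A = 1 - 2*M/r - Lam*r**2/3                      # classical Schwarzschild-de Sitter
R = sp.simplify(sp.diff(r**2*(1 - A), r, 2)/r**2)
print(R)                                        # 4*Lambda: a constant
u = sp.sqrt(R - 2*Lam)                          # sqrt(2*Lambda), independent of r
box_u = sp.diff(r**2*A*sp.diff(u, r), r)/r**2   # radial box operator of the metric above
print(sp.simplify(box_u))                       # 0, so box u = q u forces q = 0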
We found an approximate solution of (<ref>), while finding an exact solution remains a challenge; see <cit.>. It is worth noting that action (<ref>) of nonlocal √(dS) gravity can be transformed to

S = (1/16πG) ∫ [ R − 2Λ + (R − 4Λ) ℱ(□) (R − 4Λ) ] √(−g) d^4x

when |R| ≪ |2Λ|; for details, see <cit.>. The nonlocal gravity model (<ref>) contains the cosmological solution a(t) = A √t e^{(Λ/4) t^2} <cit.>, which mimics an interplay between radiation (√t) and dark energy (e^{(Λ/4) t^2}). It also has some vacuum solutions that change topology when local de Sitter gravity is extended by this nonlocal one <cit.>. It is also worth noting that the employment of a nonlocal operator in the form of an exponential function comes from string theory, in particular p-adic string theory <cit.>. Nonlocality in the matter sector has also been considered; see, e.g., <cit.>. In conclusion, the results obtained so far in √(dS) nonlocal gravity are encouraging. We plan to explore other aspects of √(dS), in particular inflation, gravitational waves and the inclusion of the matter sector.

§ ACKNOWLEDGMENTS

This research was partially funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia: grant number 451-03-47/2023-01/200104 with the University of Belgrade, Faculty of Mathematics, and grant number 451-03-1/2023-01/4 with the Faculty of Education, University of Belgrade. It is also partially supported by the COST Action CA21136 – Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse). I.D. is thankful to Prof. V. Dobrev for hospitality during the conference LT-15 in Varna.

References:
[1] N. Aghanim et al. (Planck Collaboration), Planck 2018 results. VI. Cosmological parameters, A&A 641 (2020) A6 [arXiv:1807.06209 [astro-ph.CO]].
[2] S. Nojiri and S. D. Odintsov, Unified cosmic history in modified gravity: from F(R) theory to Lorentz non-invariant models, Phys. Rep. 505 (2011) 59–144 [arXiv:1011.0544 [gr-qc]].
[3] T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, Modified gravity and cosmology, Phys. Rep. 513 (2012) 1 [arXiv:1106.2476 [astro-ph.CO]].
[4] S. Nojiri, S. D. Odintsov and V. K. Oikonomou, Modified gravity theories on a nutshell: Inflation, bounce and late-time evolution, Phys. Rep. 692 (2017) 1–104 [arXiv:1705.11098 [gr-qc]].
[5] S. Capozziello and F. Bajardi, Nonlocal gravity cosmology: An overview, Int. J. Mod. Phys. D 31 (2022) 2230009 [arXiv:2201.04512 [gr-qc]].
[6] B. Dragovich, On nonlocal modified gravity and cosmology, Springer Proc. Mathematics & Statistics 111 (2014) 251–262.
[7] T. Biswas, A. Mazumdar and W. Siegel, Bouncing universes in string-inspired gravity, JCAP 0603 (2006) 009 [arXiv:hep-th/0508194].
[8] T. Biswas, E. Gerwick, T. Koivisto and A. Mazumdar, Towards singularity and ghost free theories of gravity, Phys. Rev. Lett. 108 (2012) 031101 [arXiv:1110.5249 [gr-qc]].
[9] T. Biswas, A. Conroy, A. S. Koshelev and A. Mazumdar, Generalized ghost-free quadratic curvature gravity, Class. Quantum Grav. 31 (2014) 015022 [arXiv:1308.2319 [hep-th]].
[10] T. Biswas, A. S. Koshelev, A. Mazumdar and S. Yu. Vernov, Stable bounce and inflation in non-local higher derivative cosmology, JCAP 08 (2012) 024 [arXiv:1206.6374 [astro-ph.CO]].
[11] S. Deser and R. Woodard, Nonlocal cosmology, Phys. Rev. Lett. 99 (2007) 111301 [arXiv:0706.2151 [astro-ph]].
[12] I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, Nonlocal de Sitter gravity and its exact cosmological solutions, JHEP 12 (2022) 054 [arXiv:2206.13515 [gr-qc]].
[13] L. Modesto and L. Rachwal, Super-renormalizable and finite gravitational theories, Nucl. Phys. B 889 (2014) 228 [arXiv:1407.8036 [hep-th]].
[14] A. S. Koshelev, L. Modesto, L. Rachwal and A. A. Starobinsky, Occurrence of exact R^2 inflation in non-local UV-complete gravity, JHEP 2016 (2016) 67 [arXiv:1604.03127 [hep-th]].
[15] L. Buoninfante, A. S. Koshelev, G. Lambiase and A. Mazumdar, Classical properties of non-local, ghost- and singularity-free gravity, JCAP 09 (2018) 034 [arXiv:1802.00399 [gr-qc]].
[16] E. Elizalde, E. O. Pozdeeva and S. Yu. Vernov, Stability of de Sitter solutions in non-local cosmological models, PoS QFTHEP2011 (2012) 138 [arXiv:1202.0178].
[17] A. Conroy, T. Koivisto, A. Mazumdar and A. Teimouri, Generalised quadratic curvature, non-local infrared modifications of gravity and Newtonian potentials, Class. Quantum Grav. 32 (2015) 015024 [arXiv:1406.4998 [hep-th]].
[18] S. Capozziello, E. Elizalde, Sh. Nojiri and S. D. Odintsov, Accelerating cosmologies from non-local higher-derivative gravity, Phys. Lett. B 671 (2009) 193–198.
[19] I. Dimitrijevic, B. Dragovich, A. S. Koshelev, Z. Rakic and J. Stankovic, Cosmological solutions of a nonlocal square-root gravity, Phys. Lett. B 797 (2019) 134848 [arXiv:1906.07560 [gr-qc]].
[20] E. Belgacem, Y. Dirian, S. Foffa and M. Maggiore, Nonlocal gravity. Conceptual aspects and cosmological predictions, JCAP 03 (2018) 002 [arXiv:1712.07066 [hep-th]].
[21] A. O. Barvinsky, Dark energy and dark matter from nonlocal ghost-free gravity theory, Phys. Lett. B 710 (2012) 12.
[22] I. Dimitrijevic, B. Dragovich, J. Grujic and Z. Rakic, On modified gravity, Springer Proc. Mathematics & Statistics 36 (2013) 251–259 [arXiv:1202.2352 [hep-th]].
[23] I. Dimitrijevic, B. Dragovich, J. Grujic and Z. Rakic, New cosmological solutions in nonlocal modified gravity, Rom. J. Phys. 58 (2013) 550–559 [arXiv:1302.2794 [gr-qc]].
[24] I. Dimitrijevic, B. Dragovich, J. Grujic and Z. Rakic, A new model of nonlocal modified gravity, Publications de l'Institut Mathematique 94 (108) (2013) 187–196.
[25] I. Dimitrijevic, B. Dragovich, J. Grujic and Z. Rakic, Some power-law cosmological solutions in nonlocal modified gravity, Springer Proc. Mathematics & Statistics 111 (2014) 241–250.
[26] I. Dimitrijevic, B. Dragovich, J. Grujic, A. S. Koshelev and Z. Rakic, Cosmology of non-local f(R) gravity, Filomat 33 (2019) 1163 [arXiv:1509.04254 [hep-th]].
[27] I. Dimitrijevic, B. Dragovich, J. Stankovic, A. S. Koshelev and Z. Rakic, On nonlocal modified gravity and its cosmological solutions, Springer Proc. Mathematics & Statistics 191 (2016) 35–51 [arXiv:1701.02090 [hep-th]].
[28] I. Dimitrijevic, B. Dragovich, J. Grujic and Z. Rakic, Some cosmological solutions of a nonlocal modified gravity, Filomat 29 (2015) 619 [arXiv:1508.05583 [hep-th]].
[29] I. Dimitrijevic, Cosmological solutions in modified gravity with monomial nonlocality, Appl. Math. Comput. 285 (2016) 195.
[30] I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, Variations of infinite derivative modified gravity, Springer Proc. Mathematics & Statistics 263 (2018) 91.
[31] I. Dimitrijevic, B. Dragovich, A. S. Koshelev, Z. Rakic and J. Stankovic, Some cosmological solutions of a new nonlocal gravity model, Symmetry 12 (2020) 917 [arXiv:2006.16041 [gr-qc]].
[32] I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, New cosmological solutions of a nonlocal gravity model, Symmetry 14 (2022) 3 [arXiv:2112.06312 [gr-qc]].
[33] I. Dimitrijevic, Some exact anisotropic cosmological solutions of a simple nonlocal de Sitter gravity, to be published in Intern. J. Mod. Phys. A (2023).
[34] I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, On the Schwarzschild-de Sitter metric of nonlocal de Sitter gravity, to be published in Filomat [arXiv:2212.13896 [gr-qc]] (2023).
[35] B. Dragovich, A. Yu. Khrennikov, S. V. Kozyrev, I. V. Volovich and E. I. Zelenov, p-Adic mathematical physics: the first 30 years, p-Adic Numb. Ultrametric Anal. Appl. 9 (2017) 87–121 [arXiv:1705.04758 [math-ph]].
[36] B. Dragovich, p-Adic and adelic cosmology: p-adic origin of dark energy and dark matter, AIP Conference Proceedings 826 (2006) 25–42.
[37] B. Dragovich, A p-adic matter in a closed universe, Symmetry 14 (2022) 73 [arXiv:2201.02200 [hep-th]].
[38] I. Ya. Aref'eva and I. V. Volovich, Cosmological daemon, JHEP 2011 (2011) 102 [arXiv:1103.0273 [hep-th]].
[39] A. Koshelev and S. Yu. Vernov, Analysis of scalar perturbations in cosmological models with a non-local scalar field, Class. Quantum Grav. 28 (2011) 085019.
http://arxiv.org/abs/2312.16673v1
{ "authors": [ "Ivan Dimitrijevic", "Branko Dragovich", "Zoran Rakic", "Jelena Stankovic" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20231227183542", "title": "On a nonlocal de Sitter gravity" }
Eigenvalues and factors: a survey [This work was supported by the National Natural Science Foundation of China (Grant Nos. 12271162, 12301454, 12271425), Natural Science Foundation of Shanghai (Nos. 22ZR1416300 and 23JC1401500), the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning (No. TP2022031), and NRF-2021K2A9A2A1110161711.]

Dandan Fan^{a,b}, Huiqiu Lin^a [Corresponding author; Email addresses: [email protected] (D. Fan), [email protected] (H. Lin), [email protected] (H. Lu), [email protected] (O. S).], Hongliang Lu^c, Suil O^d

^a School of Mathematics, East China University of Science and Technology, Shanghai 200237, China
^b College of Mathematics and Physics, Xinjiang Agricultural University, Urumqi, Xinjiang 830052, China
^c School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
^d Department of Applied Mathematics and Statistics, The State University of New York Korea, Incheon, 21985, Republic of Korea

Abstract. A factor of a graph is a spanning subgraph satisfying some given conditions. An early survey of factors can be traced back to Akiyama and Kano [J. Graph Theory, 1985, 9: 1-42], in which they described the characterization of factors in (bipartite) graphs and digraphs, respectively. Soon after, Kouider and Vestergaard summarized the findings related to connected factors [Graphs Combin., 2005, 21(1): 1-26]. Plummer extended the aforementioned research by providing a comprehensive overview of progress made in the study of graph factors and factorization from 1985 to 2003 [Discrete Math., 2007, 7-8: 791-821]. In this paper, we aim to summarize the relevant results regarding factors from the perspective of eigenvalues.

Keywords: eigenvalue; factor; perfect matching; Hamiltonian cycle.

AMS Classification: 05C50

§ INTRODUCTION

The initial study of factors is due to the Danish mathematician Petersen in 1891, who proved that every graph of even degrees can be decomposed into a union of edge-disjoint 2-factors. Additionally, Petersen demonstrated that every 2-connected cubic graph possesses a 1-factor. These two results can be regarded as precursors of factor theory. Based on the properties of the required spanning subgraphs, factor problems can be divided into two classes: degree-constrained factors and component factors. Degree-constrained factors refer to factors in which the degrees of all vertices lie within prescribed bounds. Common types of degree-constrained factors include odd factors, even factors, k-factors, [a,b]-factors, and so on. These two classes are not completely disjoint. For instance, a 1-factor in a graph can be regarded as a spanning subgraph in which every vertex has degree 1, or as a spanning subgraph in which each component is a path P_2. Factor problems may also overlap with other graph theory problems, such as the Hamiltonian cycle problem and the subgraph packing problem.

For a graph G, let A(G) denote the adjacency matrix of G and let λ_i(G) denote the i-th largest eigenvalue of A(G). In particular, the largest eigenvalue of A(G), denoted by ρ(G), is called the spectral radius of G. The Laplacian matrix of G is defined as L(G) = D(G) − A(G), where D(G) is the diagonal matrix of vertex degrees of G. The Laplacian matrix is positive semidefinite, and we order its eigenvalues as μ_1 ≥ μ_2 ≥ … ≥ μ_{n−1} ≥ μ_n = 0. In this paper, we only consider simple graphs unless otherwise stated.

The paper is organized as follows. In Section 2, we provide an overview of eigenvalue conditions, specifically highlighting the spectral radius and the third largest eigenvalue, which guarantee the existence of a degree-constrained factor in graphs. In Section 3, we summarize relevant eigenvalue results concerning the existence of component factors such as path factors, star factors, Hamiltonian paths and cycles, and k-trees. In Section 4, we conclude with eigenvalue results on the factor packing problem, with a specific focus on 1-factors and edge-disjoint spanning trees. Moreover, we list some open problems at the end.

§ EIGENVALUES AND DEGREE-CONSTRAINED FACTORS

In this section, we focus on eigenvalue conditions for the existence of degree-constrained factors, which include many well-known factors, such as 1-factors, k-factors, [a,b]-factors, and so on.

§.§ Matchings

§.§.§ Matchings and perfect matchings

A matching in a graph is a set of pairwise nonadjacent edges. The matching number β(G) of a graph G is the cardinality of a maximum matching of G. Erdős and Gallai <cit.> determined the maximum number of edges in a graph of order n with given matching number. Let G_1 ∇ G_2 be the graph obtained from the disjoint union G_1 ∪ G_2 by adding all edges between G_1 and G_2. Denote by e(G) the number of edges in G.

Suppose that G is a graph of order n with matching number β. If n > 2β, then

e(G) ≤ max{ \binom{2β+1}{2}, \binom{β}{2} + β(n−β) }.

If 2β+1 ≤ n < (5β+3)/2, then K_{2β+1} ∪ (n−2β−1)K_1 is the unique extremal graph; if n = (5β+3)/2, then there are two extremal graphs, K_{2β+1} ∪ (n−2β−1)K_1 and K_β ∇ (n−β)K_1; if n > (5β+3)/2, then K_β ∇ (n−β)K_1 is the unique extremal graph.
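These extremal graphs are easy to examine computationally. A small networkx sketch of ours builds both candidates, confirms that each has matching number β, and compares their sizes against the Erdős–Gallai bound:

import networkx as nx
from math import comb

def extremal_pair(n, beta):
    G1 = nx.complete_graph(2*beta + 1)                   # K_{2β+1} ∪ (n-2β-1)K_1
    G1.add_nodes_from(range(2*beta + 1, n))
    G2 = nx.complete_graph(beta)                         # K_β ∇ (n-β)K_1
    G2.add_edges_from((u, v) for u in range(beta) for v in range(beta, n))
    return G1, G2

n, beta = 12, 3
bound = max(comb(2*beta + 1, 2), comb(beta, 2) + beta*(n - beta))
for G in extremal_pair(n, beta):
    nu = len(nx.max_weight_matching(G, maxcardinality=True))
    print(G.number_of_edges(), nu, bound)
# here n > (5β+3)/2, and indeed only K_β ∇ (n-β)K_1 attains the bound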
Factor problems may also overlap with other graph theory problems, such as Hamiltonian cycle problem and subgraph packing problem. For a graph G, let A(G) denotethe adjacency matrix of G and let λ_i(G) denote the ith largest eigenvalue of A(G). Particularly, the largest eigenvalues of A(G), denoted by ρ(G), is called the spectral radius of G. The Laplacian matrix of G is defined as L(G)=D(G)-A(G), where D(G) is the diagonal matrix of vertex degrees of G. The Laplacian matrix is positive semidefinite and we order its eigenvalues as μ_1≥μ_2≥…≥μ_n-1≥μ_n=0.In this paper, we only consider simply graphs unless otherwise stated. The paper is organized as follows. In section 2, we will provide an overview of eigenvalue conditions, specifically highlighting the spectral radius and third largest eigenvalue, which guarantee the existence of a degree-constrained factor in graphs. In section 3, we summarize relevant eigenvalue results concerning the existence of component factors such as path factors, star factors, Hamiltonian (path) cycles, and k-trees. In section 4, we conclude the eigenvalue and factor packing problem, with a specific focus on 1-factors and edge-disjoint spanning trees. Moreover, we list some problems in the end.§ EIGENVALUES AND DEGREE-CONSTRAINED FACTORSIn this section, we focus on the eigenvalue conditions for the existence of degree-constrainted factors, which include many well-known factors, such as 1-factors, k-factors and [a,b]-factors, and so on.§.§ Matchings§.§.§ Matchings and perfect matchingsA matching in a graph is a set of pairwise nonadjacent edges. The matching number β(G) of a graph G is the cardinality of a maximum matching of G. Erdős and Gallai<cit.> determined the maximum number of edges in a graph of order n with given matching number. Let G_1∇ G_2 be the graph obtained from the disjoint union G_1∪ G_2 by adding all edges between G_1 and G_2. Denote by e(G) the number of edges in G.Suppose that G is a graph of order n with matching number β. If n > 2β, thene(G) ≤max{2β+1 2, β 2+β(n-β)}.If 2 β+1 ≤ n<(5β+3)/2, then K_2β+1∪ (n-2β-1)K_1 is the unique extremal graph; if n=(5β+3)/2, then there are two extremal graphs K_2β+1∪ (n-2β-1)K_1 and K_β∇ (n-β)K_1. If n>(5β+3)/2, then K_β∇ (n-β)K_1 is the unique extremal graph.Let o(G) be the number of odd components in a graph G. Berge <cit.> gave the formula, which is called by Berge-Tutte Formula.For a graph G of order n,β(G)=1/2(n-max_S⊆ V(G){o(G-S)-|S|}).Brouwer andGregory<cit.> utilized the (k+1)th largest adjacency eigenvalue to provide a lower bound on the matching number in a regular graph. For a positive integer r, a graph is r-regular if every vertex has the degree r.Let G be a connected r-regular graph of even order n. Suppose that k>0 is an integer such that n≡ k 2. Ifr-λ_k+1(G)> {[0.1457; 1-3/r+1- 1/(r+1)(r+2);1-4/r+2- 1/(r+2)^2 ].then β(G)≥ (n-k)/2+1. Based on Berge-Tutte Formula, Feng, Yu and Zhang<cit.> provided a spectral radius condition with given matching number.Let G be a graph of order n with matching number β. * If n=2 β or 2 β+1, then ρ(G) ≤ρ(K_n), with equality if and only if G≅ K_n.* If 2 β+2 ≤ n<3 β+2, then ρ(G) ≤ 2 β, with equality if and only if G≅ K_2 β+1∪ (n-2 β-1)K_1.* If n=3 β+2, then ρ(G) ≤ 2 β, with equality if and only if G≅ K_β∇ (n-β)K_1 or G≅ K_2 β+1∪ (n-2 β-1)K_1.* If n>3 β+2, then ρ(G) ≤ (β-1+√((β-1)^2+4 β(n-β)))/2, with equality if and only if G≅ K_β∇ (n-β)K_1. 
Recognizing that the graphs considered by Feng, Yu and Zhang may not necessarily be connected, Chen and Lu <cit.> undertook the following research effort specifically focusing on connected graphs.

Let G be a connected graph of order n with matching number β.
* If n = 2β or 2β+1, then ρ(G) ≤ ρ(K_n), with equality if and only if G ≅ K_n.
* If 2β+2 ≤ n ≤ 3β−1, then ρ(G) ≤ ρ(K_1 ∇ (K_{2β−1} ∪ (n−2β)K_1)), with equality if and only if G ≅ K_1 ∇ (K_{2β−1} ∪ (n−2β)K_1).
* If n ≥ 3β, then ρ(G) ≤ (β−1+√((β−1)^2+4β(n−β)))/2, with equality if and only if G ≅ K_β ∇ (n−β)K_1.

Kim, O, Sim and Shin <cit.> proved an upper bound on the spectral radius of an n-vertex connected graph G with β(G) ≤ (n−k)/2.

Let n and k be two positive integers, and let G be a connected graph of order n with matching number β ≤ (n−k)/2, where 2 ≤ k ≤ n−2 and n ≡ k (mod 2). Then we have the following conclusions.
* If n ≤ 3k, then ρ(G) ≤ ρ(K_{(n−k)/2} ∇ ((n+k)/2)K_1), with equality if and only if G ≅ K_{(n−k)/2} ∇ ((n+k)/2)K_1.
* If n ≥ 3k+2, then ρ(G) ≤ ρ(K_1 ∇ (K_{n−k−1} ∪ kK_1)), with equality if and only if G ≅ K_1 ∇ (K_{n−k−1} ∪ kK_1).

For β < ⌊n/2⌋, Zhang, Wang and Wang <cit.> utilized the Berge–Tutte Formula to obtain that δ(G) ≤ β. Taking the minimum degree into consideration, they characterized the extremal graphs having maximum spectral radius with fixed matching number. For a graph H, let K_1 ∇_δ K_a ∇ H be the graph obtained from K_a ∇ H and a new vertex w by connecting w to (any) δ vertices of the part K_a in K_a ∇ H, where 1 ≤ δ ≤ a. Denote by δ(G) the minimum degree of a graph G. Clearly, δ(K_1 ∇_δ K_a ∇ H) = δ.

Suppose that n, k, δ are three positive integers, where n ≥ 29, 2 ≤ k ≤ n−2, n ≡ k (mod 2), and 1 ≤ δ ≤ (n−k)/2. Let G be a connected graph of order n with minimum degree δ and matching number β ≤ (n−k)/2. Then

ρ(G) ≤ max{ ρ(K_δ ∇ (K_{n+1−2δ−k} ∪ (δ+k−1)K_1)), ρ(K_1 ∇_δ K_{(n−k)/2} ∇ ((n+k)/2 − 1)K_1) },

with equality if and only if G ≅ K_δ ∇ (K_{n+1−2δ−k} ∪ (δ+k−1)K_1) or G ≅ K_1 ∇_δ K_{(n−k)/2} ∇ ((n+k)/2 − 1)K_1.

Let

Δ(n,t,k) = (n^2−k^2)^2 − 2(n^2−k^2)(k+t−1)(n+k+10t−4) + 16t(k+t−1)^2(n+4t−k−2),

where n, k and t are three positive integers.

Suppose that n, t and k are three positive integers, where 2 ≤ k ≤ n−2, 1 ≤ t ≤ (n−k)/2 and n ≡ k (mod 2). Let G be a t-connected graph of order n with matching number β ≤ (n−k)/2. Then we have the following conclusions.
* If Δ(n,t,k) > 0, then ρ(G) ≤ ρ(K_t ∇ (K_{n+1−2t−k} ∪ (t+k−1)K_1)), with equality if and only if G ≅ K_t ∇ (K_{n+1−2t−k} ∪ (t+k−1)K_1).
* If Δ(n,t,k) = 0, then ρ(G) ≤ ρ(K_t ∇ (K_{n+1−2t−k} ∪ (t+k−1)K_1)) = ρ(K_{(n−k)/2} ∇ ((n+k)/2)K_1), with equality if and only if G ≅ K_t ∇ (K_{n+1−2t−k} ∪ (t+k−1)K_1) or G ≅ K_{(n−k)/2} ∇ ((n+k)/2)K_1.
* If Δ(n,t,k) < 0, then ρ(G) ≤ ρ(K_{(n−k)/2} ∇ ((n+k)/2)K_1), with equality if and only if G ≅ K_{(n−k)/2} ∇ ((n+k)/2)K_1.

Let ℱ be a family of graphs. A graph G is called ℱ-free if it does not contain any graph in ℱ as a subgraph. The classical Brualdi-Solheid-Turán type problems consider the maximum spectral radius of an n-vertex ℱ-free graph. Let T_r(n) be the complete r-partite graph of order n whose partition sets have sizes as equal as possible. In 2007, Nikiforov <cit.> characterized the extremal graph among all K_{r+1}-free graphs.

If G is a K_{r+1}-free graph of order n, then ρ(G) ≤ ρ(T_r(n)), with equality if and only if G ≅ T_r(n).

Let ex(n, ℱ) and spex(n, ℱ) be the maximum number of edges and the maximum value of the spectral radius among all ℱ-free graphs of order n, respectively. In the meantime, Feng, Yu and Zhang <cit.> determined the exact value of spex(n, M_{s+1}), where M_{s+1} denotes a matching with s+1 edges. Alon and Frankl <cit.> provided further research by showing that ex(n, {K_{k+1}, M_{s+1}}) = max{ e(T_k(2s+1)), e(G_k(n,s)) }, where G_k(n,s) = T_{k−1}(s) ∇ (n−s)K_1. Later, Wang, Hou and Ma <cit.> determined the exact value of spex(n, {K_{k+1}, M_{s+1}}).

For n ≥ 4s^2 + 9s and k ≥ 2, spex(n, {K_{k+1}, M_{s+1}}) = ρ(G_k(n,s)).
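The extremal graph G_k(n,s) is easy to inspect numerically. A short sketch of ours builds G_k(n,s) with networkx, confirms that it is K_{k+1}-free and M_{s+1}-free, and prints its spectral radius:

import networkx as nx
import numpy as np

def G_kns(k, s, n):
    # G_k(n,s) = T_{k-1}(s) joined to an independent set of n-s vertices
    G = nx.turan_graph(s, k - 1)
    G.add_edges_from((u, v) for u in range(s) for v in range(s, n))
    return G

k, s, n = 3, 4, 12
G = G_kns(k, s, n)
clique = max(len(c) for c in nx.find_cliques(G))           # equals k, so K_{k+1}-free
nu = len(nx.max_weight_matching(G, maxcardinality=True))   # equals s, so M_{s+1}-free
rho = max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
print(clique, nu, rho)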
A perfect matching is a matching that covers all vertices of the graph. It is well known that β(G) = n/2 if G has a perfect matching. The perfect matching, as the simplest degree-constrained factor of a graph, has been widely investigated. Obviously, a perfect matching is a 1-factor. Tutte <cit.> gave the characterization of 1-factors in 1947, which remains one of the cornerstone results in factor theory. Recall that o(G) is the number of odd components of a graph G.

A graph G has a perfect matching if and only if

o(G−S) ≤ |S|

for all S ⊆ V(G).

Utilizing this theorem, Brouwer and Haemers <cit.> described, in 2005, regular graphs containing a perfect matching in terms of the third largest eigenvalue.

Let G be a connected r-regular graph of even order n. If

λ_3(G) ≤ r − 1 + 3/(r+1) if r is even,
λ_3(G) ≤ r − 1 + 3/(r+2) if r is odd,

then G has a perfect matching.

In <cit.>, for odd r, the upper bound in Theorem <ref> can be improved to λ_3 ≤ r − 1 + 4/(r+2). Later, Cioabă, Gregory and Haemers <cit.> further improved the above upper bounds as follows.

Let G be a connected r-regular graph of even order n. If

λ_3(G) < θ = 2.85577… if r = 3,
λ_3(G) < (r − 2 + √(r^2 + 12))/2 if r is even,
λ_3(G) < (r − 3 + √((r+1)^2 + 16))/2 if r ≥ 5 is odd,

where θ is the largest root of x^3 − x^2 − 6x + 2 = 0, then G has a perfect matching.

The graphic parameters play an important role in the study of factors. By imposing the minimum degree of a graph as a parameter, Liu, Liu and Feng <cit.> extended the above results to graphs that are not necessarily regular.

Let G be a connected graph of even order n with minimum degree δ ≥ 2. If n ≥ max{7 + 7δ + 2δ^2, δ^3 + 3δ^2 + 2δ} and

ρ(G) ≥ ρ(K_δ ∇ (K_{n−2δ−1} ∪ (δ+1)K_1)),

then G has a perfect matching unless G ≅ K_δ ∇ (K_{n−2δ−1} ∪ (δ+1)K_1).

In 2021, O <cit.> provided an edge condition to guarantee the existence of a perfect matching in a connected graph for general n.

Let G be a connected graph of even order n.
* For n ≥ 10 or n = 4, if e(G) > \binom{n−2}{2} + 2, then G has a perfect matching.
* For n = 6 or n = 8, if e(G) > 9 or e(G) > 18, respectively, then G has a perfect matching.

Observe that none of K_1 ∇ (K_{n−3} ∪ 2K_1), K_2 ∇ 4K_1 and K_3 ∇ 5K_1 contains a perfect matching. Combining this with e(K_1 ∇ (K_{n−3} ∪ 2K_1)) = \binom{n−2}{2} + 2 for n ≥ 4, e(K_2 ∇ 4K_1) = 9 and e(K_3 ∇ 5K_1) = 18, it is easy to see that the bounds in Theorem <ref> are sharp. Moreover, O <cit.> obtained a similar result from the perspective of the spectral radius.

Let G be a connected graph of even order n.
* For n ≥ 8 or n = 4, if ρ(G) > θ(n), where θ(n) is the largest root of x^3 − (n−4)x^2 − (n−1)x + 2(n−4) = 0, then G has a perfect matching.
* For n = 6, if ρ(G) > (1+√33)/2, then G has a perfect matching.

One can verify that ρ(K_2 ∇ 4K_1) = (1+√33)/2 and ρ(K_1 ∇ (K_{n−3} ∪ 2K_1)) = θ(n). This implies that the conditions in Theorem <ref> are best possible.

For matchings in bipartite graphs, the König-Hall Theorem, also known as Hall's Theorem or the Marriage Theorem, was established by König (1931) and Hall (1935). Due to its wide applications to many graph theory problems and to other branches of mathematics, the König-Hall Theorem remains one of the most influential graph-theoretic results. For any v ∈ V(G), let N_G(v) denote the neighborhood of v in G, and for S ⊆ V(G), let N_G(S) = ⋃_{v∈S} N_G(v).

A bipartite graph G = (X, Y) has a perfect matching if and only if |X| = |Y| and

|N_G(S)| ≥ |S|

for all S ⊆ X.
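Hall's condition is easy to test exhaustively on small instances. A sketch of ours (the helper name hall_violator is our own) compares the brute-force Hall check with a maximum matching computed by networkx, before and after deleting two edges so that a violating set appears:

import networkx as nx
from itertools import combinations

def hall_violator(G, X):
    # brute-force search for S ⊆ X with |N(S)| < |S| (fine only for small X)
    for size in range(1, len(X) + 1):
        for S in combinations(X, size):
            N = set().union(*(set(G[v]) for v in S))
            if len(N) < len(S):
                return S
    return None

X, Y = list(range(5)), list(range(5, 10))
G = nx.Graph()
G.add_nodes_from(X); G.add_nodes_from(Y)
G.add_edges_from([(0, 5), (0, 6), (1, 5), (1, 6), (2, 7), (2, 8), (3, 7), (4, 9)])
M = nx.bipartite.maximum_matching(G, top_nodes=X)
print(len(M)//2, hall_violator(G, X))      # 5, None: perfect matching, Hall holds
G.remove_edges_from([(0, 6), (1, 6)])
M = nx.bipartite.maximum_matching(G, top_nodes=X)
print(len(M)//2, hall_violator(G, X))      # 4, (0, 1): N({0,1}) = {5}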
A bipartite graph G = (X, Y) is called balanced if |X| = |Y|. Motivated by Hall's condition, Fan, Goryainov, Huang and Lin <cit.> gave a spectral radius condition for the existence of a perfect matching in a balanced bipartite graph with fixed minimum degree. Given two bipartite graphs G_1 = (X_1, Y_1) and G_2 = (X_2, Y_2), let G_1 ∇_1 G_2 denote the graph obtained from G_1 ∪ G_2 by adding all possible edges between X_2 and Y_1.

Let G be a balanced bipartite graph of order 2n with minimum degree δ ≥ 1. If

ρ(G) ≥ ρ(K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}),

then G has a perfect matching unless G ≅ K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}.

By a simple computation, it is easy to find that

ρ(K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}) = √(2n^2 − 2(δ+1)n + 2δ(δ+1) + 2f(n,δ)) / 2,

where f(n,δ) = √(n^4 − 2(δ+1)n^3 + (1−δ^2)n^2 + 2δ(3δ^2+4δ+1)n − 3δ^2(δ+1)^2). Considering that the expression for ρ(K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}) is complicated, and the fact that ρ(K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}) ≥ ρ(K_{n−δ−1,n} ∪ (δ+1)K_1) = √(n(n−δ−1)), they provided the following result, which improves Theorem <ref> for sufficiently large n.

Let G be a balanced bipartite graph of order 2n with minimum degree δ ≥ 1. If n ≥ (1/2)δ^3 + (1/2)δ^2 + δ + 4 and

ρ(G) ≥ √(n(n−δ−1)),

then G has a perfect matching unless G ≅ K_{δ+1,δ} ∇_1 K_{n−δ−1,n−δ}.

§.§.§ Rainbow matchings

Let 𝒢 = {G_1, …, G_t} be a family of not necessarily distinct graphs with common vertex set V. We say that a graph H with vertex set V is 𝒢-rainbow if there exists a bijection ϕ: E(H) → [t] such that e ∈ E(G_{ϕ(e)}) for each e ∈ E(H). Guo, Lu, Ma and Ma <cit.> generalized the results in <cit.> to the rainbow version.

Suppose that n and m are two positive integers such that n ≥ 2m+2. Let 𝒢 = {G_1, G_2, …, G_{m+1}} be a family of graphs on the same vertex set [n]. If

ρ(G_i) ≥ max{ 2m, (1/2)(m−1+√((m−1)^2+4m(n−m))) },

then
* for 2m+2 ≤ n < 3m+2, 𝒢 admits a rainbow matching unless G_1 = … = G_{m+1} ≅ K_{2m+1} ∪ (n−2m−1)K_1;
* for n = 3m+2, 𝒢 admits a rainbow matching unless G_1 = … = G_{m+1} ≅ K_m ∇ (n−m)K_1 or G_1 = … = G_{m+1} ≅ K_{2m+1} ∪ (n−2m−1)K_1;
* for n > 3m+2, 𝒢 admits a rainbow matching unless G_1 = … = G_{m+1} ≅ K_m ∇ (n−m)K_1.

§.§.§ Fractional matchings and fractional perfect matchings

A fractional matching of a graph G is a function f giving each edge a number in [0,1] such that ∑_{e ∈ Γ(v)} f(e) ≤ 1 for each v ∈ V(G), where Γ(v) is the set of edges incident to v. The fractional matching number of G, denoted by α'_*(G), is the maximum of ∑_{e ∈ E(G)} f(e) over all fractional matchings f. Let i(G) denote the number of isolated vertices of G. By replacing o(G−S) with i(G−S), Scheinerman and Ullman <cit.> provided a fractional version of the Berge–Tutte Formula, namely

α'_*(G) = min_{S ⊆ V(G)} ( n − i(G−S) + |S| ) / 2.

In <cit.>, O established the upper bound ρ(G) < δ√(1 + 2k/(n−k)) on the spectral radius of a graph G of order n with minimum degree δ, which guarantees that its fractional matching number satisfies α'_*(G) > (n−k)/2, where k is a real number with 0 < k < n. In the same paper, O further investigated the relationship between ρ(G), the minimum degree δ and α'_*(G).
For two positive integers δ and k, let ℋ(δ,k) be the family of connected bipartite graphs H with bipartition (A, B) such that |A| = |B| + k, d_H(v) = δ for every v ∈ A, and the degrees of the vertices in B are equal.

If G is a graph of order n with minimum degree δ, then

α'_*(G) ≥ nδ^2 / (ρ(G)^2 + δ^2),

with equality if and only if k = n(ρ(G)^2 − δ^2)/(ρ(G)^2 + δ^2) is an integer and G is an element of ℋ(δ,k).

Luo, Liu and Ao <cit.> considered a lower bound on the spectral radius of a graph G that guarantees α'_*(G) > (n−k)/2.

Let k and n be two positive integers with k < n, and let G be a graph of order n ≥ max{6δ + 5k + 1, 5δ + k^2 + 4k + 1} with minimum degree δ. If

ρ(G) ≥ ρ(K_δ ∇ (K_{n−2δ−k} ∪ (δ+k)K_1)),

then α'_*(G) > (n−k)/2 unless G ≅ K_δ ∇ (K_{n−2δ−k} ∪ (δ+k)K_1).

A fractional perfect matching of an n-vertex graph G is a fractional matching f with ∑_{e ∈ E(G)} f(e) = n/2.

A graph G has a fractional perfect matching if and only if

i(G−S) ≤ |S|

for all subsets S ⊆ V(G).

Given a graph G without a fractional perfect matching, by Theorem <ref> there exists a vertex set S_0 ⊆ V(G) such that i(G−S_0) > |S_0|. Along this line, Fan, Lin and Lu <cit.> proved the following result.

Suppose that G is a connected graph of order n with minimum degree δ. If n ≥ 8δ + 4 and

ρ(G) ≥ ρ(K_δ ∇ (K_{n−2δ−1} ∪ (δ+1)K_1)),

then G has a fractional perfect matching unless G ≅ K_δ ∇ (K_{n−2δ−1} ∪ (δ+1)K_1).

In <cit.>, spectral radius conditions for the existence of a fractional perfect matching in a graph for general n were also provided.

§.§ [1,b]-factors

An odd [1,b]-factor of a graph G is a spanning subgraph H such that d_H(v) is odd and 1 ≤ d_H(v) ≤ b for each v ∈ V(G). As an extension of Tutte's 1-Factor Theorem, the well-known sufficient and necessary condition for the existence of an odd [1,b]-factor was established by Amahashi.

Let G be a graph and let b be a positive odd integer. Then G has an odd [1,b]-factor if and only if

o(G−S) ≤ b|S|

for all S ⊆ V(G).

For X, Y ⊆ V(G), we denote by E_G(X,Y) the set of edges with one endpoint in X and one endpoint in Y, and set e_G(X,Y) = |E_G(X,Y)|. Theorem <ref> implies that if an r-regular graph G has no odd [1,b]-factor, then there exists a subset S ⊆ V(G) such that o(G−S) > b|S|. By counting the number of edges between S and G−S, Lu, Wu and Yang <cit.> showed that G−S has at least three odd components G_1, G_2, G_3 such that e_G(V(G_i), S) < ⌈r/b⌉ for 1 ≤ i ≤ 3. They then gave a sufficient condition for the existence of an odd [1,b]-factor in a graph in terms of the third largest eigenvalue.

Let G be a connected r-regular graph of even order, where r ≥ 3. If

λ_3(G) ≤ r − (⌈r/b⌉ − 2)/(r+1) + 1/((r+1)(r+2)) if r is even and ⌈r/b⌉ is even,
λ_3(G) ≤ r − (⌈r/b⌉ − 1)/(r+1) + 1/((r+1)(r+2)) if r is even and ⌈r/b⌉ is odd,
λ_3(G) ≤ r − (⌈r/b⌉ − 1)/(r+2) + 1/(r+2)^2 if r is odd and ⌈r/b⌉ is even,
λ_3(G) ≤ r − (⌈r/b⌉ − 2)/(r+2) + 1/(r+2)^2 if r is odd and ⌈r/b⌉ is odd,

then G has an odd [1,b]-factor.

They found that the upper bounds above are attained within the family ℱ_{r,b}, where ℱ_{r,b} is a family of possible components depending on r and b. Later, Kim, O, Park and Ree <cit.> improved the spectral conditions of Theorem <ref> by constructing a graph H_{r,η} in ℱ_{r,b} with λ_1(H_{r,η}) = ρ(r,b). Let

ε = 2 if r and ⌈r/b⌉ have the same parity, and ε = 1 otherwise, and set η = ⌈r/b⌉ − ε.

Denote by \overline{G} the complement of a graph G, and by C_n the cycle on n vertices. Define

H_{r,η} = K_{r+1−η} ∇ \overline{(η/2)K_2} if r is even, and H_{r,η} = \overline{C_η} ∇ \overline{((r+2−η)/2)K_2} if r is odd.

For odd r, consider the vertex partition {V(\overline{C_η}), V(\overline{((r+2−η)/2)K_2})} of H_{r,η}; the quotient matrix of A(H_{r,η}) is

B = ( [ η−3, r+2−η ; η, r−η ] ).

The characteristic polynomial of B is p(x) = (x−η+3)(x−r+η) − (r+2−η)η. Since the vertex partition is equitable, ρ(H_{r,η}) = λ_1(B) = (r−3+√((r+3)^2−4η))/2. When r is even, a similar result can be obtained.
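The quotient-matrix computation above can be verified numerically. A small sketch of ours builds H_{r,η} for an odd r, computes its spectral radius directly, and compares it with λ_1(B) and with the closed-form expression:

import numpy as np
import networkx as nx

r, b = 7, 2
m = int(np.ceil(r/b))                          # here m = 4; r odd, m even, so eps = 1
eps = 2 if (r % 2) == (m % 2) else 1
eta = m - eps                                  # eta = 3, odd as required for odd r
part1 = nx.complement(nx.cycle_graph(eta))     # complement of C_eta
half = (r + 2 - eta)//2
part2 = nx.complement(nx.disjoint_union_all([nx.complete_graph(2)]*half))
H = nx.disjoint_union(part1, part2)            # second part relabelled to eta, eta+1, ...
H.add_edges_from((u, v) for u in range(eta) for v in range(eta, eta + r + 2 - eta))
rho_H = max(np.linalg.eigvalsh(nx.to_numpy_array(H)))
B = np.array([[eta - 3.0, r + 2 - eta], [eta, r - eta]])
print(rho_H, max(np.linalg.eigvals(B).real), (r - 3 + np.sqrt((r + 3)**2 - 4*eta))/2)
# all three values agree (≈ 6.69 for r = 7, b = 2)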
Let G be an r-regular graph of even order, where r≥ 3. If λ_3(G)< ρ(r,b), then G has an odd [1,b]-factor, where ρ(r,b)= {[ (r-2+√((r+2)^2-4(⌈ r/b⌉-2)))/2; (r-2+√((r+2)^2-4(⌈ r/b⌉-1)))/2; (r-3+√((r+3)^2-4(⌈ r/b⌉-2)))/2; (r-3+√((r+3)^2-4(⌈ r/b⌉-1)))/2, ]. again according to the parities of r and ⌈ r/b⌉.

Motivated by the work of Amahashi<cit.> and O<cit.>, Li and Miao<cit.> considered edge conditions for a connected graph to contain an odd [1,b]-factor. Let G be a connected graph of even order n and let b≤ n/2-1 be an odd integer.
* For n=4 or n ≥ 10, if e(G)>b+1+\binom{n-b-1}{2}, then G has an odd [1,b]-factor.
* For n=6, if e(G)>9, then G contains a 1-factor; if e(G)>5, then G has an odd [1,3]-factor.
* For n=8, if e(G)>18, then G contains a 1-factor; if e(G)>10, then G contains an odd [1,3]-factor; if e(G)>7, then G has an odd [1,5]-factor.
From the perspective of the spectral radius, they also gave the following result. Let G be a connected graph of even order n and let b≤ n/2-1 be an odd integer. Assume that θ(n) is the largest root of x^3+(b-n+3)x^2-(n-1)x-(b+1)(b-n+3)=0.
* For n=4 or n ≥ 8, if ρ(G)>θ(n), then G has an odd [1,b]-factor.
* For n=6, if ρ(G)>(1+√(33))/2, then G has a 1-factor; if ρ(G)>θ(6), then G has an odd [1,3]-factor.
Observe that K_1∇ (K_n-b-2∪ (b+1)K_1) contains no odd [1,b]-factor and K_2∇ 4K_1 contains no 1-factor. Moreover, ρ(K_1∇ (K_n-b-2∪ (b+1)K_1))=θ(n) and ρ(K_2∇ 4K_1)=(1+√(33))/2. Thus, the bounds in Theorem <ref> are best possible. Later, Fan, Lin and Lu<cit.> generalized this result by adding a minimum degree condition. Suppose that G is a connected graph of even order n≥max{4(b+1)δ+4, bδ^3+δ} with minimum degree δ. If ρ(G)≥ρ(K_δ∇ (K_n-(b+1)δ-1∪ (bδ+1)K_1)), then G has an odd [1,b]-factor unless G≅ K_δ∇ (K_n-(b+1)δ-1∪ (bδ+1)K_1).

The following fundamental theorem provides a necessary and sufficient condition for the existence of a [1,b]-factor for b≥ 2. Recall that i(H) is the number of isolated vertices of a graph H. Let G be a graph and let b≥ 2 be an integer. Then G has a [1,b]-factor if and only if i(G-S)≤ b|S| for every subset S⊆ V(G). By Theorem <ref>, Fan, Lin and Lu<cit.> obtained the following result. Suppose that G is a connected graph of order n with minimum degree δ. If n≥ 4(b+1)δ+4 and ρ(G)≥ρ(K_δ∇ (K_n-(b+1)δ-1∪ (bδ+1)K_1)), where b≥ 2, then G has a [1,b]-factor unless G≅ K_δ∇ (K_n-(b+1)δ-1∪ (bδ+1)K_1).

§.§ [a,b]-factors

An [a,b]-factor of a graph G is a spanning subgraph H such that a≤ d_H(v)≤ b for each v∈ V(G). A [k,k]-factor is called a k-factor. Tutte <cit.> obtained the well-known k-Factor Theorem in 1952. Let k≥ 1 be an integer and let G be a graph. Then G has a k-factor if and only if for all disjoint subsets S, T ⊆ V(G), δ_G(S,T)=k|S|+∑_x ∈ Td_G(x)-k|T|-e_G(S, T)-q_G(S, T) ≥ 0, where q_G(S, T) denotes the number of components C of G-(S ∪ T) such that k|V(C)|+e_G(V(C), T) ≡ 1 ( 2). Moreover, δ_G(S,T)≡ k|V(G)| ( 2).

In 1891, Petersen<cit.> demonstrated that every 2r-regular graph possesses a 2k-factor for 0 ≤ k ≤ r. One might naturally ask whether an even regular graph can have an odd factor, or an odd regular graph can have either an odd or an even factor. Based on Tutte's k-Factor Theorem, Lu<cit.> gave an answer to this question in terms of the third largest eigenvalue. Suppose that r and k are two integers such that 1 ≤ k<r. Let G be a connected r-regular graph of order n. Let m be an integer such that 1 ≤ m ≤ r and λ_3(G) ≤ r-(m-1)/(r+1)+1/((r+1)(r+2)). Let m^* ∈{m, m+1} be such that m^* ≡ 1 ( 2). If one of the following conditions holds, then G has a k-factor.
* r is even, k is odd, n is even, and r/m ≤ k ≤ r(1-1/m);
* r is odd, k is even, and k ≤ r(1-1/m^*);
* both r and k are odd and r/m^* ≤ k.
Using the eigenvalue interlacing theorem, Lu<cit.> slightly improved the above upper bounds (in terms of r and k) on the third largest eigenvalue. Let m be an integer with m^* ∈{m, m+1} and m^* ≡ 1 ( 2), and let G be a connected r-regular graph. Suppose that λ_3(G)<{[ (r-2+√((r+2)^2-4(m-1)))/2; (r-3+√((r+3)^2-4(m-1)))/2 ]. according to the parity of r. If one of the following conditions holds, then G has a k-factor.
* r is odd, k is even and k ≤ r(1-1/m^*);
* both r and k are odd and r/m^*≤ k.

By considering edge-connectivity, Gallai<cit.> showed that an h-edge-connected r-regular graph G contains a k-factor, depending on h and the parities of r and k. In another study, Gu<cit.> studied eigenvalue conditions for t-edge-connected regular graphs to have k-factors. Suppose that G is a t-edge-connected r-regular graph of order n. Let k be an integer with 1 ≤ k<r, and let t^'∈{t, t+1} be an even number and t^* ∈{t, t+1} be odd.
* r is even, k is odd and n is even. Let k̂=min{k, r-k} and m=⌈ r/k̂⌉. If r ≤k̂ t^', or, if r>k̂ t^' and λ_⌈ 2r/(r-k̂ t^')⌉(G)< (r-2+√((r+2)^2-4(m-2)))/2 if m is even, (r-2+√((r+2)^2-4(m-1)))/2 if m is odd, then G has a k-factor.
* r is odd and k is even. Let m=⌈ r/(r-k)⌉. If r ≤ (r-k)t^*, or, if r>(r-k)t^* and λ_⌈ 2r/(r-(r-k)t^*)⌉(G)< (r-3+√((r+3)^2-4(m-1)))/2 if m is even, (r-3+√((r+3)^2-4(m-2)))/2 if m is odd, then G has a k-factor.
* both r and k are odd. Let m=⌈ r/k⌉. If r ≤ kt^*, or, if r>kt^* and λ_⌈ 2r/(r-kt^*)⌉(G)< (r-3+√((r+3)^2-4(m-1)))/2 if m is even, (r-3+√((r+3)^2-4(m-2)))/2 if m is odd, then G has a k-factor.

Let G be a graph, and let g, f be two integer-valued functions defined on V(G). A spanning subgraph H is called a (g, f)-factor of G if g(v) ≤ d_H(v) ≤ f(v) for any v ∈ V(G). In 1970, Lovász<cit.> provided the following theorem, which generalizes the criteria for other factors, such as 1-factors, k-factors and [a,b]-factors. Let G be a graph and g, f be integer-valued functions defined on V(G) such that g(v) ≤ f(v) for any v ∈ V(G). Then G has a (g, f)-factor if and only if for all disjoint subsets S, T ⊆ V(G), ∑_s ∈ S f(s)+∑_t ∈ T(d_G(t)-g(t))-e_G(S, T)-q_G(S, T) ≥ 0, where q_G(S, T) denotes the number of components C of G-(S ∪ T) such that g(v)=f(v) for any v ∈ V(C) and ∑_v ∈ V(C) f(v)+e_G(V(C), T) ≡ 1 ( 2).

If f(x)≡ g(x) ( 2) for all x∈ V(G), a (g,f)-factor F with d_F(x)≡ f(x) ( 2) for all x∈ V(G) is called a parity (g,f)-factor. The following criterion for parity factors can be easily deduced from the Lovász (g,f)-Factor Theorem. Let G be a graph, and let g, f be integer-valued functions defined on V(G) such that g(v) ≤ f(v) and g(v) ≡ f(v) ( 2) for any v ∈ V(G). Then G has a parity (g, f)-factor if and only if for all disjoint subsets S, T ⊆ V(G), ∑_s ∈ S f(s)+∑_t ∈ T(d_G(t)-g(t))-e_G(S, T)-q_G(S, T) ≥ 0, where q_G(S, T) is the number of components C of G-(S ∪ T) such that ∑_v ∈ V(C) f(v)+e_G(V(C), T)≡ 1 ( 2).

When g(x)=a and f(x)=b for all x∈ V(G), an even (or odd) [a,b]-factor of a graph G is a spanning subgraph H such that d_H(v) is even (or odd) and a≤ d_H(v)≤ b for each v∈ V(G). In 2022, O<cit.> proved upper bounds (in terms of a, b and r) on certain eigenvalues (indexed in terms of a, b, r and h) of an h-edge-connected r-regular graph G that guarantee the existence of an even (or odd) [a,b]-factor. Denote r_ab=min{r-a,b}, and let η take one of the values ⌈ r/r_ab⌉-1, ⌈ r/r_ab⌉-2, ⌈ r/b⌉-1, ⌈ r/b⌉-2, ⌈ r/(r-a)⌉-1, ⌈ r/(r-a)⌉-2, according to the case and parity under consideration. Let ρ(r,a,b) be (r-2+√((r+2)^2-4η))/2, (r-3+√((r+3)^2-4η))/2, or θ, correspondingly, where θ is the largest root of x^3-(r-2)x^2-2rx+r-1=0.
Let r, a, b, h, h^', and h^* be positive integers such that r ≥ 3, a ≤ b<r, h ≤ r, h^'∈{h, h+1} is an even number, and h^* ∈{h, h+1} is an odd number. Suppose that G is an h-edge-connected r-regular graph of order n.
* For even r, odd a, b, and even n, if r ≤ r_ab h^', or if r>r_ab h^' and λ_⌈ 2r/(r-r_ab h^')⌉(G)<ρ(r, a, b), then G has an odd [a, b]-factor;
* For both odd r and odd a, b, if r ≤ bh^*, or, if r>bh^* and λ_⌈ 2r/(r-bh^*)⌉(G)<ρ(r, a, b), then G has an odd [a, b]-factor;
* For odd r and even a, b, if r ≤ (r-a)h^*, or, if r>(r-a)h^* and λ_⌈ 2r/(r-(r-a)h^*)⌉(G)<ρ(r, a, b), then G has an even [a, b]-factor.

Kim and O<cit.> extended the result in <cit.> to general factors, and investigated sufficient conditions for a graph to have a (g, f)-parity factor in terms of the minimum degree, the edge-connectivity and the eigenvalues. For positive integers r and h with r>h, let ρ(r, h)= (r-2+√((r+2)^2-8⌊ h/2⌋))/2 if r or h is even; (r-3+√((r+3)^2-4h))/2 if both r, h are odd and h ≥ 3; and μ if r is odd and h=1, where μ is the largest real root of x^3-(r-2)x^2-2rx+r-1=0. Suppose that G is an h-edge-connected graph and θ is a real number with 0<θ<1. Let g and f be integer-valued functions on V(G) such that g(v) ≤θ d_G(v) ≤ f(v) and g(v) ≡ f(v) ( 2) for all v ∈ V(G), and such that ∑_v ∈ V(G) f(v) is even. Let θ^*=min{θ, 1-θ}, and let h_e and h_o be the even and odd integers in {h, h+1}, respectively. If one of (a)-(e) holds, then G has a (g, f)-parity factor.
* i. h ≥ 1/θ^*, or ii. h<1/θ^* ≤δ(G) and λ_⌈ 2/(1-θ^* h)⌉<ρ(δ(G),⌈ 1/θ^*⌉-1).
* d_G(v) and f(v) are even for all v ∈ V(G).
* d_G(v) is even for all v ∈ V(G), and one of the following holds. i. h_e ≥ 1/θ^*. ii. h_e<1/θ^* ≤δ(G), and λ_⌈ 2/(1-θ^* h_e)⌉<ρ(δ(G),⌈ 1/θ^*⌉-1).
* f(v) is even for all v ∈ V(G), and one of the following holds. i. h_o ≥ 1/(1-θ). ii. h_o<1/(1-θ) ≤δ(G) and λ_⌈ 2/(1-(1-θ)h_o)⌉<ρ(δ(G),⌈ 1/(1-θ)⌉-1).
* d_G(v) ≡ f(v) ( 2) for all v ∈ V(G), and one of the following holds. i. h_o ≥ 1/θ. ii. h_o<1/θ≤δ(G) and λ_⌈ 2/(1-θ h_o)⌉<ρ(δ(G),⌈ 1/θ⌉-1).

For n ≥ 2x>0, Cho, Hyun, O and Park<cit.> proved that the complete bipartite graph K_x,n-x has an [a, b]-factor if and only if ρ(K_x,n-x) ≥√(a(n-a)) when n<a+b, and ρ(K_x,n-x) ≥√(ab)· n/(a+b) when n ≥ a+b. Among graphs of order n without an [a, b]-factor, they guessed that the graph K_a-1∇(K_1∪ K_n-a) attains the maximum spectral radius. Note that K_a-1∇(K_1∪ K_n-a) has n-a vertices of degree n-2, a-1 vertices of degree n-1, and one vertex of degree a-1; thus K_a-1∇(K_1∪ K_n-a) contains no [a, b]-factor. From this, they posed the following conjecture on a spectral radius condition for the existence of an [a,b]-factor in graphs. Let a· n be an even integer at least 2, where n≥ a+1, and let G be a graph of order n. If ρ(G)>ρ(K_a-1∇(K_1∪ K_n-a)), then G has an [a,b]-factor.

Li and Cai <cit.> showed that a graph G of order n ≥ 2a+b+(a^2-a)/b contains an [a, b]-factor if the maximum of the degrees of any two non-adjacent vertices of G is greater than an/(a+b). In light of this condition, Fan, Lin and Lu<cit.> confirmed Conjecture <ref> for n≥ 3a+b-1. Let a· n be an even integer, where n≥ 3a+b-1 and b≥ a≥ 1, and let G be a graph of order n. If ρ(G)>ρ(K_a-1∇(K_1∪ K_n-a)), then G has an [a,b]-factor. For nonnegative integers a and b, an [a,b]-factor is a (g,f)-factor with g≡ a and f≡ b. By using Theorem <ref> as a tool, Wei and Zhang<cit.> confirmed Conjecture <ref> completely.
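To close this subsection, the extremal graph in the conjecture is easy to experiment with. The following sketch (ours, for illustration only) computes ρ(K_a-1∇(K_1∪ K_n-a)) both from the full adjacency matrix and from the 3×3 quotient matrix of the equitable partition {V(K_1), V(K_a-1), V(K_n-a)}.

```python
import numpy as np

def adj(n, a):
    """Adjacency matrix of K_{a-1} nabla (K_1 u K_{n-a}): vertex 0 plays K_1,
    vertices 1..a-1 form the joined clique, vertices a..n-1 form K_{n-a}."""
    A = np.zeros((n, n))
    A[1:a, :] = A[:, 1:a] = 1      # K_{a-1} is joined to every other vertex
    A[a:, a:] = 1                  # the clique K_{n-a}
    np.fill_diagonal(A, 0)
    return A

n, a = 12, 3
Q = np.array([[0, a - 1, 0],                 # quotient matrix: each entry is the
              [1, a - 2, n - a],             # number of neighbors a vertex of one
              [0, a - 1, n - a - 1]], float) # part has in the other parts
print(max(np.linalg.eigvalsh(adj(n, a))))    # spectral radius from A
print(max(np.linalg.eigvals(Q).real))        # same value from the quotient
```

The vertex playing the role of K_1 has degree a-1<a, which is exactly what blocks every [a,b]-factor.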
§ EIGENVALUES AND COMPONENT FACTORS

In this section, we study component factors. For a family 𝒮 of graphs, an 𝒮-factor of a graph G is a spanning subgraph of G each of whose components is isomorphic to an element of 𝒮.

§.§ Path factors

A path factor of a graph G is a spanning subgraph of G in which each component is a path of order at least 2. Since every path of order at least four has a {P_2,P_3}-factor, a graph G has a path factor if and only if G has a {P_2,P_3}-factor. Akiyama, Jin and Era<cit.> presented the following characterization. A graph G has a {P_2,P_3}-factor if and only if i(G-S) ≤ 2|S| for all S ⊆ V(G).

Li and Miao<cit.> used Theorem <ref> to provide a spectral radius condition ensuring that a graph G contains a {P_2,P_3}-factor. Let G be a connected graph of order n.
* For n ≥ 4 and n ≠ 7, if ρ(G)>θ(n), then G has a {P_2,P_3}-factor, where θ(n) is the largest root of x^3-(n-5)x^2+(1-n)x+3(n-5)=0.
* For n=7, if ρ(G)>(1+√(41))/2, then G has a {P_2,P_3}-factor.
Note that ρ(K_1∇(K_n-4∪ 3K_1))=θ(n) and ρ(K_2∇ 5K_1)=(1+√(41))/2. The conditions in Theorem <ref> are sharp, since neither K_1∇(K_n-4∪ 3K_1) nor K_2∇ 5K_1 contains a {P_2,P_3}-factor. Subsequently, Zhang<cit.> improved the result of <cit.> by incorporating a minimum degree condition. Suppose that n ≥ 2 and t ≥ 1 are two integers. Let G be a connected graph of order n with minimum degree δ(G) ≥ t and without a {P_2,P_3}-factor. Then n ≥ 3t+1 and ρ(G) ≤max{ρ(K_t∇(K_n-1-3t∪\overline{K_2t+1})), ρ(K_⌊ (n-1)/3⌋∇(K_n-1-3⌊ (n-1)/3⌋∪\overline{K_2⌊ (n-1)/3⌋+1}))}, with equality if and only if G≅ K_t∇(K_n-1-3t∪\overline{K_2t+1}) or G≅ K_⌊ (n-1)/3⌋∇(K_n-1-3⌊ (n-1)/3⌋∪\overline{K_2⌊ (n-1)/3⌋+1}).

A graph G is said to be factor-critical if G-v has a perfect matching for every vertex v ∈ V(G). Let G be a factor-critical graph of order n≥ 3 with V(G)={v_1, v_2, …, v_n}. Add n new vertices {w_1, w_2, …, w_n} to G together with the edges v_iw_i for 1 ≤ i ≤ n. The resulting graph on 2n vertices is called a sun; following convention, K_1 and K_2 are also regarded as suns. Let sun(G) denote the number of components of G which are suns. A graph G has a {P_3,P_4,P_5}-factor if and only if sun(G-S) ≤ 2|S| for any S ⊆ V(G).

By employing Theorem <ref> as a valuable tool, Zhang<cit.> determined the maximum spectral radius among all connected graphs without a {P_3,P_4,P_5}-factor. Suppose that n ≥ 3 and t ≥ 1 are two integers. Let G be a connected graph of order n with minimum degree δ(G) ≥ t and without a {P_3,P_4,P_5}-factor. Then n≥ 3t+1 and the following conclusions hold.
* If n ≡ 0 ( 3), then ρ(G) ≤max{ρ(K_t∇(K_n-1-3t∪\overline{K_2t+1})), ρ(K_(n-3)/3∇(K_2∪ K_2∪\overline{K_(2n-9)/3}))}, with equality if and only if G≅ K_t∇(K_n-1-3t∪\overline{K_2t+1}) or G≅ K_(n-3)/3∇(K_2∪ K_2∪\overline{K_(2n-9)/3}).
* If n ≡ 2 ( 3), then ρ(G) ≤max{ρ(K_t∇(K_n-1-3t∪\overline{K_2t+1})), ρ(K_(n-2)/3∇(K_2∪\overline{K_2(n-2)/3}))}, with equality if and only if G≅ K_t∇(K_n-1-3t∪\overline{K_2t+1}) or G≅ K_(n-2)/3∇(K_2∪\overline{K_2(n-2)/3}).
* If n ≡ 1 ( 3), then ρ(G) ≤max{ρ(K_t∇(K_n-1-3t∪\overline{K_2t+1})), ρ(K_(n-1)/3∇\overline{K_(2n+1)/3})}, with equality if and only if G≅ K_t∇(K_n-1-3t∪\overline{K_2t+1}) or G≅ K_(n-1)/3∇\overline{K_(2n+1)/3}.

§.§ Star factors

Let G be a graph and let k ≥ 2 be an integer. A {K_1,j: 1 ≤ j ≤ k}-factor of a graph G is a spanning subgraph of G such that each component is isomorphic to a member of {K_1,j: 1 ≤ j ≤ k}. Miao and Li<cit.> established a lower bound on the spectral radius that ensures the existence of a {K_1,j: 1 ≤ j ≤ k}-factor in a graph. Furthermore, they constructed extremal graphs demonstrating that all of the bounds below are best possible. Let k ≥ 2 and n ≥ k+2 be two integers, and let G be a connected graph of order n.
Assume that θ(n, k) is the largest root of x^3-(n-k-3)x^2-(n-1)x-(k+3-n)(k+1)=0.
* For k=2, if ρ(G)>θ(n, 2) with n ≥ 4 and n ≠ 7, then G has a {K_1,1, K_1,2}-factor; if n=7 and ρ(G)>(1+√(41))/2, then G has a {K_1,1, K_1,2}-factor.
* For k=3, if ρ(G)>θ(n, 3) with n ≥ 5 and n ≠ 9, then G has a {K_1,1, K_1,2, K_1,3}-factor; if n=9 and ρ(G)>(1+√(57))/2, then G has a {K_1,1, K_1,2, K_1,3}-factor.
* For k ≥ 4, if ρ(G)>θ(n, k), then G has a {K_1,j: 1 ≤ j ≤ k}-factor.

§.§ Cycle factors

For k≥ 3, Luo, Liu and Ao<cit.> proved a tight spectral condition guaranteeing the existence of a {K_2, C_k}-factor in a graph with given minimum degree. Let k ≥ 3 be an integer, and let G be a graph of order n ≥ 5δ+6 with minimum degree δ≥ 1. If ρ(G) ≥ρ(K_δ∇(K_n-2δ-1∪(δ+1)K_1)), then G has a {K_2, C_k}-factor unless G ≅ K_δ∇(K_n-2δ-1∪(δ+1)K_1).

§.§ Connected factors

A graph may have a (g,f)-factor without having a connected (g,f)-factor. The problem of determining whether a graph G has a connected (g,f)-factor is NP-complete<cit.>; as is well known, it remains so in the case g≡ f≡ 2. A cycle C of a graph G is a Hamiltonian cycle if it passes once and only once through every vertex of G; likewise, a path is a Hamiltonian path if it passes through every vertex exactly once. A graph is called Hamiltonian if it contains a Hamiltonian cycle. Obviously, a Hamiltonian cycle is a connected 2-factor and a Hamiltonian path is a connected [1,2]-factor. Hence, the factor theory of graphs can be viewed as an extension of the Hamiltonian cycle problem. The following well-known theorem of Ore gives an edge condition for the existence of a Hamiltonian cycle. Let G be a graph of order n. If e(G)>\binom{n-1}{2}+1, then G has a Hamiltonian cycle.

In 2010, Fiedler and Nikiforov<cit.> gave spectral radius conditions for the existence of a Hamiltonian cycle or a Hamiltonian path in a graph. Let G be a graph of order n. If ρ(G)≥ n-2, then G has a Hamiltonian path unless G≅ K_n-1∪ K_1. If strict inequality holds, then G has a Hamiltonian cycle unless G≅ K_n-1+e, where K_n-1+e denotes the complete graph on n-1 vertices with a pendent edge.

Note that δ(G)≥ 2 is a trivial necessary condition for a graph to be Hamiltonian. Let G be a connected graph of order n≥ 14 with minimum degree δ(G)≥ 2. If ρ(G)≥ρ(K_2∇(K_n-4∪ 2K_1)), then G has a Hamiltonian cycle unless G≅ K_2∇(K_n-4∪ 2K_1). For n≤ 13, K_2∇(K_n-4∪ 2K_1) is not necessarily the extremal graph corresponding to the lower bound. For example, when n=7, ρ(K_3∇ 4K_1)>ρ(K_2∇(K_3∪ 2K_1)) and K_3∇ 4K_1 contains no Hamiltonian cycle, and when n=9, ρ(K_4∇ 5K_1)>ρ(K_2∇(K_5∪ 2K_1)) and K_4∇ 5K_1 contains no Hamiltonian cycle. Moreover, Ning and Ge<cit.> conjectured that the condition n≥ 14 in Theorem <ref> could be improved to n≥ 10, which was later verified by Chen, Hou and Qian<cit.>. In the same paper, they also considered a spectral radius condition for the existence of a Hamiltonian path in graphs. Let G be a graph of order n ≥ 4 with minimum degree δ(G) ≥ 1. If ρ(G)>n-3, then G has a Hamiltonian path unless G∈{K_1∇(K_n-3∪ 2K_1), K_2∇ 4K_1, K_1∇(K_1,3∪ K_1)}. Soon after, Benediktovich<cit.> improved the results in <cit.>. Let G be a graph of order n≥ 9 with minimum degree δ(G) ≥ 2. If ρ(G)>n-3, then G has a Hamiltonian cycle unless G∈{K_4∇ 5K_1, K_3∇(K_1,4∪ K_1), K_1∇(K_n-3∪ K_2), K_2∇(K_n-4∪ 2K_1)}.
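The exceptional graphs appearing in these Hamiltonicity results are easy to probe numerically. The sketch below (our own illustration) checks that ρ(K_n-1∪ K_1)=n-2 exactly, while the non-Hamiltonian exceptions K_2∇(K_n-4∪ 2K_1) and K_1∇(K_n-3∪ K_2) have spectral radius strictly above n-3 (both contain K_n-2 as a subgraph).

```python
import numpy as np

def K(n):                                  # adjacency matrix of K_n
    return 1 - np.eye(n)

def union(A1, A2):                         # disjoint union G1 u G2
    n1, n2 = len(A1), len(A2)
    A = np.zeros((n1 + n2, n1 + n2))
    A[:n1, :n1], A[n1:, n1:] = A1, A2
    return A

def join(A1, A2):                          # join G1 nabla G2
    n1, n2 = len(A1), len(A2)
    A = np.ones((n1 + n2, n1 + n2))
    A[:n1, :n1], A[n1:, n1:] = A1, A2
    np.fill_diagonal(A, 0)
    return A

rho = lambda A: max(np.linalg.eigvalsh(A))
n = 12
print(rho(union(K(n - 1), K(1))))                          # exactly n - 2 = 10
print(rho(join(K(2), union(K(n - 4), np.zeros((2, 2))))))  # K_2 nabla (K_{n-4} u 2K_1)
print(rho(join(K(1), union(K(n - 3), K(2)))))              # K_1 nabla (K_{n-3} u K_2)
```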
The celebrated Dirac theorem<cit.> states that every graph of order n≥ 3 with minimum degree at least n/2 has a Hamiltonian cycle. Taking the minimum degree as a parameter, Erdős<cit.> gave a sufficient condition for a graph to contain a Hamiltonian cycle which generalizes Ore's theorem<cit.>. Let G be a graph of order n≥ 14 with minimum degree δ(G)≥ k, where 1≤ k≤ (n-1)/2. If e(G)>max{\binom{n-k}{2}+k^2, \binom{⌈ (n+1)/2⌉}{2}+⌊ (n-1)/2⌋^2}, then G has a Hamiltonian cycle.

Li and Ning<cit.> presented spectral analogues of Erdős' theorem. Let k be an integer, and let G be a graph of order n with minimum degree δ(G) ≥ k.
* If k ≥ 1, n≥max{6k+5, (k^2+6k+4)/2} and ρ(G)≥ρ(K_k∇(K_n-2k∪ kK_1)), then G has a Hamiltonian cycle unless G≅ K_k∇(K_n-2k∪ kK_1).
* If k ≥ 0, n≥max{6k+10, (k^2+7k+8)/2} and ρ(G)≥ρ(K_k∇(K_n-2k-1∪ (k+1)K_1)), then G has a Hamiltonian path unless G≅ K_k∇(K_n-2k-1∪ (k+1)K_1).

Nikiforov<cit.> proposed the following theorem, which generalizes Theorems <ref> and <ref>, and strengthens Theorem <ref> for sufficiently large n. Let k≥ 1 be an integer, and let G be a connected graph of order n with minimum degree δ(G)≥ k.
* If n≥ k^3+k+4 and ρ(G)≥ n-k-1, then G has a Hamiltonian cycle unless G≅ K_1∇(K_n-k-1∪ K_k) or G≅ K_k∇(K_n-2k∪ kK_1).
* If n ≥ k^3+k^2+2k+5 and ρ(G)≥ n-k-2, then G has a Hamiltonian path unless G≅ K_k∇(K_n-2k-1∪ (k+1)K_1) or G≅ K_n-k-1∪ K_k+1.

A graph G is t-tough if |S|≥ tc(G-S) for every subset S⊆ V(G) with c(G-S)>1, where c(G) is the number of components of a graph G. Being 1-tough is an obvious necessary condition for a graph to be Hamiltonian<cit.>. Conversely, Chvátal<cit.> proposed the following conjecture. There exists a finite constant t_0 such that every t_0-tough graph is Hamiltonian. Around this conjecture, Enomoto, Jackson, Katerinis and Saito <cit.> showed that every 2-tough graph contains a 2-factor, while for every ε>0 there exist (2-ε)-tough graphs without a 2-factor, and hence without a Hamiltonian cycle. They conjectured that the value of t_0 might be 2. In 2000, this was disproved by Bauer, Broersma and Veldman <cit.>, who observed that if such a t_0 exists, then it must be at least 9/4. In general, the conjecture still remains open. By incorporating toughness and spectral conditions, Fan, Lin and Lu<cit.> considered Chvátal's conjecture from another perspective and determined a spectral condition guaranteeing the existence of a Hamiltonian cycle among 1-tough graphs. Let K_n-4^+3 be the graph obtained from 3K_1∪ K_n-4 by adding three independent edges between 3K_1 and K_n-4, and let M_n=K_1∇ K_n-4^+3. Suppose that G is a connected 1-tough graph of order n≥ 18 with minimum degree δ(G)≥ 2. If ρ(G)≥ρ(M_n), then G has a Hamiltonian cycle, unless G≅ M_n.

Moon and Moser<cit.> pointed out that δ(G)>n/2 is a sufficient condition for a balanced bipartite graph on 2n vertices to be Hamiltonian. Similar to the work of Erdős in Theorem <ref>, they considered the corresponding result for balanced bipartite graphs with minimum degree at most n/2. Let G be a balanced bipartite graph of order 2n. If the minimum degree δ(G) ≥ k for 1 ≤ k ≤ n/2 and e(G)>n(n-k)+k^2, then G has a Hamiltonian cycle. Let B_n,k be the bipartite graph obtained from the complete bipartite graph K_n,n by deleting all edges of a subgraph K_k,n-k.
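For concreteness, the edge count used in the sharpness claim that follows is immediate:

e(B_n,k) = e(K_n,n) - e(K_k,n-k) = n^2 - k(n-k) = n(n-k)+k^2.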
One can verify that the condition in Theorem <ref> is best possible, since e(B_n,k)=n(n-k)+k^2 and B_n,k contains no Hamiltonian cycle. Liu, Shiu and Xue<cit.> gave a sufficient condition on the spectral radius for a bipartite graph to be Hamiltonian. Let K_n,n-2+4e be the bipartite graph obtained from K_n,n-2 by adding two vertices that are adjacent to two common vertices of degree n-2 in K_n,n-2. Let G be a balanced bipartite graph of order 2n ≥ 8 with minimum degree δ(G) ≥ 2. If ρ(G) ≥√(n(n-2)+4), then G has a Hamiltonian cycle unless G≅ K_n,n-2+4e. Li and Ning<cit.> focused on a spectral analogue of the result in Theorem <ref>. Suppose that k ≥ 1 and n ≥ (k+1)^2. Let G be a balanced bipartite graph of order 2n with minimum degree δ(G) ≥ k. If ρ(G) ≥ρ(B_n,k), then G has a Hamiltonian cycle unless G≅ B_n,k. Since K_n,n-k is a proper subgraph of B_n,k, we have ρ(B_n,k)>ρ(K_n,n-k)=√(n(n-k)). Using this observation, Ge and Ning<cit.> improved the bound in Theorem <ref>. Suppose that k ≥ 1 and n ≥ k^3+2k+4. Let G be a balanced bipartite graph of order 2n with minimum degree δ(G) ≥ k. If ρ(G) ≥√(n(n-k)), then G has a Hamiltonian cycle unless G≅ B_n,k. Notably, Jiang, Yu and Fang<cit.> independently showed that the result stated in Theorem <ref> remains valid under the relaxed condition n ≥max{k^3/2+k+3, (k+1)^2}.

A basic result in graph theory asserts that every connected graph has a spanning tree. On the other hand, a spanning tree of a graph on n vertices is a connected [1,n-1]-factor. Many researchers have investigated the existence of spanning trees under given conditions. A k-tree is a spanning tree in which every vertex has degree at most k, where k≥ 2 is an integer. It is not difficult to see that a k-tree is a connected [1,k]-factor. In 1989, Win <cit.> proved the following result, which gives a sufficient condition for the existence of a k-tree in a connected graph; see <cit.> for a short proof. Let c(G) be the number of components of a graph G. Let G be a connected graph. If k≥ 2 and c(G-S)≤ (k-2)|S|+2 for all S⊆ V(G), then G has a k-tree.

Since a 2-tree is just a Hamiltonian path, the results in <cit.> provide the spectral conditions when k=2. For k ≥ 3, Wong<cit.> employed the result of Win and established the following theorem on the existence of k-trees in regular graphs. Let k ≥ 3 and let G be a connected r-regular graph. If λ_4(G)<r-r/((k-2)(r+1)), then G has a k-tree. Cioabă and Gu<cit.> generalized Theorem <ref> by taking the edge connectivity into account. Let k ≥ 3 and let G be a connected r-regular graph with edge connectivity κ^'. Let l=r-(k-2)κ^'. Each of the following statements holds.
* If l ≤ 0, then G has a k-tree.
* If l>0 and λ_⌈ 3r/l⌉(G)<r-r/((k-2)(r+1)), then G has a k-tree.
Fan, Goryainov, Huang and Lin<cit.> gave a spectral radius condition for the existence of a k-tree in a connected graph of order n. Let k≥ 3, and let G be a connected graph of order n≥ 2k+16. If ρ(G)≥ρ(K_1∇ (K_n-k-1∪ kK_1)), then G has a k-tree unless G≅ K_1∇ (K_n-k-1∪ kK_1).

For any integer k≥ 2, a spanning k-ended-tree of a connected graph G is a spanning tree with at most k leaves. For an integer l ≥ 0, the l-closure of a graph G is the graph obtained from G by successively joining pairs of nonadjacent vertices whose degree sum is at least l until no such pair exists. Let C_l(G) denote the l-closure of G. The following closure theorem on the existence of a spanning k-ended-tree was established by Broersma and Tuinstra. Let G be a connected graph of order n, and let k be an integer with 2 ≤ k ≤ n-1.
Then G has a spanning k-ended-tree if and only if the (n-1)-closure C_n-1(G) of G has a spanning k-ended-tree.

Ao, Liu and Yuan<cit.> gave the following result. Let G be a connected graph of order n and let k ≥ 2 be an integer. If n ≥max{6k+5, k^2+(3/2)k+2} and ρ(G) ≥ρ(K_1∇(K_n-k-1∪ kK_1)), then G has a spanning k-ended-tree unless G ≅ K_1∇(K_n-k-1∪ kK_1).

Let T be a spanning tree of a connected graph G. The leaf degree of a vertex v ∈ V(T) is the number of leaves adjacent to v in T. The leaf degree of T is the maximum leaf degree among all vertices of T. Kaneko<cit.> characterized the trees with a fixed leaf degree using the number of isolated vertices. Let k ≥ 1 be an integer, and let G be a connected graph. Then G has a spanning tree with leaf degree at most k if and only if i(G-S)<(k+1)|S| for any nonempty subset S ⊆ V(G). Motivated by Kaneko's theorem, Ao, Liu and Yuan<cit.> presented a tight spectral condition for the existence of a spanning tree with leaf degree at most k in a connected graph. Let k ≥ 1 be an integer, and let G be a connected graph of order n ≥ 2k+12. If ρ(G) ≥ρ(K_1∇(K_n-k-2∪ (k+1)K_1)), then G has a spanning tree with leaf degree at most k unless G ≅ K_1∇(K_n-k-2∪ (k+1)K_1).

§ EIGENVALUES AND FACTOR PACKING PROBLEM

As an extension of the factor existence problem, many researchers have explored the maximum number of edge-disjoint 1-factors<cit.>, Hamiltonian cycles<cit.> and spanning trees<cit.> in a graph by using graph parameters. However, there are relatively few results that investigate this problem from the perspective of eigenvalues.

§.§ Perfect matchings

One problem concerning perfect matchings which has attracted considerable interest is that of determining the structure of graphs with a unique perfect matching<cit.>. In 1985, Godsil<cit.> showed that the path attains the minimum smallest positive eigenvalue among all trees with a unique perfect matching, and posed the open problem of characterizing all bipartite graphs with a unique perfect matching whose adjacency matrices have inverses diagonally similar to non-negative matrices. This problem was settled by Yang and Ye<cit.>. For a unicyclic graph G with a unique perfect matching, A(G)^-1 is an integer matrix<cit.>. Let A(G)^-1=[b_ij], and let G_+ be the graph associated to the matrix A(G)^-1 such that V(G_+)=V(G) and the vertices i and j are adjacent in G_+ if and only if b_ij≠ 0. This implies that τ(G)=1/ρ(G_+), where τ(G) denotes the smallest positive eigenvalue of A(G). Using the concept of inverse graphs as a tool, Barik and Behera<cit.> characterized the unique extremal graph that attains the minimum τ value among all connected bipartite unicyclic graphs on n=2m vertices. Let 𝒰_2m be the class of connected bipartite unicyclic graphs on n=2m vertices. Consider a path P_2m=v_1v_2⋯ v_mv_m+1⋯ v_2m. If m is even, let U_e be the graph obtained from P_2m by adding an edge between the vertices v_m-2 and v_m+3. If m is odd, let U_o be the graph obtained from the path P_2m by adding an edge between the vertices v_m-3 and v_m+2. It is worth mentioning that both U_e and U_o are very close to P_2m. Let m ≥ 4 and U ∈𝒰_2m. Then the following results hold.
* If m is even, then τ(U_e) ≤τ(U).
* If m is odd, then τ(U_o) ≤τ(U).
Lovász<cit.> proved that a graph of order n with a unique perfect matching cannot have more than n^2/4 edges. An edge is said to be a cut edge if its removal increases the number of components of a graph.
Kotzig<cit.> proved that a connected graph G with a unique perfect matching contains a cut edge uv that belongs to the perfect matching of G. Based on these two structural properties, Fan, Lin and Lu<cit.> determined the graph attaining the maximum spectral radius among all graphs of order 2n with a unique perfect matching. Suppose that G_1 is an empty graph with vertex set U={u_1,u_2,…, u_n}, and G_2 is a complete graph with vertex set W={w_1,w_2,…, w_n}. Let G(2n,1) be the graph of order 2n obtained from G_1∪ G_2 by letting N_G_2(u_i)={w_1,w_2,…,w_i} for 1≤ i≤ n. Clearly, G(2n,1) contains a unique perfect matching. If G is a connected graph of order 2n with a unique perfect matching, then ρ(G)≤ρ(G(2n,1)), with equality if and only if G≅ G(2n,1).

Observe that the Laplacian matrix of a disjoint union of n/2 edges, viewed as a graph of order n, has eigenvalues 0 and 2. This implies that deleting the edges of a perfect matching of a graph G reduces the eigenvalues of the Laplacian matrix of G by at most 2. Brouwer and Haemers<cit.> proved that a regular graph of even order with algebraic connectivity μ_n-1 has at least ⌊ (μ_n-1+1)/2⌋ disjoint perfect matchings. Soon after, Cioabă, Gregory and Haemers<cit.> considered the number of edge-disjoint perfect matchings in a regular graph G from the viewpoint of the second largest eigenvalue. An r-regular graph G of even order has at least ⌊ (r-λ_2(G)+1)/2⌋ edge-disjoint perfect matchings.

§.§ Spanning trees

In 1889, Cayley<cit.> showed that the number of spanning trees in the complete graph K_n is n^n-2. Edge-disjoint spanning trees have many applications in fault-tolerant networks and network reliability <cit.>. Thus it is quite interesting to explore how many edge-disjoint spanning trees a given graph contains. The spanning tree packing number (or simply STP number) of a graph G, denoted by τ(G), is the maximum number of edge-disjoint spanning trees contained in G. Nash-Williams <cit.> and Tutte <cit.> independently discovered a fundamental theorem characterizing the graphs G with τ(G)≥ k. For any partition π of V(G), let E_G(π) denote the set of edges in G whose endpoints lie in different parts of π, and let e_G(π)=|E_G(π)|. Let G be a connected graph. Then τ(G)≥ k if and only if for any partition π of V(G), e_G(π)≥ k(t-1), where t is the number of parts in the partition π.

The well-known Matrix-Tree Theorem of Kirchhoff <cit.> indicates that the number of spanning trees (not necessarily edge-disjoint) of a graph G with n labelled vertices is ∏_i=1^n-1μ_i/n. For edge-disjoint spanning trees, Seymour proposed the following problem in private communication with Cioabă, as mentioned in <cit.>. Let G be a connected graph. Determine the relationship between the spanning tree packing number τ(G) and the eigenvalues of G. Inspired by Kirchhoff's Matrix-Tree Theorem and Problem <ref>, Cioabă and Wong <cit.> began to study the spanning tree packing number via the second largest eigenvalue of the adjacency matrix. Let k and r be two integers with r≥ 2k≥ 4, and let G be an r-regular connected graph. If λ_2(G)<r-2(2k-1)/(r+1), then τ(G)≥ k. Cioabă and Wong <cit.> further conjectured that the sufficient condition can be improved to λ_2(G)<r-(2k-1)/(r+1). In the same paper, they verified this conjecture for k=2,3 and gave examples showing that the bound is best possible.
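The Nash-Williams-Tutte characterization can be turned into a brute-force computation of τ(G) for very small graphs, which is convenient for testing bounds such as the ones below. The following sketch (ours; exponential in n, so only for toy examples) enumerates all vertex partitions.

```python
import numpy as np
from itertools import combinations

def partitions(elements):
    """Yield all set partitions of a list (Bell-number many; fine for n <= 8)."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield smaller + [[first]]

def stp_number(A):
    """tau(G) = min over partitions pi with t >= 2 parts of
    floor(e_G(pi) / (t - 1)), by the Nash-Williams-Tutte theorem."""
    best = None
    for pi in partitions(list(range(len(A)))):
        t = len(pi)
        if t < 2:
            continue
        crossing = sum(int(A[u, v]) for P, Q in combinations(pi, 2)
                       for u in P for v in Q)
        val = crossing // (t - 1)
        best = val if best is None else min(best, val)
    return best

print(stp_number(1 - np.eye(4, dtype=int)))   # K_4: tau = 2
C5 = np.zeros((5, 5), dtype=int)
for i in range(5):
    C5[i, (i + 1) % 5] = C5[(i + 1) % 5, i] = 1
print(stp_number(C5))                         # C_5: tau = 1
```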
Later, Gu, Lai, Li and Yao<cit.>, Li and Shi<cit.>, and Liu, Hong and Lai<cit.> independently extended the conjecture to graphs that are not necessarily regular. Let k≥ 2 be an integer, and let G be a graph with minimum degree δ≥ 2k. If λ_2(G)<δ-(2k-1)/(δ+1), then τ(G)≥ k.

Gu, Lai, Li and Yao<cit.> confirmed Conjecture <ref> for k=2,3 and obtained the following partial result for k≥ 4. Let k≥ 4 be an integer, and let G be a graph with minimum degree δ≥ 2k. If λ_2(G)<δ-(3k-1)/(δ+1), then τ(G)≥ k. In <cit.>, Li and Shi obtained the following result, which gives an approximate form of Conjecture <ref> for graphs with large order n and small minimum degree δ. Let k ≥ 2 be an integer, and let G be a graph of order n with minimum degree δ≥ 2k. If λ_2(G)<{[ δ-n(k-1)/((n-δ-1)(δ+1)); δ-(8k-7)/(3(δ+1)); δ-(2k-1)/(δ+1)+2(k-1)/(n-2(δ+1)) ]. (under the respective hypotheses given in <cit.>), then τ(G)≥ k. Liu, Hong and Lai<cit.> also investigated Conjecture <ref> and proved that it holds for sufficiently large n. Let G be a graph of order n ≥ (2k-1)(δ+1) with minimum degree δ≥ 2k ≥ 4. If λ_2(G) ≤δ-(2k-2/k)/(δ+1) or λ_2(G) ≤δ-(2k-1)/(δ+1), then τ(G) ≥ k.

Conjecture <ref> was completely settled in 2014 by Liu et al. <cit.>, who proved a stronger result, which also implied the truth of the conjecture of Cioabă and Wong <cit.>. Moreover, the bound in Conjecture <ref> has been shown to be essentially best possible in <cit.> by constructing extremal graphs.

For given integers δ and g with δ>0 and g ≥ 3, let t=⌊ (g-1)/2⌋, and define the Moore function as follows. f(δ, g)= 2t+1 if δ=2 and g=2t+1; 1+δ+∑_i=2^t(δ-1)^i if δ≥ 3 and g=2t+1; 2t+2 if δ=2 and g=2t+2; 2+2(δ-1)^t+∑_i=1^t-1(δ-1)^i if δ≥ 3 and g=2t+2. Utilizing the Moore function, Liu, Lai and Tian<cit.> investigated the spanning tree packing number of graphs with given girth; if a graph G has a cycle, the girth of G is the length of a shortest cycle in G. Let g and k be integers with g ≥ 3 and k ≥ 2, and let G be a graph of order n with minimum degree δ≥ 2k ≥ 4 and girth g. If λ_2(G)<δ-(2k-1)/f(δ, g), then τ(G) ≥ k.

Let 𝒢_t be the set of graphs such that for each graph G ∈𝒢_t there exist at least t+1 non-empty disjoint proper subsets V_1, V_2, …, V_t+1⊆ V(G) satisfying V(G) \ (V_1 ∪ V_2 ∪⋯∪ V_t+1) ≠∅ and edge connectivity κ^'(G)=e_G(V_i, V(G) \ V_i) for i=1,2, …, t+1. Duan, Wang and Liu<cit.> gave the following spectral condition guaranteeing τ(G) ≥ k, in terms of the third largest eigenvalue, for graphs in 𝒢_1. Let k ≥ 2 be an integer, and let G ∈𝒢_1 be a graph with minimum degree δ≥ 2k and maximum degree Δ. If λ_3(G)<2δ-Δ-2(3k-1)/(δ+1), then τ(G) ≥ k. Later, Hu, Wang and Duan<cit.> improved the upper bound in Theorem <ref> to δ-8(2k-1)/(3(δ+1)), and determined the relationship between λ_4(G) and the spanning tree packing number of a graph G ∈𝒢_2. Let k ≥ 2 be an integer, and let G ∈𝒢_2 be a graph with minimum degree δ≥ 3k. If λ_4(G)<δ-(9k-3)/(δ+1), then τ(G) ≥ k.

Motivated by the above results, Fan, Gu and Lin<cit.> studied the spanning tree packing number by means of the spectral radius of graphs. They first established an edge extremal result for τ(G)≥ k. Let G be a connected graph with minimum degree δ≥ 2k and order n≥ 2δ+2. If e(G)≥\binom{δ+1}{2}+\binom{n-δ-1}{2}+k, then τ(G)≥ k. The condition in Theorem <ref> is tight. Denote by K_n the complete graph on n vertices, and by 𝒢_n,n_1^i the set of graphs obtained from K_n_1∪ K_n-n_1 by adding i edges between K_n_1 and K_n-n_1. Notice that any graph G in 𝒢_n,δ+1^k-1 has exactly \binom{δ+1}{2}+\binom{n-δ-1}{2}+k-1 edges but τ(G)<k. In the same paper, they then focused on a spectral analogue.
Let B_n,δ+1^i be the graph obtained from K_δ+1∪ K_n-δ-1 by adding i edges joining a vertex in K_δ+1 and i vertices in K_n-δ-1. They discovered a sufficient condition for τ(G)≥ k via the spectral radius, and characterized the unique spectral extremal graph B_n,δ+1^k-1 within the structural extremal family 𝒢_n,δ+1^k-1. Let k≥ 2 be an integer, and let G be a connected graph with minimum degree δ≥ 2k and order n≥ 2δ+3. If ρ(G)≥ρ(B_n,δ+1^k-1), then τ(G)≥ k unless G≅ B_n,δ+1^k-1.

Let η(G)=min{|X|/(c(G-X)-c(G))}, where the minimum is taken over all edge subsets X⊆ E(G) with c(G-X)>c(G). The Nash-Williams-Tutte Theorem indicates that for a connected graph G, τ(G) ≥ k if and only if η(G) ≥ k. Since η(G) is possibly fractional, we have τ(G)=⌊η(G)⌋. Thus, we call η(G) the fractional spanning tree packing number of G. Hong, Gu, Lai and Liu<cit.> investigated the relationship between η(G) and the eigenvalues of G. Let s, t be two positive integers, and let G be a graph with minimum degree δ≥ 2s/t. If λ_2(G)<δ-(2s-1)/(t(δ+1)), then η(G) ≥ s/t. When s/t is a positive integer, this result also confirms Conjecture <ref>.

§ SOME POSSIBLE PROBLEMS

This section discusses and lists some related problems, with a particular focus on the spectral radius of graphs. The existence of rainbow matchings in graphs has been extensively investigated over the past several decades. Joos and Kim <cit.> posed the following question: [Joos and Kim <cit.>] Let H be a graph with t edges, 𝐆 be a family of graphs and 𝒢={G_1,…,G_t} be a collection of not necessarily distinct graphs on the same vertex set V such that G_i∈𝐆 for all 1≤ i≤ t. Which properties imposed on 𝐆 yield a 𝒢-rainbow graph isomorphic to H? In the same paper, they answered Problem <ref> with a rainbow version of Dirac's theorem. Let 𝒢={G_1,…,G_n/2} be a collection of graphs on the same vertex set [n] such that δ(G_i)≥ n/2 for 1≤ i≤ n/2. Then 𝒢 admits a rainbow matching.

In particular, we would like to propose the following problems from the perspective of the size and of the spectral radius, respectively. Let n, r be two integers such that 1≤ r< n/2. Let 𝒢={G_1,…,G_n/2} be a collection of graphs on the same vertex set [n] such that δ(G_i)≥ r and e(G_i)>e(K_r-1∇ (rK_1∪ K_n-2r+1)) for 1≤ i≤ n/2; then 𝒢 admits a rainbow matching. Let n, r be two integers such that 1≤ r< n/2. Let 𝒢={G_1,…,G_n/2} be a collection of graphs on the same vertex set [n] such that δ(G_i)≥ r and ρ(G_i)>ρ(K_r-1∇ (rK_1∪ K_n-2r+1)) for 1≤ i≤ n/2; then 𝒢 admits a rainbow matching.

For the existence of rainbow matchings in bipartite graphs, Aharoni and Howard <cit.> posed the following conjecture. Let 𝒢={G_1,…,G_t} be t bipartite graphs such that Δ(G_i)≤Δ and e(G_i)>(t-1)Δ. Then 𝒢 admits a rainbow matching. By incorporating rainbow matchings and spectral radius conditions in bipartite graphs, we can consider Conjecture <ref> from another perspective. Let 𝒢={G_1,…,G_t} be t bipartite graphs such that Δ(G_i)≤Δ. Find a spectral radius condition for such a collection 𝒢 to admit a rainbow matching.

An edge-colored graph is called rainbow if the colors on its edges are distinct. For graphs G and H, the anti-Ramsey number is the maximum number of colors in an edge-colored G with no rainbow copy of H. Note that the rainbow number is closely related to anti-Ramsey numbers. We can then consider the following problem.
Find a spectral condition for rainbow matchings in edge-colored graphs (a spectral version of anti-Ramsey numbers). Concerning connected factors in a graph, some researchers have focused on finding spectral conditions for a graph to contain a Hamiltonian cycle or a Hamiltonian path. In <cit.>, O raised the question of identifying eigenvalue conditions that guarantee the existence of a connected (even or odd) [a,b]-factor in an h-edge-connected r-regular graph. It is therefore natural to ask the corresponding question in a more general setting. Characterize eigenvalue conditions for the existence of a connected [a,b]-factor in graphs.

For two positive integers n and k, let m(2n,k) be the maximum number of edges in a graph of order 2n with a unique k-factor. In 1984, Hendry<cit.> proved that m(2n,2)=⌊ n(2n+1)/2⌋ and conjectured that m(2n,k)=nk+\binom{2n-k}{2} for k>n with kn even, and m(2n,k)=n^2+(k-1)n/2 for n=ks, where s is a positive integer. In 2000, Johann<cit.> confirmed this conjecture and obtained the corresponding extremal graphs: K_2n-k∇ H, where H is a 2(k-n)-regular graph, for k>n, and G(2n,k) for n=ks, respectively. The construction of G(2n,k) is as follows. Let n=sk+t with s≥ 1 and 0≤ t≤ k-1. First define a graph F_1 on 2(k+t) vertices. Let H_1≅ K_t∇ tK_1, and let A_11=V(tK_1) and A_12=V(K_t). Denote by H_2 the graph obtained from kK_1∪ K_k by adding edges between V(kK_1) and V(K_k) such that d_H_2(v)=k-t for v∈ V(kK_1) and d_H_2(u)=2k-t-1 for u∈ V(K_k). Suppose that A_21=V(kK_1) and A_22=V(K_k). Let F_1 be the graph of order 2(k+t) obtained from H_1∪ H_2 by connecting all vertices of A_1j with A_2(3-j) for 1≤ j≤ 2 and adding all edges between A_12 and A_22. The resulting graph F_1 has exactly one k-factor. Suppose that U_1=A_11∪ A_21 and W_1=A_12∪ A_22. Next take s-1 copies of K_k∇ kK_1 labeled F_2,…, F_s. For 2≤ i≤ s, let U_i and W_i be the vertex sets of V(kK_1) and V(K_k) in each F_i, respectively. Then the graph G(2n,k) is obtained by adding edges connecting all vertices of W_i in F_i to all vertices in F_j for each i,j with 1≤ i< j≤ s. The resulting graph G(2n,k) has a unique k-factor.

From the perspective of the spectral radius, we have the following problem. What is the maximum spectral radius, and what is the corresponding extremal graph, among all graphs with a unique k-factor for k≥ 1? In this paper, Theorem <ref> answers Problem <ref> for k=1: the extremal graph is G(2n,1). However, the structure of graphs with a unique k-factor is more complicated for k≥ 2, and it seems difficult to determine the extremal graphs. For k≥ 2, suppose that G is a graph of order 2n with a unique k-factor. (I) Does ρ(G)≤ρ(K_2n-k∇ H), where H is a 2(k-n)-regular graph, hold for k>n? (II) Does ρ(G)≤ρ(G(2n,k)) hold for k≤ n?

Theorem <ref> actually implies that B_n,δ+1^τ is the unique graph attaining the maximum spectral radius among all graphs of fixed order n with minimum degree δ and spanning tree packing number τ. Let G be a minimal graph with τ(G)≥ k and of order n; that is, G consists of exactly k edge-disjoint spanning trees with no extra edges. This implies that τ(G)=k and e(G)=k(n-1). We are interested in the maximum possible spectral radius of such a graph G. Let G be a minimal graph with τ(G)≥ k and of order n. For each n≥ 4 and each k≥ 2, determine the maximum possible spectral radius of G and characterize the extremal graphs.
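Before turning to upper bounds, here is a quick computational companion to Problem <ref> in the case k=1 (a sketch of our own): it builds G(2n,1) as described above, confirms by brute force that its perfect matching is unique, and evaluates the extremal spectral radius.

```python
import numpy as np

def G2n1(n):
    """G(2n,1): U = {u_1..u_n} independent, W = {w_1..w_n} a clique,
    and N(u_i) = {w_1, ..., w_i}; indices are shifted down by one."""
    A = np.zeros((2 * n, 2 * n), dtype=int)
    A[n:, n:] = 1 - np.eye(n, dtype=int)        # the clique on W
    for i in range(n):
        A[i, n:n + i + 1] = A[n:n + i + 1, i] = 1
    return A

def count_perfect_matchings(A):
    def rec(avail):
        if not avail:
            return 1
        v, rest = avail[0], avail[1:]
        # match the first unmatched vertex v with each of its neighbors w
        return sum(rec([x for x in rest if x != w]) for w in rest if A[v, w])
    return rec(list(range(len(A))))

A = G2n1(4)
print(count_perfect_matchings(A))       # 1: the matching u_i w_i is forced
print(max(np.linalg.eigvalsh(A * 1.0))) # rho(G(2n,1)) for 2n = 8
```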
The following sharp upper bound on the spectral radius was obtained by Hong, Shu and Fang<cit.> and Nikiforov<cit.>, independently. Let G be a graph on n vertices and m edges with minimum degree δ≥ 1. Then ρ(G) ≤ (δ-1)/2+√(2m-nδ+(δ+1)^2/4), with equality if and only if G is either a δ-regular graph or a bidegreed graph in which each vertex has degree either δ or n-1.

To attack Problem <ref>, notice that δ(G)≥ k and e(G)=k(n-1), so we can easily obtain an upper bound on ρ(G) from Theorem <ref>. However, this upper bound may not be tight, since for most values of n, G cannot be k-regular or a bidegreed graph in which each vertex has degree either k or n-1. To attain the maximum spectral radius, it seems that G should be obtained from K_2k by repeatedly adding a vertex together with k incident edges until G has n vertices; in particular, we guess that the extremal graph is K_k∇ (K_k∪ (n-2k)K_1).

As a dual problem to spanning tree packing, Nash-Williams <cit.> studied the forest covering problem, seeking the minimum number of forests that cover the entire graph. The arboricity a(G) is the minimum number of edge-disjoint forests whose union equals E(G). Let G be a connected graph. Then a(G)≤ k if and only if for any subgraph H of G, |E(H)|≤ k(|V(H)|-1). Naturally, we have the following problem. Find a tight spectral radius condition for a graph G of order n with a(G)≤ k and characterize the extremal graphs.

When n≤ 2k, Problem <ref> is trivial: any graph G of order n≤ 2k satisfies a(G)≤ k, so the extremal graph is K_n. To see this, note that for any subgraph H of G we have |E(H)|≤\binom{|V(H)|}{2}≤ k(|V(H)|-1) when n≤ 2k, and the conclusion follows from Theorem <ref>. The situation becomes more involved for n≥ 2k+1. In fact, this case is even stronger than Problem <ref>, since if G consists of exactly k edge-disjoint spanning trees with no extra edges, then a(G)=τ(G)=k. Notice that a(K_k∇ (K_k∪ (n-2k)K_1))=k and e(K_k∇ (K_k∪ (n-2k)K_1))=k(n-1); thus we guess that the extremal graph with respect to the spectral radius is also K_k∇ (K_k∪ (n-2k)K_1).

The eigenvalues of a graph are closely interconnected with graph parameters such as the chromatic number<cit.>, the clique number<cit.> and cut edges<cit.>. For more results, we refer the reader to <cit.>. It is worth noting that most researchers have focused primarily on the maximum spectral radius of a graph under a single fixed graph parameter, with relatively less attention paid to problems prescribing two graph parameters at once. Therefore, it is interesting to determine the maximum spectral radius and the corresponding extremal graphs among all graphs with two given parameters. Find a tight spectral radius condition for a graph G of order n with Δ(G)≤ r and β(G)≤ s.

§ REFERENCES

AH11 R. Aharoni, D. Howard, Size conditions of the existence of rainbow matchings, preprint.
AH16 R. Aharoni, D. Howard, A rainbow r-partite version of the Erdős-Ko-Rado theorem, Combin. Probab. Comput. 26 (2017) 321–337.
A.A A. Amahashi, On factors with all degrees odd, Graphs Combin. 1(2) (1985) 111–114.
Ao-Liu-Yuan G. Ao, R. Liu, J. Yuan, Spectral radius and spanning trees of graphs, Discrete Math. 346(8) (2023) 113400.
Akiyama-Jin-Era J. Akiyama, A. Jin, H. Era, On a {1,2}-factor of a graph, TRU Math. 16(2) (1980) 97–102.
Alon-Frankl N. Alon, P. Frankl, Turán graphs with bounded matching number, arXiv:2210.15076, 2022.
Akbari-Kirkland S. Akbari, S.J. Kirkland, On unimodular graphs, Linear Algebra Appl. 421(1) (2007) 3–15.
D.B-3 D. Bauer, H.J.
Broersma, H.J. Veldman, Not every 2-tough graph is Hamiltonian, Discrete Appl. Math. 99 (2000) 317–321.D.B D. Bal, A. Dudek, Z. Yilma, On the maximum number of edges in a hypergraph with a unique perfect matching, Discrete Math.311(21) (2011) 2577–2580.Barik-Behera S. Barik, S. Behera, On the smallest positive eigenvalue of bipartite unicyclic graphs with a unique perfect matching, Discrete Math. 346(2) (2023) 113252. Berge59 C. Berge, Sur le couplage maximum d'un graphe, C.R. Acad. Sci. Paris Ser.I Math. 247 (1958) 258–259. Berge-Vergnas C. Berge, M. Las Vergnas, On the existence of subgraphs with degree constraints, Nederl. Akad. Wetensch. Indag. Math.40(2) (1978) 165–176.Benediktovich V. Benediktovich, Spectral condition for Hamiltonicity of a graph, Linear Algebra Appl. 494 (2016) 70–79. Bondy-Murty J.A. Bondy, U.S.R. Murty, Graph Theory, in: Grad. Texts in Math, vol. 244, Springer, New York, 2008.A.B A. Brouwer, W. Haemers, Eigenvalues and perfect matchings, Linear Algebra Appl. 395 (2005) 155–162. Broersma-Tuinstra H. Broersma, H. Tuinstra, Independence trees and Hamilton cycles, J. Graph Theory 29(4) (1998) 227–237.Cayley A. Cayley, A theorem on trees, Q. J. Pure and Appl. Math 23 (1889) 376–378.Cho-Hyun-O-Park E. Cho, J. Hyun, S. O, J. Park, Sharp conditions for the existence of an even [a,b]-factor in a graph, Bull. Korean Math. Soc. 58(1) (2021) 31–46.V.C V. Chvátal, Tough graphs and Hamiltonian circuits, Discrete Math. 5 (1973) 215–228.Chen-Lu X. Chen,F. Lu, The maximal (signless Laplacian) spectral radius of connected graphs with given matching number, Ars Combin. 126 (2016) 237–247.Chen-Hou-Qian X. Chen, Y. Hou, J. Qian, Sufficient conditions for Hamiltonian graphs in terms of (signless Laplacian) spectral radius, Linear Multilinear Algebra 66(5) (2018) 919–936.S.C S.M. Cioabă, D. Gregory, W. Haemers, Matchings in regular graphs from eigenvalues, J. Combin. Theory Ser. B 99(2) (2009) 287–297.Cioaba2005 S.M. Cioabă, Perfect matchings, eigenvalues and expansion, C. R. Math. Acad. Sci. Soc. R. Can. 27(4) (2005) 101–104. SG07 S.M. Cioabă, D.A. Gregory, Large matchings from eigenvalues, Linear Algebra Appl. 422 (2007) 308–317.Cioaba-Gu S.M. Cioabă, X. Gu, Connectivity, toughness, spanning trees of bounded degree, and the spectrum of regular graphs, Czechoslovak Math. J. 66(141) (2016) 913–924.Cioaba-Wong S.M. Cioabă, W. Wong, Edge-disjoint spanning trees and eigenvalues of regular graphs, Linear Algebra Appl. 437 (2012) 630–647.coppww22 S.M. Cioabă, A. Ostuni, D. Park, S. Potluri, T. Wakhare, W. Wong, Extremal graphs for a spectral inequality on edge-disjoint spanning trees, Electron. J. Combin. 29 (2022), P2.56.Cunningham W.H. Cunningham, Optimal attack and reinforcement of a network, J. ACM 32 (1985) 549–561.Dirac1952 D.A. Dirac, Some theorems on abstract graphs, Proc. London Math. Soc. 2 (1952) 69–81.Duan-Wang-Liu C. Duan, L. Wang, X. Liu, Edge connectivity, packing spanning trees, and eigenvalues of graphs, Linear Multilinear Algebra 68(6) (2020) 1077–1095.M.Ellingham M. Ellingham, X. Zha, Toughness, trees, and walks, J. Graph Theory 33 (2000) 125–137.H.E-1 H. Enomoto, B. Jackson, P. Katerinis, A. Saito, Toughness and the existence of k-factors, J. Graph Theory 9(1) (1985) 87–95.Erdos P. Erdős, Remarks on a paper of Pósa, Magyar Tud. Akad. Mat. Kut. Int. Közl 7 (1962) 227–229.Erdos-Gallai P. Erdős, T. Gallai, On maximal paths and circuits of graphs, Acta Math. Acad. Sci. Hungar. 10 (1959) 337–356 D.F D. Fan, S. Goryainov, X. Huang, H. 
Lin, The spanning k-trees, perfect matchings and spectral radius of graphs, Linear and Multilinear Algebra 70(21) (2022) 7264–7275.Fan-Lin-Lu D. Fan, H. Lin, H. Lu, Spectral radius and [a,b]-factors in graphs, Discrete Math. 345(7) (2022) 112892.Fan-Gu-Lin D. Fan, X. Gu, H. Lin, Spectral radius and edge-disjoint spanning trees, J. Graph Theory 104 (2023) 697–711.Fan-Lin-Lu2023 D. Fan, H. Lin, H. Lu, Toughness, hamiltonicity and spectral radius in graphs, European J. Combin. 110 (2023) 103701.Faudree-Gould R. Faudree, R. Gould, L. Lesniak, Neighborhood conditions and edge-disjoint perfect matchings, Discrete Math. 91(1) (1991) 33–43.Feng-Yu-Zhang L. Feng, G. Yu, X. Zhang, Spectral radius of graphs with given matching number, Linear Algebra Appl. 422(1) (2007) 133–138.Fiedler-Nikiforov M. Fiedler, V. Nikiforov, Spectral radius and Hamiltonicity of graphs, Linear Algebra Appl. 432(9) (2010) 2170–2173.Garey-Johnson M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-completeness, W.H. Freeman & Company, San Francisco, 1979.Gallai1950 T. Gallai, On factorisation of graphs, Acta Math. Acad. Sci. Hung. 1 (1950) 133–153.Ge-Ning2020 J. Ge, B. Ning, Spectral radius and Hamiltonian properties of graphs II, Linear Multilinear Algebra 68 (2020) 2298–2315.Godsil1985 C. Godsil, Inverses of trees, Combinatorica 5(1) (1985) 33–39.Gu2014 X. Gu, Regular factors and eigenvalues of regular graphs, European J. Combin. 42 (2014) 15–25.Gu-Lai X. Gu, H.-J. Lai, P. Li, S. Yao, Edge-disjoint spanning trees, edge connectivity, and eigenvalues in graphs, J. Graph Theory 81(1) (2016) 16–29.GLMM M. Guo, H. Lu, X.Ma, X. Ma, Spectral radius and rainbow matchings of graphs, Linear Algebra Appl. 679 (2023), 30–37. Haemers W. Haemers, Interlacing eigenvalues and graphs, Linear Algebra Appl. 226/228 (1995) 593–616.Hall P. Hall, On representatives of subsets, J. London Math. Soc. 10 (1935) 26–30.Hendry G. Hendry, Maximum graphs with a unique k-factor, J. Combin. Theory Ser. B 37(1) (1984) 53–63. Hong-Gu-Lai-Liu Y. Hong, X. Gu, H. Lai, Q. Liu, Fractional spanning tree packing, forest covering and eigenvalues, Discrete Appl. Math. 213 (2016) 219–223.Hoffman A.J. Hoffman, On eigenvalues and colorings of graphs, in: Graph Theory and Its Applications, Proc. Advanced Sem., Math. Research Center, Univ. of Wisconsin, Madison, WI, 1969, Academic Press, 1970. Hong-Shu-Fang Y. Hong, J. Shu, K. Fang, A sharp upper bound of the spectral radius of graphs, J. Combin. Theory Ser. B 81 (2001) 177–183.Hobbs91 A.M. Hobbs, Network survivability, Applications of Discrete Mathematics (Edited by J.G. Michaels and K.H. Rosen), 1991, McGraw-Hill.Hu-Wang-Duan Y. Hu, L. Wang, C. Duan, Spectral conditions for edge connectivity and spanning tree packing number in (multi-)graphs, Linear Algebra Appl. 664 (2023) 324–348.Jiang-Yu-Fang G. Jiang, G. Yu, Y. Fang, Spectral conditions and Hamiltonicity of a balanced bipartite graph with large minimum degree, Appl. Math. Comput. 356 (2019) 137–143.Johann P. Johann, On the structure of graphs with a unique k-factor, J. Graph Theory 35 (2000) 227–243.JK20 F. Joos and J. Kim, On a rainbow version of Dirac's theorem,Bull. London Math. Soc. 52 (2020) 498–504.Kaneko2003 A. Kaneko, A necessary and sufficient condition for the existence of a path factor every component of which is a path of length at least two, J. Comb. Theory Ser. B 88 (2003) 195–218.Kaneko A. Kaneko, Spanning trees with constraints on the leaf degree, Discrete Appl. Math. 115 (2001) 73–76.Kano-Katona-Kiraly M. Kano, G.Y. 
Consider the following Lane-Emden system with Dirichlet boundary conditions:

-Δ U = |V|^β-1V,  -Δ V = |U|^α-1U in Ω,  U=V=0 on ∂Ω,

in a bounded domain Ω, for (α,β) subcritical. We study the asymptotic behavior of least-energy solutions when β→∞, for any fixed α which, in the case N≥ 3, is smaller than 2/(N-2). We show that these solutions converge to least-energy solutions of a semilinear equation involving the 1-bilaplacian operator, establishing a new relationship between these objects. As a corollary, we deduce the asymptotic behavior of solutions to p-bilaplacian Lane-Emden equations as the power in the nonlinearity goes to infinity. For the proof, we rely on the reduction-by-inversion method and on tools from nonsmooth analysis, considering an auxiliary nonlinear eigenvalue problem. We characterize its value in terms of the Green function, and prove a Faber-Krahn-type result. In the case of a ball, we can characterize explicitly the eigenvalue, as well as the limit profile of least-energy solutions to the system as β→∞.

Keywords: Asymptotic estimates, Faber-Krahn-type inequality, Green function, Lane-Emden systems, p-biharmonic equation, second-order nonlinear eigenvalues.

MSC: 35B40, 35G15, 35J30, 35J47, 35P30, 49J52, 49Q10

§ INTRODUCTION

In this paper we study the asymptotic behavior of least-energy solutions of the following Lane-Emden system

-Δ U_β = |V_β|^β-1V_β,  -Δ V_β = |U_β|^α-1U_β in Ω,  U_β=V_β=0 on ∂Ω,

where Ω is a bounded domain in ℝ^N with N≥ 1 and α>0, β>0 satisfy α·β≠ 1 and

1/(α+1) + 1/(β+1) > (N-2)/N,

that is, the pair (α,β) is below the critical hyperbola. The case α·β=1 is different in nature, being an eigenvalue problem (see <cit.>), which we do not consider here. We are mainly interested in characterizing the limit of (U_β,V_β) as the power β tends to infinity, for any fixed α>0 if N=1,2, or α∈(0,2/(N-2)) if N≥3.

The existence of least-energy solutions of (<ref>) is well known, and several methods to prove it (and to study qualitative properties) have been developed; see the survey <cit.>. In particular, the reduction-by-inversion method establishes a bijection between classical solutions of (<ref>) and critical points of the energy functional J_α,β: W^2,(β+1)/β(Ω) ∩ W_0^1,(β+1)/β(Ω) → ℝ given by

J_α,β(U) := β/(β+1) ∫_Ω |Δ U|^(β+1)/β dx - 1/(α+1) ∫_Ω |U|^α+1 dx.

More concretely, given U ∈ W^2,(β+1)/β(Ω) ∩ W^1,(β+1)/β_0(Ω) and V := -|Δ U|^1/β-1Δ U, the pair (U,V) is a classical solution of (<ref>) if, and only if, J_α,β'(U)=0 (see, for instance, <cit.>, whose proof can be found in <cit.> and <cit.>). Observe that a critical point of J_α,β is a solution of the Navier (β+1)/β-bilaplacian equation

Δ(|Δ u|^1/β-1Δ u) = |u|^α-1u in Ω,  u=Δ u=0 on ∂Ω.

Within this framework, one says that (U_β,V_β)≠(0,0) is a least-energy solution of (<ref>) if

J_α,β(U_β) = c_α,β := inf{J_α,β(U): U is a nontrivial critical point of J_α,β} = min{J_α,β(U): (U,V) is a nontrivial solution of (<ref>)}.

We point out that (<ref>) implies that the embedding W^2,(β+1)/β(Ω)↪L^α+1(Ω) is compact, and that the homogeneities on the left- and right-hand sides of (<ref>) do not coincide. For other (equivalent) characterizations of the least-energy level, see, for instance, <cit.>.

Our asymptotic analysis establishes a new link between the limit profiles (as β→∞) of least-energy solutions of the Hamiltonian system (<ref>) and the (Navier) 1-bilaplacian equation

Δ(Δ u/|Δ u|) = |u|^α-1u in Ω,  u=Δ u/|Δ u|=0 on ∂Ω,

for any fixed α>0 if N=1,2, or α∈(0,2/(N-2)) if N≥3.
Equation (<ref>) is a formalism to represent a critical point of a nonsmooth variational problem. Indeed, note that (<ref>) cannot be understood in a pointwise sense: Δu/|Δu| is constant (and, at least formally, harmonic) on {Δu≠0}, while it is not well defined on {Δu=0}. This suggests that Δu should be thought of not as a function, but rather as a distribution, and this allows one to give a consistent notion of solution of (<ref>). To be more precise, following <cit.>, let

BL_0(Ω) := {u ∈ W_0^1,1(Ω): |Δ u|_T < ∞},

where

|Δ u|_T = sup{∫_Ω u Δφ : φ∈ C^∞_c(Ω), |φ|_∞≤ 1}

is the total variation of the Radon measure Δu, see <cit.> and Lemma <ref>. We say that u∈BL_0(Ω) is a solution of (<ref>) if there is v∈W^1,1_0(Ω)∩L^∞(Ω) such that

-Δ v = |u|^α-1u in Ω (in the distributional sense),  |v|_∞≤ 1,  and  ∫_Ω u(-Δ v) = |Δ u|_T.

In this sense, the function v gives a consistent meaning to the quotient -Δu/|Δu|. The energy of a solution u∈BL_0(Ω) of (<ref>) is given by

Φ(u) = |Δ u|_T - 1/(α+1)|u|_α+1^α+1,  |u|_α+1 := (∫_Ω |u|^α+1)^1/(α+1),

and we call u a least-energy solution of (<ref>) if

Φ(u) = min{Φ(w): w is a nontrivial solution of (<ref>)}.

The functional Φ can be treated variationally using techniques from nonsmooth analysis, see Section <ref>. The following result characterizes the limit profiles and establishes a bridge between (<ref>) and (<ref>). Let G_Ω denote the Dirichlet Green function for the Laplacian in Ω, and denote

1^* = N/(N-1) if N≥ 2,  1^* = ∞ if N=1;  1^** = N/(N-2) if N≥ 3,  1^** = ∞ if N=1,2.

Let Ω⊂ℝ^N be a bounded domain of class C^2,γ for some γ∈(0,1]. Let α∈(0,1^**-1), β>0, and let (U_β,V_β) be a least-energy solution of (<ref>). There is a least-energy solution U_∞∈ BL_0(Ω) of (<ref>) such that, up to a subsequence, as β→∞,

U_β → U_∞ in W^1,r_0(Ω) for every r∈[1,1^*), and |Δ U_β|_1 → |Δ U_∞|_T.

Furthermore, let V_∞∈ W^2,(α+1)/α(Ω)∩ W_0^1,(α+1)/α(Ω) be the unique solution of

-Δ V_∞ = |U_∞|^α-1U_∞ in Ω,  V_∞=0 on ∂Ω.

Then, up to a subsequence, as β→∞,

V_β → V_∞ weakly in W^2,(α+1)/α(Ω), and strongly in W^1,η_0(Ω) and in C^m,σ(Ω),

for some η>1^* and σ∈(0,1) depending on α, with m=1 if α<1^*-1 and m=0 if α≥ 1^*-1.

If Ω is the unit ball B_1, we can obtain more explicit formulas. We present them for N≥3 only, but it is possible to do the same for N=1,2.

Consider N≥ 3. Under the assumptions and notation of Theorem <ref>, it holds

U_∞(x) = A_N,α(|x|^2-N-1) for x∈ B_1,  where A_N,α := (2N Γ(2/(N-2)) / (Γ(α+2) Γ(2/(N-2)-α)))^1/α.

Moreover, the limit (U_∞,V_∞) is unique, and the convergence (U_β,V_β)→(U_∞,V_∞) holds without necessarily considering subsequences.

For the exact statement regarding the convergence of V_β to V_∞ in Theorems <ref> and <ref>, see Corollary <ref> below. We remark that Theorems <ref> and <ref> can also be interpreted in the setting of the Navier (α+1)/α-bilaplacian Lane-Emden equations when the power of the nonlinearity goes to infinity.

Let Ω⊂ℝ^N be a bounded domain of class C^2,γ for some γ∈(0,1). Let α∈(0,1^**-1), β>0, and let v_β be a least-energy solution of

Δ(|Δ v_β|^1/α-1Δ v_β) = |v_β|^β-1v_β in Ω,  v_β=Δ v_β=0 on ∂Ω.

Then v_β→ V_∞ as β→∞, where the convergence and V_∞ are as in Theorem <ref>.

This result is new even for N≤ 3 and α=1, where (<ref>) reduces to the Navier bilaplacian equation

Δ^2 v_β = |v_β|^β-1v_β in B_1,  v_β=Δ v_β=0 on ∂ B_1.

It is interesting to note that the behavior described in Corollary <ref> for (<ref>) is different from the results obtained in <cit.>, where the asymptotic profile of the solution v_β of (<ref>) is studied as β→∞ in dimension N=4. We shall return to this issue later in this introduction, when we discuss the previous literature.
Next, we state Theorem <ref> in the setting of the Navier (β+1)/β-bilaplacian equation (<ref>).

Let Ω⊂ℝ^N be a bounded domain of class C^2,γ for some γ∈(0,1). Let α∈[0,1^**-1), and let u_β be a least-energy solution of

Δ(|Δ u_β|^1/β-1Δ u_β) = |u_β|^α-1u_β in Ω,  u_β=Δ u_β=0 on ∂Ω.

Then, up to a subsequence and with U_∞ as in Theorem <ref>, u_β→ U_∞ in W^1,r(Ω) as β→∞ for r∈[1,1^*), and |Δ u_β|_1→ |Δ U_∞|_T. Moreover, if Ω is the unit ball, then the convergence holds with U_∞ as in Theorem <ref>.

To show Theorems <ref> and <ref>, we consider an auxiliary nonlinear eigenvalue problem, which is of independent interest. To be more precise, let p,q be such that

q≥1, p≥ 1, (N-2p)q<Np,

and let

Λ_p,q := inf_u∈ W^2,p_Δ(Ω)∖{0} |Δ u|_p/|u|_q = inf{|Δ u|_p: u∈ W^2,p_Δ(Ω), |u|_q=1},

where W^2,p_Δ(Ω):={u∈ W^1,p_0(Ω): Δ u∈ L^p(Ω)}. We summarize our main results regarding this auxiliary problem in the next theorem.

Let Ω be convex or of class C^1,γ for some γ∈ (0,1]. Then:

(a) if p>1 and (N-2p)q<Np, then Λ_p,q is achieved by a function u_p,q∈ W^2,p_Δ(Ω) with |u_p,q|_q=1, and

Δ(|Δ u_p,q|^p-2Δ u_p,q) = Λ_p,q|u_p,q|^q-2u_p,q in Ω,  u_p,q=Δ u_p,q=0 on ∂Ω.

(b) if p=1 and q∈[1,1^**), then

Λ_1,q = inf_u∈ BL_0(Ω)∖{0} |Δ u|_T/|u|_q,

and Λ_1,q is achieved by a function u_1,q∈ BL_0(Ω) with |u_1,q|_q=1. Moreover,

Δ(Δ u_1,q/|Δ u_1,q|) = Λ_1,q|u_1,q|^q-2u_1,q in Ω,  u_1,q=Δ u_1,q/|Δ u_1,q|=0 on ∂Ω.

Moreover, for (p,q) satisfying (<ref>), the map (p,q)↦Λ_p,q is continuous and, for q∈[1,1^**) and (p_n,q_n)→(1,q), given u_n a minimizer for Λ_p_n,q_n with |u_n|_q_n=1, there is a minimizer u_1,q of Λ_1,q, with |u_1,q|_q=1, such that, up to a subsequence,

u_n→ u_1,q in W^1,r_0(Ω) for every r∈[1,1^*) and |Δ u_n|_T→ |Δ u_1,q|_T.

We make some comments on the proof of this theorem, as well as on its connection with Theorem <ref>. Regarding items (a) and (b), if p>1, then W^2,p_Δ(Ω)=W^2,p(Ω)∩ W^1,p_0(Ω), but W^2,1(Ω)∩ W^1,1_0(Ω) is strictly contained in W^2,1_Δ(Ω). Furthermore, the space W^2,1(Ω) is not reflexive, and in particular one cannot guarantee that a minimizing sequence has a weakly convergent subsequence. A consequence of these facts is that Λ_p,q is achieved in W^2,p_Δ(Ω) for p>1, but Λ_1,q is not achieved in W^2,1_Δ(Ω); however, we show that, for q<1^**, Λ_1,q is achieved in the larger space BL_0(Ω) (see Lemma <ref> for more details). This is due to the compact embeddings of BL_0(Ω) into L^q(Ω), see Proposition <ref>. We point out that the case p=1, q=1 has been previously shown in <cit.>, and we refer to that paper for the exact notion of solution in this situation.

The advantage of considering (<ref>) is that a minimizer u_p,q for Λ_p,q yields, up to multiplication by a constant, a least-energy critical point of (<ref>) with p=(β+1)/β and q=α+1; in particular, (U_β,V_β)=(u_p,q, -|Δ u_p,q|^p-2Δ u_p,q) is a solution of (<ref>). Similarly, a minimizer of Λ_1,q yields, up to multiplication by a constant, a least-energy solution of (<ref>), see Proposition <ref>. The continuity of the map (p,q)↦Λ_p,q proved in Theorem <ref>, and the convergence of the associated minimizers, are the main tools to prove the convergence stated in Theorem <ref>. In particular, this also provides the convergence of the nonlinear eigenvalue problem for Λ_p,q to the linear one for Λ_1,1 (see Remark <ref> for more details). We point out that our work also provides a rigorous meaning to the formal limit

Δ^2_1 u := Δ(Δ u/|Δ u|) = lim_p→ 1 Δ(|Δ u|^p-2Δ u) =: lim_p→ 1 Δ^2_p u.

This complements the work <cit.>, where the authors study the behavior of Λ_p,p as p→∞, in connection with the ∞-bilaplacian.
While minimizers of Λ_p,q are well characterized for p>1 (see for instance <cit.> and references therein), this is not the case for p=1. In this paper we also provide a characterization of minimizers of Λ_1,q. To be more precise, recall that G_Ω(·,·) denotes the Dirichlet Green function of the Laplacian in Ω and that |·|_q denotes the L^q-norm.

Let Ω be convex or of class C^1,γ for some γ∈(0,1] and q∈[1,1^**). Then,

Λ_1,q(Ω) = 1/|G_Ω(·,x_M)|_q,

where x_M∈Ω is such that

|G_Ω(·,x_M)|_q = max_x∈Ω |G_Ω(·,x)|_q.

Moreover, the function

u_1(x) = G_Ω(x,x_M)/|G_Ω(·,x_M)|_q for x∈Ω

achieves Λ_1,q, namely, u_1∈ BL_0(Ω), |u_1|_q=1, and |Δ u_1|_T=Λ_1,q. If the maximum point x_M is unique, then u_1 is, up to a multiplicative constant, the only minimizer achieving Λ_1,q. In general, any other possible function u achieving Λ_1,q is, up to sign, positive in Ω, and μ:=-Δ u is a positive Radon measure.

This is an extension of <cit.>, which considered the case q=1; in this case, |G_Ω(·,x)|_1=Ψ(x), where Ψ is the torsion function of Ω. Note that Λ_p,q(Ω) provides the best constant for the continuous (and compact) embedding W^2,p_Δ(Ω)↪ L^q(Ω), and also for BL_0(Ω)↪ L^q(Ω). The explicit value of Λ_1,q(B_1) is computed in Theorem <ref> (see also Remark <ref> regarding Λ_1,1(B_1)). As stated in the previous theorem, when the maximum point x_M (see (<ref>)) is unique, then (<ref>) is the unique minimizer of Λ_1,q, up to sign and normalization in the L^q sense (see Proposition <ref> for the proof of this particular statement). Using Talenti's comparison principle, we can show that x_M=0 is a maximum point in the case of the unit ball B_1 centered at the origin (Proposition <ref>), which yields the following result.

The function G_B_1(x,0) is a minimizer of Λ_1,q(B_1). In particular, for N≥ 3 we have that |x|^2-N-1 achieves Λ_1,q(B_1) for any q∈[1,N/(N-2)), and

Λ_1,q(B_1) = (4π^N/2/Γ(N/2-1)) (Γ(N/(N-2))Γ(N/2+1) / (π^N/2 Γ(N/(N-2)-q) Γ(q+1)))^1/q.

It is interesting to note that the minimizer |x|^2-N-1 of Λ_1,q(B_1) is independent of q. Here, the symmetries of the ball play an important role. However, in more general domains, the concentration point x_M might also depend on q, and one would obtain different minimizers for every q∈[1,N/(N-2)), see Remark <ref> in this regard. We can also show the following Faber-Krahn-type inequality.

For any Ω⊆ℝ^N convex or of class C^1,γ for some γ∈(0,1], and such that |Ω|=|B_1|, it holds

Λ_1,q(B_1) ≤ Λ_1,q(Ω).

The results in this paper complement other asymptotic characterizations that have been previously studied in the literature for systems and equations. First, we mention the case of the Lane-Emden equation

-Δ u_β = |u_β|^β-1u_β in Ω,  u_β=0 on ∂Ω.

Note that (<ref>) reduces to (<ref>) when α=β and U_β=V_β>0. If N≥ 3, then 0<β+1<2^*=2N/(N-2), and therefore one cannot consider the asymptotic behavior of solutions as β→∞. However, if N=2, then (<ref>) is subcritical for all β>1, and the asymptotic profile of the solution u_β is well understood. The interest in these characterizations started with the seminal works by Ren and Wei <cit.>. It is known that least-energy solutions exhibit a single concentration point phenomenon (solutions go to zero locally uniformly outside the concentration point as β→∞).
Furthermore, these solutions do not blow up; they remain uniformly bounded and

|u_β|_∞ := sup_Ω |u_β| → √e as β→∞,

see <cit.>. If Ω is a ball, then a sharp asymptotic profile is known, namely

β u_β → 4√e log(1/|x|) in C^1(Ω∖{0}) as β→∞,

see <cit.> (see also <cit.> for a similar result regarding nodal solutions).

If α=1, then (<ref>) reduces to the subcritical Navier bilaplacian equation

Δ^2 v_β = |v_β|^β-1v_β in Ω,  v_β=Δ v_β=0 on ∂Ω.

When Ω⊂ℝ^4 is a bounded and smooth domain, an asymptotic analysis as β→∞ is done in <cit.>; in particular, it is shown that, up to a subsequence, there is x_0∈Ω (which is a critical point of the Robin function) such that v_β→ C G(·,x_0) in C^4(Ω∖{x_0}), where C>0 is an explicit constant and G is the Navier Green function of the bilaplacian in Ω. Note that this is a very different behavior with respect to the one described in Corollary <ref> for N≤ 3 and α=1. In particular, the limit profile in the case N=4 is unbounded in L^∞(Ω), whereas the one in N=3 is a function in W^2,2(Ω)∩ W_0^1,2(Ω)∩ C^0,σ(Ω). This difference is mainly due to the fact that the Sobolev critical exponent for the bilaplacian is 2^**=2N/(N-4), and N=4 is the transition between 2^** being finite or infinite.

Regarding the Lane-Emden system (<ref>), the following results are known. In <cit.> the author considers the case α=2/(N-2) (N≥ 3) and β→∞; namely, (α,β) approaches the asymptote of the critical hyperbola (portrayed in Figure <ref>). The main result in <cit.> shows that, for a smooth convex and bounded domain Ω⊂ℝ^N with N≥ 3, if there is only one concentration point x_0∈Ω, then (up to a subsequence)

U_β/(∫_Ω |U_β|^α)^2/(N-2) → W in C^2(Ω∖{x_0}) as β→∞,

where W is a solution of -Δ W = G_Ω^2/(N-2)(x,x_0) in Ω with W=0 on ∂Ω, and G_Ω is the Dirichlet Green function of the Laplacian in Ω. This is also a different behavior compared to the description in Theorems <ref> and <ref>.

For N=2, the behavior of positive solutions of (<ref>) in smooth bounded domains when both α and β tend to infinity has also been studied in some particular cases in <cit.>. In <cit.> it is shown that, if α=β+θ_β with θ_β→θ∈ℝ as β→∞, then there is a finite set of concentration points S={x_1,…,x_k}⊂Ω such that, up to a subsequence,

β U_β, β V_β → 8π√e ∑_i=1^k G_Ω(x,x_i) in C^2_loc(Ω∖ S).

The concentration points x_i are also characterized, see <cit.>. On the other hand, in <cit.>, positive solutions of (<ref>) in planar bounded C^2 domains are shown to be uniformly bounded as β→∞ whenever K^-1≤α/β≤ K for some K>1. In <cit.> it is also shown that this condition cannot be removed: if Ω is a disc and α=1, then any positive solution (U_β,V_β) of (<ref>) satisfies C^-1log(β) ≤ |U_β|_∞ ≤ C log(β) as β→∞.

We also mention that the limit profiles of solutions of (<ref>) as (α,β) goes to a point (α_0,β_0) on the critical hyperbola (namely, 1/(α_0+1)+1/(β_0+1) = (N-2)/N) have been studied in <cit.>. In this case, both components exhibit a blow-up (concentration) behavior at critical points of the Robin function. Furthermore, a suitable rescaling of this solution converges to a solution of the Lane-Emden system in ℝ^N. Although these limit profiles are not explicit, their decay rates at infinity are known with precision. The paper is organized as follows.
In Section <ref> we study the existence of minimizers for the auxiliary nonlinear eigenvalue problem (<ref>), its link with solutions of (<ref>), and the proof of Theorems <ref> and <ref>. The proof of Theorem <ref> can be found in Section <ref>, while Section <ref> is devoted to the proof of Theorems <ref> and <ref> and of Proposition <ref>. Finally, we include a self-contained appendix with several known useful results regarding the space BL_0(Ω).

§.§ Notation

Let Ω⊂ℝ^N be a bounded Lipschitz domain (N≥ 3) and let α,β∈ℝ be positive and subcritical, that is,

α,β>0,  1/(α+1)+1/(β+1) > (N-2)/N,

see <cit.>. Let |u|_t=(∫_Ω |u|^t)^1/t for t≥ 1, and |u|_∞=sup_Ω |u|. For p≥ 1, we denote

W^2,p_Δ(Ω) := {u∈ W^1,p_0(Ω): Δ u∈ L^p(Ω)}.

Under additional geometrical or regularity assumptions on Ω (see Lemma <ref> below for the details), this is a Banach space when endowed with the norm |Δ·|_p. Moreover,

W^2,p_Δ(Ω) = W^2,p(Ω)∩W^1,p_0(Ω) for p>1,

and the norm |Δ·|_p is equivalent to the standard W^2,p(Ω) one. However, W^2,1(Ω)∩W^1,1_0(Ω) ⊊ W^2,1_Δ(Ω). In <cit.> one finds an example, for Ω=B_1 and N≥ 3, of a function u∈ W^2,1_Δ(Ω) which is not in W^2,1(Ω). Moreover, it is shown that, in any domain, an inequality of the form u_W^2,1≤ C|Δ u|_1, with C>0 a universal constant, may never hold (see Remark <ref> for more details). Following <cit.>, given u∈ L^1_loc(Ω), we define

|Δ u|_T = sup{∫_Ω u Δφ : φ∈ C^∞_c(Ω), |φ|_∞≤ 1}

and

BL_0(Ω) := {u ∈ W_0^1,1(Ω): |Δ u|_T < ∞},

which is a Banach space when endowed with the norm |Δ·|_T (see Lemma <ref>).

§ AN EIGENVALUE PROBLEM

Let p,q be such that

q≥ 1, p≥ 1, (N-2p)q < Np.

We first study an auxiliary nonlinear eigenvalue problem. Given p and q satisfying (<ref>), let

Λ_p,q := inf_u∈ W^2,p_Δ(Ω)∖{0} |Δ u|_p/|u|_q = inf{|Δ u|_p: u∈ W^2,p_Δ(Ω), |u|_q=1}.

In particular, for p=1,

Λ_1,q := inf_u∈ W^2,1_Δ(Ω)∖{0} |Δ u|_1/|u|_q = inf_u∈ W^2,1_Δ(Ω)∖{0} |Δ u|_T/|u|_q,

where we used the fact that |Δ u|_1=|Δ u|_T for u∈ W^2,1_Δ(Ω) (see Lemma <ref>). For the following, recall the definition of 1^** in (<ref>).

Let Ω be convex or of class C^1,γ for some γ∈(0,1]. If q∈[1,1^**), then

Λ_1,q = inf_u∈ BL_0(Ω)∖{0} |Δ u|_T/|u|_q,

and Λ_1,q is achieved in BL_0(Ω); namely, there is u_1∈ BL_0(Ω) such that |u_1|_q=1 and |Δ u_1|_T=Λ_1,q.

We start by checking (<ref>). Since W^2,1_Δ(Ω)⊂ BL_0(Ω), we have that

Λ_1,q ≥ inf_u∈ BL_0(Ω)∖{0} |Δ u|_T/|u|_q.

Conversely, given u∈ BL_0(Ω)∖{0}, by Lemma <ref> there exists (u_n)_n∈ℕ⊂ C^∞(Ω)∩ C(Ω)∩ BL_0(Ω) such that u_n→ u strongly in W^1,1_0(Ω) and |Δ u_n|_T→ |Δ u|_T. By Lemma <ref>, Δ u_n∈ L^1(Ω), and so u_n∈ W^2,1_Δ(Ω) for any n∈ℕ. Moreover, since (u_n)_n∈ℕ is bounded in BL_0(Ω) and q<1^**, by the compact embedding BL_0(Ω)↪ L^q(Ω) (see Proposition <ref>), we have u_n→ u in L^q(Ω), up to a subsequence. Then,

|Δ u|_T/|u|_q = lim_n→∞ |Δ u_n|_T/|u_n|_q = lim_n→∞ |Δ u_n|_1/|u_n|_q ≥ Λ_1,q.

Identity (<ref>) now follows by taking the infimum over u∈ BL_0(Ω). Finally, since

Λ_1,q = inf{|Δ u|_T: u∈ BL_0(Ω), |u|_q=1},

this infimum is actually a minimum, by using again the compact embedding BL_0(Ω)↪ L^q(Ω) combined with the lower semicontinuity of |Δ·|_T (Lemma <ref>).

The previous result was known in the case q=1, see <cit.>.

Let Ω be convex or of class C^1,γ, for some γ∈(0,1]. If p>1, then Λ_p,q is achieved by a function u∈ W^2,p_Δ(Ω) with |u|_q=1, and

Δ(|Δ u|^p-2Δ u) = Λ_p,q|u|^q-2u in Ω,  u=Δ u=0 on ∂Ω.

Under (<ref>), observe that the embedding W^2,p(Ω)↪ L^q(Ω) is always compact. Indeed, in the case 2p<N, the last condition in (<ref>) is equivalent to q<Np/(N-2p); in any case, the compactness follows from Lemma <ref> below.
For p>1, the space W^2,p_Δ(Ω) endowed with the norm |Δ·|_p is reflexive (see Lemma <ref>), hence the infimum Λ_p,q is always achieved by a function in W^2,p_Δ(Ω), by the direct method of the Calculus of Variations. The remaining statement is also standard.

§.§ The 1-bilaplacian Lane-Emden equation

Let Ω be a bounded domain which is either convex or with C^1,γ boundary, for some γ∈(0,1]. Following the ideas and notation of <cit.>, we prove the existence of solutions to the (Navier) 1-bilaplacian equation

Δ(Δ u/|Δ u|) = |u|^q-2u in Ω,  u=Δ u/|Δ u|=0 on ∂Ω,

for q∈(1,1^**), making a connection with the nonlinear eigenvalue Λ_1,q. To give a consistent notion of solution for (<ref>), we introduce some notation. Consider the functional E: L^q(Ω)→[0,∞] given by

E(u) := |Δ u|_T if u∈ BL_0(Ω),  E(u) := ∞ if u∈ L^q(Ω)∖BL_0(Ω).

Observe that E does not always coincide with |Δ·|_T, since BL_0(Ω) is strictly contained in the set {u∈L^q(Ω): |Δ u|_T<∞}, because the definition (<ref>) of BL_0(Ω) directly encodes homogeneous boundary conditions.

The functional E is convex and lower semicontinuous in L^q(Ω) for q∈(1,1^**).

The convexity follows from the definition of E and the convexity of the functional |Δ·|_T. As for the lower semicontinuity, let (u_n)⊂L^q(Ω) be a sequence such that u_n→ u in L^q(Ω); in particular, the convergence takes place also in L^1(Ω). Then either lim inf E(u_n)=∞ or, on the contrary, we may consider (u_n')⊂BL_0(Ω), a subsequence of (u_n), such that lim inf E(u_n)=lim inf |Δ u_n'|_T<∞. Then, by Lemma <ref>, we have u∈BL_0(Ω) and E(u)=|Δ u|_T≤lim inf |Δ u_n|_T.

Recall that the subdifferential of E at u∈L^q(Ω) (see, for example, <cit.> and <cit.>) is given by

∂E(u) := {f∈L^q'(Ω)=L^q/(q-1)(Ω): E(φ)≥E(u)+∫_Ω f(φ-u) for any φ∈L^q(Ω)}.

Let u∈BL_0(Ω). Then f∈∂E(u) if and only if there is v∈W^1,1_0(Ω)∩L^∞(Ω) such that |v|_∞≤1, -Δ v=f∈L^q/(q-1)(Ω) in the sense of distributions, and E(u)=-∫_Ω uΔ v.

The proof is basically the same as in <cit.> (which, in turn, follows closely the proof of <cit.>). The only change is that the functional E in <cit.> is considered over L^1^*(Ω), but exactly the same proof holds if E is considered instead on L^q(Ω), noting that q'=q/(q-1).

Next, let G: L^q(Ω)→ℝ be given by

G(u) := (1/q)∫_Ω |u|^q.

Since q>1, we have that G is convex and of class C^1, with

G'(u)φ = ∫_Ω |u|^q-2uφ for any φ∈L^q(Ω),  and  ∂G(u) = {|u|^q-2u},

see <cit.>. Now we define the Euler-Lagrange functional associated with (<ref>), given by

Φ: L^q(Ω)→(-∞,∞],  Φ(u)=E(u)-G(u).

Observe that its effective domain, that is, the set of elements in the domain at which E is finite, is BL_0(Ω). Following <cit.>, we say that u∈BL_0(Ω) is a critical point of Φ if

G'(u)∈∂E(u),

that is, if

E(f) ≥ E(u) + ∫_Ω |u|^q-2u(f-u) for every f∈L^q(Ω).

Given u∈BL_0(Ω), we have that u is a critical point of Φ if and only if there is v∈W^1,1_0(Ω)∩L^∞(Ω) such that

|v|_∞≤1,  -Δ v = |u|^q-2u in the sense of distributions in Ω,  and  -∫_Ω uΔ v = |Δ u|_T.

This is a direct consequence of Proposition <ref> and the definition of critical point.

We say that u∈BL_0(Ω) is a solution to (<ref>) if it is a critical point of Φ (or, equivalently, if (<ref>) holds true). We say that u∈BL_0(Ω)∖{0} is a least-energy solution to (<ref>) if u is a solution and

Φ(u) = c_q := min{Φ(w): w is a nontrivial solution of (<ref>)}.

The function v in (<ref>) gives a consistent meaning to the quotient -Δ u/|Δ u|. Observe that BL_0(Ω)↪L^q(Ω) for q∈[1,1^**), see Proposition <ref>.
Therefore, the integral ∫_Ω uΔ v is well defined.

For each λ>0, one can define analogously the notions of solution and of least-energy solution for the 1-biharmonic equation Δ(Δ u/|Δ u|)=λ|u|^q-2u with Navier boundary conditions.

For q∈(1,1^**), let u_1∈BL_0(Ω)∖{0} achieve Λ_1,q with |u_1|_q=1. Then

Δ(Δ u_1/|Δ u_1|) = Λ_1,q|u_1|^q-2u_1 in Ω,  u_1=Δ u_1/|Δ u_1|=0 on ∂Ω.

Moreover, u=Λ_1,q^1/(q-1)u_1 is a least-energy solution of (<ref>) and

c_q = (q-1)/q · Λ_1,q^q/(q-1),

with Φ and c_q as in (<ref>).

Before providing the proof of this result, we recall the following version of the Lagrange multiplier rule for nonsmooth convex functionals.

Let X be a Banach space, let E: X→(-∞,∞] be a convex functional, and let G: X→ℝ be convex and continuous. Assume there exists u∈X such that

* E(u)=min{E(v): v∈X, G(v)=1};
* there exists ũ∈X such that E(u+ũ)<E(u), G(u+ũ)<1, G(u-ũ)<∞.

Then

∂G(u) ⊂ ⋃_t≥0 t∂E(u).

Observe that

0 < Λ_1,q = min{E(u): u∈L^q(Ω), |u|_q^q=1} < ∞,

which, by Lemma <ref>, is achieved at some u_1∈BL_0(Ω)⊆L^q(Ω) with |u_1|_q=1. By applying Lemma <ref> with X=L^q(Ω), E as in (<ref>), G as in (<ref>), and ũ=-u_1, and recalling (<ref>), we have

|u_1|^q-2u_1 ∈ ⋃_t≥0 t∂E(u_1).

In particular, there are t≥0 and f∈∂E(u_1)⊂L^q/(q-1)(Ω) such that |u_1|^q-2u_1=tf. Since u_1≠0, then t>0. Moreover, by Proposition <ref>, there is v∈W^1,1_0(Ω)∩L^∞(Ω) with |v|_∞≤1, -Δ v=f in the distributional sense, and E(u_1)=-∫_Ω u_1Δ v. Then we have that -Δ v=t^-1|u_1|^q-2u_1 and, since |u_1|_q=1,

Λ_1,q = |Δ u_1|_T = E(u_1) = -∫_Ω u_1Δ v = t^-1∫_Ω |u_1|^q = t^-1.

In particular, u_1 is a solution to (<ref>). If u=Λ_1,q^1/(q-1)u_1, then

-∫_Ω uΔ v = -Λ_1,q^1/(q-1)∫_Ω u_1Δ v = Λ_1,q^1/(q-1)|Δ u_1|_T = |Δ(Λ_1,q^1/(q-1)u_1)|_T = |Δ u|_T

and

-Δ v = Λ_1,q|u_1|^q-2u_1 = |Λ_1,q^1/(q-1)u_1|^q-2 Λ_1,q^1/(q-1)u_1 = |u|^q-2u in Ω.

This yields that u is a solution of (<ref>). A direct calculation shows that

Φ(u) = Φ(Λ_1,q^1/(q-1)u_1) = Λ_1,q^1/(q-1)|Δ u_1|_T - (1/q)Λ_1,q^q/(q-1)|u_1|_q^q = (q-1)/q · Λ_1,q^q/(q-1).

Moreover, if U∈BL_0(Ω) is a nontrivial solution of (<ref>), then |U|_q^q=|Δ U|_T (by (<ref>)); hence |Δ U|_T=(|Δ U|_T/|U|_q)^q/(q-1), and therefore

Φ(U) = (1-1/q)|Δ U|_T = (q-1)/q (|Δ U|_T/|U|_q)^q/(q-1) ≥ (q-1)/q Λ_1,q^q/(q-1).

As a consequence, u=Λ_1,q^1/(q-1)u_1 is a least-energy solution of Φ.

There are other possible standard characterizations of the least-energy level. Indeed, if q∈(1,2), then c_q can be achieved by global minimization:

c_q = min{Φ(u): u∈L^q(Ω)},

while, for q∈(2,1^**), it is a mountain-pass level:

c_q = inf_γ∈Γ sup_t∈[0,1] Φ(γ(t)),

where Γ := {γ∈C([0,1],L^q(Ω)): γ(0)=0, Φ(γ(1))<0}.

The proof follows standard arguments combined with results from nonsmooth analysis. It is not hard to check that Φ satisfies the Palais-Smale condition at any c∈ℝ, namely (see <cit.>) that whenever (u_n)⊂L^q(Ω) is a sequence such that Φ(u_n)→c and

E(v) ≥ E(u_n) + ∫_Ω |u_n|^q-2u_n(v-u_n) + ∫_Ω z_n(v-u_n) for all v∈L^q(Ω),

where z_n→0 in L^q/(q-1)(Ω) as n→∞, then (u_n) possesses a convergent subsequence in L^q(Ω).
Then we may use <cit.> when q∈(1,2) (global minimization), and <cit.> when q∈(2,1^**) (the mountain pass theorem).

In <cit.>, the authors show the existence of least-energy solutions for problems involving the 1-bilaplacian operator with more general nonlinearities f(x,s) on the right-hand side (not necessarily homogeneous, superlinear at the origin, and subcritical at infinity). See also Remark <ref> below.

§.§ Characterization of solutions

In <cit.>, the authors show that, for any convex domain Ω or any domain with C^1,γ boundary, the first eigenvalue of the 1-bilaplacian equation is achieved and is given by

Λ_1,1(Ω) = inf_u∈BL_0(Ω)∖{0} |Δ u|_T/|u|_1 = 1/Ψ_M(Ω),

where Ψ_M(Ω)=max_Ω Ψ=Ψ(x_M), for x_M∈Ω a global maximum point of Ψ, the torsion function of Ω, namely, the solution of -ΔΨ=1 in Ω with Ψ=0 on ∂Ω. Furthermore, the authors of <cit.> also show that a first eigenfunction is given by G_Ω(·,x_M), where G_Ω is the Green function for the Dirichlet Laplacian in Ω. In this section, we study the analogous result in the nonlinear case Λ_1,q for q∈[1,1^**). For this, we need the following remark.

Let h: Ω→ℝ be given by

h(x) = |G_Ω(·,x)|_q^q.

Then h is well defined (because, by the maximum principle, 0≤G_Ω(x,y)≤C|x-y|^2-N for some C>0, and because q∈[1,1^**)). Furthermore, h is continuous in Ω. Indeed, let x∈Ω, let (x_n)⊂Ω be a sequence such that x_n→x as n→∞, and let B be a large ball so that Ω-x_n⊂B for all n; then, by a change of variables,

|G_Ω(x_n,·)|_q^q = ∫_B |G_Ω(x_n,y+x_n)|^q χ_Ω(y+x_n) dy.

Since |G_Ω(x_n,y+x_n)|^q χ_Ω(y+x_n) → |G_Ω(x,y+x)|^q χ_Ω(y+x) as n→∞ for a.e. y∈B and

|G_Ω(x_n,y+x_n)|^q χ_Ω(y+x_n) ≤ C^q|y|^-q(N-2) ∈ L^1(B),

then, by dominated convergence,

h(x_n) = |G_Ω(x_n,·)|_q^q → ∫_B |G_Ω(x,y+x)|^q χ_Ω(y+x) dy = |G_Ω(x,·)|_q^q = h(x).

Finally, note also that h(x)=0 for all x∈∂Ω (because G_Ω(x,y)=0 for all x∈∂Ω and y∈Ω). These facts guarantee the existence of x_M∈Ω such that h(x_M)=max_Ω h.

Let Ω be convex or of class C^1,γ for some γ∈(0,1] and q∈[1,1^**). Then,

Λ_1,q(Ω) = 1/|G_Ω(·,x_M)|_q,

where x_M∈Ω is a maximum point of the function h introduced in (<ref>). Moreover, the function u_1(x)=G_Ω(x,x_M)/|G_Ω(·,x_M)|_q, for x∈Ω, achieves Λ_1,q.

Note that

Λ_1,q(Ω) ≤ |Δ G_Ω(x_M,·)|_T/|G_Ω(x_M,·)|_q = 1/|G_Ω(x_M,·)|_q,

because G_Ω(x_M,·)∈BL_0(Ω) and

|Δ G_Ω(x_M,·)|_T = sup{∫_Ω G_Ω(x_M,·)Δφ: φ∈C^2_c(Ω), |φ|_∞≤1} = sup{-φ(x_M): φ∈C^2_c(Ω), |φ|_∞≤1} = 1.

On the other hand, let u_1∈BL_0(Ω) be a minimizer for Λ_1,q(Ω) such that |u_1|_q=1. Then -Δ u_1=μ for some Radon measure μ and, in particular, u_1(x)=∫_Ω G_Ω(x,y)dμ(y) for x∈Ω (see Remark <ref>). Moreover, by Jensen's inequality, Fubini's theorem, and (<ref>),

1 = |u_1|_q^q = ∫_Ω |∫_Ω G_Ω(x,y)dμ(y)|^q dx ≤ |μ|(Ω)^q-1 ∫_Ω∫_Ω G_Ω(x,y)^q d|μ|(y)dx = |μ|(Ω)^q-1 ∫_Ω∫_Ω G_Ω(x,y)^q dx d|μ|(y) ≤ |μ|(Ω)^q-1 |G_Ω(·,x_M)|_q^q ∫_Ω d|μ|(y) = |μ|(Ω)^q |G_Ω(·,x_M)|_q^q,

which yields, by (<ref>), that

Λ_1,q = |Δ u_1|_T = |μ|(Ω) ≥ 1/|G_Ω(·,x_M)|_q.

Note that, if q=1, then h(x)=|G_Ω(x,·)|_1=∫_Ω G_Ω(x,y)dy=Ψ(x), where Ψ is the Dirichlet torsion function of Ω. Therefore, <cit.> is recovered from Theorem <ref> when q=1.

Let Ω be convex or of class C^1,γ, for some γ∈(0,1]. Let x_M∈Ω be such that |G_Ω(·,x_M)|_q=max_x∈Ω |G_Ω(·,x)|_q. Then the function

u(x) = |G_Ω(·,x_M)|_q^-q/(q-1) G_Ω(x,x_M), for x∈Ω,

is a least-energy solution of (<ref>).

The claim follows from Proposition <ref> and Theorem <ref>.

The next two results extend <cit.> and <cit.> to our setting, with very similar proofs.
Let u∈BL_0(Ω) be a minimizer for Λ_1,q(Ω). Then, up to a sign, μ:=-Δ u is a positive Radon measure and u is positive in Ω.

The proof is exactly the same as in <cit.>, using L^q norms instead of L^1 norms.

Let u_1 be a minimizer for Λ_1,q(Ω) such that |u_1|_q=1, and let h be as in (<ref>). If h admits only one maximum point x_M∈Ω, then u_1 is given by (<ref>) and is the unique (up to sign) minimizer of Λ_1,q(Ω) such that |u_1|_q=1.

Let u_1∈BL_0(Ω) be a minimizer for Λ_1,q(Ω). Without loss of generality, we may assume that μ=-Δ u_1 is a positive Radon measure (by Proposition <ref>). By the definition of h and x_M, we have that (<ref>) holds with equalities; in particular,

∫_Ω h(y)dμ(y) = ∫_Ω∫_Ω |G_Ω(x,y)|^q dx dμ(y) = |G_Ω(·,x_M)|_q^q ∫_Ω dμ(y) = ∫_Ω h(x_M)dμ(y),

with h as in (<ref>). Since μ is a positive Radon measure and h(x_M)-h(y)≥0 for all y∈Ω, we deduce that μ has support in the set {y∈Ω: h(x_M)=h(y)}={x_M}. This implies that μ=μ(Ω)δ_x_M, and the claim follows.

This is now a consequence of Propositions <ref>, <ref>, and <ref>.

In Section <ref> we show that, if Ω is a ball, then the point x_M∈Ω given by (<ref>) can be taken as the origin, x_M=0. However, in more general domains, we conjecture that the point x_M=x_M(q) may vary depending on q. To see this, we only provide here a heuristic argument[We thank Sven Jarohs and Tobias Weth for helpful discussions in this regard.]. Consider a "spinning top" domain Ω given by

Ω = {(x,y,z)∈ℝ^3: |(x,y)|<1, z∈(|(x,y)|,2)},

see Figure <ref>. By symmetry and convexity, it is expected that the point x_M lies on the set {(x,y,z)∈Ω: x=y=0} (the dotted vertical line in Figure <ref>). For q=1, the point x_M(1) should be close to the center of mass of Ω. As q increases, this enhances the singularity of the Green function and penalizes more the points that are close to the boundary (note that G(x,·)→0 as dist(x,∂Ω)→0, and |G(x,·)|^q goes to zero faster for q large). As a consequence, the point x_M(q) should move towards the insphere center, namely, the center of the largest sphere contained in Ω (represented by the dotted circle in Figure <ref>). This phenomenon can already be seen, for example, with the fundamental solution in dimension 3. Indeed, for N=3 and q∈(0,3), consider the function h_q:(0,2)→(0,∞) given by

h_q(t) := ∫_Ω |(0,0,t)-(x,y,z)|^-q dx dy dz = 2π ∫_0^1 ρ ∫_ρ-t^2-t (ρ^2+z^2)^-q/2 dz dρ.

Since we are interested in the (unique) maximum point of h_q, we differentiate and obtain

h_q'(t) = 2π ∫_0^1 ρ ((ρ^2+(ρ-t)^2)^-q/2 - (ρ^2+(2-t)^2)^-q/2) dρ.

These integrals can be computed explicitly for q=1 and q=2, and the corresponding roots can be found numerically. In particular, the points y_M(1)∼1.28 and y_M(2)∼1.27 maximize h_1 and h_2, respectively.
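The root-finding step mentioned above is easy to reproduce. The following is a minimal numerical sketch (ours, not part of the original argument), assuming SciPy is available: it evaluates h_q'(t) by quadrature and locates its unique zero in (0,2) with a bracketing method. All function names, brackets, and tolerances are illustrative choices.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def hq_prime(t, q):
    # h_q'(t) = 2*pi * int_0^1 rho * [ (rho^2+(rho-t)^2)^(-q/2)
    #                                 - (rho^2+(2-t)^2)^(-q/2) ] d(rho)
    integrand = lambda rho: rho * ((rho**2 + (rho - t)**2)**(-q / 2)
                                   - (rho**2 + (2 - t)**2)**(-q / 2))
    value, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * np.pi * value

for q in (1.0, 2.0):
    # h_q' changes sign once in (0,2); bracket away from the endpoints
    y_M = brentq(lambda t: hq_prime(t, q), 0.5, 1.9)
    print(f"q = {q}: y_M(q) ~ {y_M:.2f}")  # expected ~1.28 for q=1, ~1.27 for q=2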
§.§ Asymptotics for p and q

Note that, when p=1, condition (<ref>) implies that q∈[1,1^**) or, equivalently, q-1∈[0,1^**-1) (see (<ref>)). Recall the definition of Λ_p,q given in (<ref>). The following result provides uniform bounds for the minimizers.

Let (p_n,q_n) satisfy (<ref>) for every n∈ℕ, and let p_n→p and q_n→q as n→∞, with (p,q) satisfying (<ref>). For any n∈ℕ, consider a minimizer u_n for Λ_p_n,q_n such that |u_n|_q_n=1. Then there exists C>0 such that, for every n∈ℕ,

|Δ u_n|_p_n ≤ C whenever p_n>1, and |Δ u_n|_T ≤ C when p_n=1.

Fix φ∈C^∞_c(Ω)∖{0}, which, in particular, lies in W^2,p_n_Δ(Ω) for every n∈ℕ, as well as in BL_0(Ω). If p_n>1, then, using that Λ_p_n,q_n is an infimum and dominated convergence,

|Δ u_n|_p_n ≤ |Δφ|_p_n/|φ|_q_n → |Δφ|_p/|φ|_q as n→∞,

so that |Δ u_n|_p_n is uniformly bounded. If, on the other hand, p_n=1 for infinitely many n∈ℕ, then

|Δ u_n|_T ≤ |Δφ|_T/|φ|_q_n → |Δφ|_1/|φ|_q as n→∞,

so that |Δ u_n|_T is also uniformly bounded.

Next, we show the convergence of minimizers.

The map (p,q)↦Λ_p,q is continuous for (p,q) satisfying (<ref>).

Let (p_n,q_n) and (1,q) satisfy (<ref>) for every n∈ℕ, and let p_n→1 and q_n→q as n→∞. Let u_n be a minimizer for Λ_p_n,q_n with |u_n|_q_n=1. Then there is a minimizer u of Λ_1,q such that:

* u∈BL_0(Ω)∩W^1,r_0(Ω) for every r∈[1,1^*), with |u|_q=1;
* u_n→u in W^1,r_0(Ω) for every r∈[1,1^*) and |Δ u_n|_T→|Δ u|_T, up to a subsequence.

We first focus on the situation where p_n>1 for all n∈ℕ, so that the minimizers of Λ_p_n,q_n belong to W^2,p_n_Δ(Ω). Recall also that q<1^** in this case (Remark <ref>). Then

|Δ u_n|_1 ≤ |Ω|^(p_n-1)/p_n |Δ u_n|_p_n = |Ω|^(p_n-1)/p_n Λ_p_n,q_n.

Therefore, by Lemma <ref> and Lemma <ref>-2, the sequence (u_n)_n∈ℕ is uniformly bounded in W^2,1_Δ(Ω). By Proposition <ref> (see also Remark <ref>), there is u∈W^1,r_0(Ω) such that, up to extracting a subsequence, u_n→u strongly in W^1,r_0(Ω) for every r∈[1,1^*) and strongly in L^t(Ω) for t∈[1,1^**). Moreover, by Lemma <ref>, u∈BL_0(Ω) and

|Δ u|_T ≤ lim inf_n→∞ |Δ u_n|_T = lim inf_n→∞ |Δ u_n|_1.

Let t̅=(q+1^**)/2. For large n, we have q_n<t̅ and u_n→u strongly in L^t̅(Ω). Hence, up to a subsequence, u_n→u a.e., and there exists h∈L^t̅(Ω) such that |u_n|≤h a.e. in Ω (see, for example, <cit.>). By dominated convergence, we have that

1 = |u_n|_q_n → |u|_q.

Fix ε>0. By the definition of Λ_1,q (see (<ref>)), there is v_1∈W^2,1_Δ(Ω)∖{0} such that

|Δ v_1|_T/|v_1|_q ≤ Λ_1,q + ε.

Since v_1∈W^2,1_Δ(Ω), we have |Δ v_1|_T=|Δ v_1|_1, and (by Lemma <ref>) there is (v_k)⊂C^2(Ω)⊂W^2,p_n(Ω) for every n such that Δ v_k→Δ v_1 in L^1(Ω) and (again by compact embeddings, as q<1^**) v_k→v_1 in L^q(Ω). Then

Λ_1,q + ε ≥ |Δ v_1|_1/|v_1|_q = lim_k→∞ |Δ v_k|_1/|v_k|_q = lim_k→∞ lim_n→∞ |Δ v_k|_p_n/|v_k|_q_n ≥ lim_k→∞ lim_n→∞ Λ_p_n,q_n = lim_n→∞ Λ_p_n,q_n = lim_n→∞ |Δ u_n|_p_n ≥ lim_n→∞ |Ω|^-(p_n-1)/p_n |Δ u_n|_1 = lim_n→∞ |Δ u_n|_1 ≥ |Δ u|_T ≥ Λ_1,q,

where we have used Lebesgue's dominated convergence theorem, (<ref>), Hölder's inequality, the fact that u_n is a minimizer for Λ_p_n,q_n, (<ref>), and (<ref>). Summarizing,

|u|_q=1 and Λ_1,q ≤ |Δ u|_T ≤ lim_n→∞ |Δ u_n|_1 = lim_n→∞ Λ_p_n,q_n ≤ Λ_1,q + ε for every ε>0.

Letting ε→0, equality holds true and the conclusion of the lemma follows in the case p_n>1.

Assume now that p_n=1 for every n. Then the minimizer u_n of Λ_p_n,q_n=Λ_1,q_n belongs to BL_0(Ω) and not necessarily to W^2,1_Δ(Ω). By Lemma <ref>, we have

|Δ u_n|_T ≤ C,

and we can repeat almost word by word the proof of the previous case, simply replacing |Δ·|_p_n with |Δ·|_T.

Parts (a) and (b) are a direct consequence of Lemmas <ref>, <ref>, and Proposition <ref>, while the last paragraph is a consequence of Proposition <ref>.

In the case p=1, since W^2,1(Ω) is not reflexive, we cannot say that u_n⇀u weakly in W^2,1(Ω). This kind of convergence plays a key role in the asymptotic analysis of solutions. To overcome this difficulty, we use the space BL_0(Ω), as in <cit.>. In that paper, the authors studied in detail the quantity Λ_1,1, which is associated with an eigenvalue problem for the 1-biharmonic operator:

Δ(Δ u/|Δ u|) = Λ_1,1 u/|u| in Ω,  u=Δ u/|Δ u|=0 on ∂Ω.

See <cit.> for the notion of solution, which does not fall within the framework of our Section <ref>, since the function G_1(u)=|u|_1 is not of class C^1.
As a particular case of our study, we show that, as p,q→1 (p≥1), Λ_p,q→Λ_1,1, and that solutions of

Δ(|Δ u|^p-2Δ u) = Λ_p,q|u|^q-2u in Ω,  u=Δ u/|Δ u|=0 on ∂Ω

(which minimize Λ_p,q) converge to solutions of (<ref>) (which minimize Λ_1,1).

§ HAMILTONIAN SYSTEMS AND LIMIT PROFILES

Let Ω⊂ℝ^N be a bounded domain of class C^2,γ. For α,β satisfying (<ref>), by <cit.>, there is a positive least-energy strong solution (U_β,V_β) of the system

-Δ U_β = |V_β|^β-1V_β in Ω,  -Δ V_β = |U_β|^α-1U_β in Ω,  U_β=V_β=0 on ∂Ω.

More precisely, one shows that there exists U_β∈W^2,(β+1)/β(Ω)∩W^1,(β+1)/β_0(Ω) such that

J_α,β(U_β) = inf{J_α,β(U): U is a nontrivial critical point of J_α,β},

where J_α,β is as in (<ref>). In particular,

Δ(|Δ U_β|^1/β-1Δ U_β) = |U_β|^α-1U_β in Ω,  U_β=Δ U_β=0 on ∂Ω

and, for V_β := -|Δ U_β|^1/β-1Δ U_β, we have V_β∈W^2,(α+1)/α(Ω)∩W^1,(α+1)/α_0(Ω), and (U_β,V_β) is a (strong) solution of (<ref>). Moreover, U_β·V_β>0 in Ω, and a standard bootstrap argument yields that (U_β,V_β)∈C^2,σ_1(Ω)×C^2,σ_2(Ω) for some σ_1,σ_2∈(0,1). For the details, see <cit.> and references therein.

There is a (least-energy) solution U_∞∈BL_0(Ω) of (<ref>) such that, up to a subsequence, as β→∞:

U_β→U_∞ in W^1,r_0(Ω) for every r∈[1,1^*), and |Δ U_β|_1→|Δ U_∞|_T.

Moreover, U_∞=Λ_1,α+1^1/α u_1, where u_1 is a minimizer for Λ_1,α+1, given in (<ref>), with |u_1|_α+1=1.

By <cit.>, u_β = Λ_(β+1)/β,α+1^-β/(αβ-1) U_β is a minimizer for Λ_(β+1)/β,α+1 with |u_β|_α+1=1. Then, by Proposition <ref>,

U_β = Λ_(β+1)/β,α+1^β/(αβ-1) u_β → Λ_1,α+1^1/α u_1 = U_∞ in W^1,r_0(Ω) as β→∞

and

|Δ U_β|_1 = Λ_(β+1)/β,α+1^β/(αβ-1)|Δ u_β|_1 → Λ_1,α+1^1/α|Δ u_1|_T = |Δ U_∞|_T.

By Proposition <ref>, U_∞ is a least-energy solution of (<ref>).

Theorem <ref> also yields the convergence of the component V_β.

There is V_∞∈W^2,(α+1)/α(Ω)∩W_0^1,(α+1)/α(Ω) such that

V_β⇀V_∞ weakly in W^2,(α+1)/α(Ω).

In particular:

* if α∈(0,1^*-1), then V_∞∈C^1,σ(α)(Ω) and V_β→V_∞ in C^1,σ(Ω), for every 0<σ<σ(α):=1-Nα/(α+1);

* if α=1^*-1=1/(N-1), then V_∞∈C^0,γ(Ω) and V_β→V_∞ strongly in W^1,η_0(Ω) and in C^0,σ(Ω), for every η≥1 and 0<σ<1;

* if α∈(1^*-1,1^**-1), then V_∞∈W^1,N(α+1)/(α(N-1)-1)_0(Ω)∩C^0,σ(α)+1(Ω) and V_β→V_∞ strongly in W^1,η_0(Ω) and in C^0,σ(Ω), for every 1≤η<N(α+1)/(α(N-1)-1) and 0<σ<σ(α)+1.

Since (U_β,V_β) is a least-energy solution of (<ref>), there is a constant C_1>0 independent of β such that |U_β|_α+1<C_1. Indeed, recalling (<ref>), we have

|U_β|_α+1 = Λ_(β+1)/β,α+1^β/(αβ-1)|u_β|_α+1 = Λ_(β+1)/β,α+1^β/(αβ-1),

which is bounded for α fixed and β large, by Proposition <ref>. Then, by elliptic regularity, there is C_2>0 independent of β such that

V_β_W^2,(α+1)/α ≤ C_2 ||U_β|^α|_(α+1)/α = C_2|U_β|_α+1^α < C_2C_1^α.

Then there is V_∞∈W^2,(α+1)/α(Ω)∩W_0^1,(α+1)/α(Ω) with V_β⇀V_∞ weakly in W^2,(α+1)/α(Ω). The other statements follow from Sobolev embeddings (Lemma <ref>), observing that 2(α+1)/α>N if and only if α<1^**-1, that

0 < 1-Nα/(α+1) < 1 if and only if α<1^*-1,

and that ((α+1)/α)^* = N(α+1)/(α(N-1)-1).

We are ready to show Theorem <ref>.

The theorem follows from Theorem <ref> and Corollary <ref>. Note that, since U_β→U_∞ in W^1,r_0(Ω) for all r∈[1,1^*), then U_β^α→U_∞^α in L^t(Ω) for all t∈[1,1^**/α); therefore, for a.e. x∈Ω,

V_∞(x) = lim_β→∞ V_β(x) = lim_β→∞ ∫_Ω G_Ω(x,y)U_β^α(y)dy = ∫_Ω G_Ω(x,y)U_∞^α(y)dy,

where G_Ω denotes the Green function of the Dirichlet Laplacian in Ω, and where we have used that G_Ω(x,·) lies in the dual of L^t(Ω).

§ THE CASE OF THE BALL

In the case of the ball, some explicit formulas can be obtained. We collect some auxiliary lemmas first.

Let N≥3 and, for x∈B_1∖{0},

G_B_1(x,0) = c_N(|x|^2-N-1),  c_N = 1/((N-2)|∂ B_1|) = Γ(N/2)/(2(N-2)π^N/2).

In particular,

-Δ G_B_1(·,0) = δ_0 in B_1,  G_B_1(·,0)=0 on ∂ B_1.

Let Ω=B_1.
Then |Δ G_B_1(·,0)|_T=1 and

|G_B_1(·,0)|_q = c_N (π^N/2 Γ(N/(N-2)-q) Γ(q+1) / (Γ(N/2+1) Γ(N/(N-2))))^1/q.

Passing to spherical coordinates and applying the change of variables τ=t^N-2,

|G_B_1(·,0)|_q^q = c_N^q ∫_B_1 (|x|^2-N-1)^q dx = c_N^q|∂ B_1| ∫_0^1 (t^2-N-1)^q t^N-1 dt = c_N^q|∂ B_1| ∫_0^1 t^N-1-(N-2)q (1-t^N-2)^q dt = (c_N^q|∂ B_1|/(N-2)) ∫_0^1 τ^(N-1)/(N-2)-q (1-τ)^q τ^-(N-3)/(N-2) dτ = (c_N^q|∂ B_1|/(N-2)) ∫_0^1 τ^2/(N-2)-q (1-τ)^q dτ = (c_N^q|∂ B_1|/(N-2)) B(2/(N-2)-q+1, q+1) = (c_N^q|∂ B_1|/(N-2)) Γ(q+1)Γ(2/(N-2)-q+1)/Γ(2(N-1)/(N-2)) = c_N^q π^N/2 Γ(q+1)Γ(N/(N-2)-q) / (Γ(N/2+1)Γ(N/(N-2))),

where we used the definition of the Beta function B(·,·) and its relation with the Gamma function Γ(·), the identity zΓ(z)=Γ(z+1), the fact that 2(N-1)/(N-2)=N/(N-2)+1, and the characterization of c_N given in (<ref>). On the other hand, by (<ref>) and Lemma <ref>,

|Δ G_B_1(·,0)|_T = |δ_0|(B_1) = sup{-φ(0): φ∈C^2_c(B_1), |φ|_∞≤1} = 1.

Let Ω⊆ℝ^N be open, bounded, and such that |Ω|=|B_1|. For any fixed x∈Ω, if G_Ω(x,·)^♯ denotes the radially symmetric decreasing rearrangement of G_Ω(x,·), then it holds

G_Ω(x,·)^♯(y) ≤ G_B_1(0,y) for a.e. y∈B_1∖{0}.

Consider ψ∈C^∞_c(B_1) radially decreasing and such that ψ≥0, |ψ|_1=1. Construct the sequence

ψ_j(y) = j^Nψ(jy) for any y∈ℝ^N, j∈ℕ,

which converges weakly to the Dirac measure δ_0. Consider the sequences defined, for any j∈ℕ, by

-Δ u_j = ψ_j in B_1,  u_j=0 on ∂ B_1,

and, for x∈Ω fixed,

-Δ v_j = ψ_j(·-x) in Ω,  v_j=0 on ∂Ω.

As ψ_j(·-x)^♯=ψ_j, by Talenti's comparison principle <cit.>, we deduce

v_j^♯ ≤ u_j in B_1, for any j∈ℕ.

Since

v_j → G_Ω(x,·),  u_j → G_B_1(0,·), pointwise as j→∞,

by the continuity in measure of the radially symmetric decreasing rearrangement <cit.>, we can pass inequality (<ref>) to the limit and obtain (<ref>).

The function B_1∋x↦|G_B_1(x,·)|_q^q has a maximum point at x_M=0.

Let x∈B_1. By the properties of the symmetric decreasing rearrangement and by Lemma <ref>,

|G_B_1(x,·)|_q = |G_B_1(x,·)^♯|_q ≤ |G_B_1(0,·)|_q,

as claimed.

We are ready to show Theorem <ref>.

This follows from Theorem <ref>, Lemma <ref>, and Proposition <ref>.

Note that

lim_q→1^+ Λ_1,q(B_1) = 2N,

which is, by <cit.>, the first eigenvalue of the 1-bilaplacian equation on the unit ball. Indeed,

lim_q→1^+ Λ_1,q(B_1) = (4π^N/2/Γ(N/2-1)) Γ(N/(N-2))Γ(N/2+1)/(π^N/2 Γ(N/(N-2)-1)) = 4(N/(N-2)-1)(N/2)(N/2-1) = 2N,

where we have used several times the recurrence identity Γ(t+1)=tΓ(t) for t>0.

Exploiting the characterization of Λ_1,q(Ω) given in Theorem <ref>, we have, by Lemma <ref> and Proposition <ref>,

Λ_1,q(Ω) = 1/|G_Ω(x_M,·)|_q = 1/|G_Ω(x_M,·)^♯|_q ≥ 1/|G_B_1(0,·)|_q = Λ_1,q(B_1).
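As a quick sanity check on the closed formula for Λ_1,q(B_1) (equivalently, on the value of |G_B_1(·,0)|_q computed above), one can compare it with a direct numerical quadrature of the radial integral; the limit Λ_1,q(B_1)→2N as q→1^+ from the previous remark can be checked in the same way. The following sketch is ours and not part of the original proofs; it assumes SciPy is available, and all names are illustrative.

import math
from scipy.integrate import quad
from scipy.special import gamma

def lam_closed(N, q):
    # closed formula for Lambda_{1,q}(B_1) stated above
    return (4 * math.pi**(N / 2) / gamma(N / 2 - 1)) * (
        gamma(N / (N - 2)) * gamma(N / 2 + 1)
        / (math.pi**(N / 2) * gamma(N / (N - 2) - q) * gamma(q + 1)))**(1 / q)

def lam_quad(N, q):
    # Lambda_{1,q}(B_1) = 1/|G(.,0)|_q, with
    # |G|_q^q = c_N^q |S^{N-1}| int_0^1 (t^{2-N}-1)^q t^{N-1} dt
    c_N = gamma(N / 2) / (2 * (N - 2) * math.pi**(N / 2))
    surface = 2 * math.pi**(N / 2) / gamma(N / 2)  # |S^{N-1}|
    integral, _ = quad(lambda t: (t**(2 - N) - 1)**q * t**(N - 1), 0.0, 1.0)
    return (c_N**q * surface * integral)**(-1 / q)

N = 3
for q in (1.0001, 1.5, 2.0):  # need q < N/(N-2) = 3
    print(q, lam_closed(N, q), lam_quad(N, q))  # the two columns agree
print("q -> 1^+ limit:", lam_closed(N, 1.0001), "vs 2N =", 2 * N)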
§.§ Limiting profiles in a ball

Let Ω=B_1 be the unit ball of ℝ^N (N≥3) centered at the origin, let α∈(0,2/(N-2)), β>0, and let (U_β,V_β) be a least-energy solution of

-Δ U_β = |V_β|^β-1V_β in B_1,  -Δ V_β = |U_β|^α-1U_β in B_1,  U_β=V_β=0 on ∂ B_1.

In this section, we give an explicit characterization of the limiting profiles U_∞ and V_∞, given by Theorem <ref>, in B_1.

There is κ_N,α>0 such that, as β→∞,

U_β → U_∞ = κ_N,α G_B_1(·,0) in W^1,r_0(B_1) for all r∈[1,1^*), and |Δ U_β|_1 → κ_N,α.

Moreover, V_∞∈W^2,(α+1)/α(B_1)∩W_0^1,(α+1)/α(B_1) is a strong solution of -Δ V_∞ = U_∞^α in B_1.

Let (U_β,V_β) be a positive least-energy solution of (<ref>). By (<ref>), (U_β,V_β) is also a classical solution. A standard symmetrization argument yields that U_β and V_β are radially symmetric and decreasing in the radial variable (see, for example, <cit.>). By Theorem <ref>, we have that U_β→U_∞ in W^1,r(B_1) and V_β→V_∞ in C^0,σ(B_1) as β→∞. In particular, U_∞ and V_∞ are also nonnegative, radially symmetric, and decreasing in the radial variable.

Now we claim that V_∞≤1 in B_1. If there were z_0∈B_1 such that ε:=V_∞(z_0)-1>0, then (by monotonicity and uniform convergence) we would have V_β(x)>1+ε/2 for every x∈B_1 with |x|≤|z_0| and for β sufficiently large. But then

-Δ U_β(x) = V_β(x)^β → ∞ as β→∞

for every x∈B_1 with |x|≤|z_0|. In particular, if φ∈C^∞_c(B_1) is a nonnegative function such that φ=1 in B_|z_0| and |φ|_∞≤1, then

|Δ U_∞|_T = lim_β→∞ |Δ U_β|_T ≥ lim_β→∞ ∫_B_1 U_β(-Δφ) = lim_β→∞ ∫_B_1 (-Δ U_β)φ = ∞,

which would contradict the fact that U_∞∈BL_0(B_1).

Next, we show that V_∞<1 in B_1∖{0}. Indeed, assume by contradiction that there is r_0∈(0,1) such that V_∞=1 in B_r_0. Then

0 = -Δ V_∞ = U_∞^α in B_r_0.

Since U_∞ is a nonnegative monotone function, this implies that U_∞≡0 in B_1, which contradicts the fact that U_∞ is nontrivial (see Lemma <ref>).

Since V_∞<1 in B_1∖{0}, we have V_β^β→0 locally uniformly in B_1∖{0} as β→∞. Therefore,

-Δ U_β = V_β^β → 0 locally uniformly in B_1∖{0} as β→∞.

As a consequence, for every φ∈C^∞_c(B_1∖{0}),

∫_B_1 U_∞Δφ = lim_β→∞ ∫_B_1 U_βΔφ = -lim_β→∞ ∫_B_1 V_β^βφ = 0,

namely, the nontrivial Radon measure -Δ U_∞ is supported in {0}. Then, for every φ∈C^∞_c(B_1), it holds

∫_B_1 φ d(-Δ U_∞) = φ(0)(-Δ U_∞)(B_1),

which implies that -Δ U_∞ is a multiple of δ_0 and, therefore, U_∞(x)=κ_N,α G_B_1(x,0) in B_1 for some κ_N,α>0, where G_B_1(·,0) is given by (<ref>).

If κ_N,α is as in Proposition <ref>, then

κ_N,α = Λ_1,α+1^(α+1)/α = c_N^-1-1/α (Γ(N/(N-2))Γ(N/2+1) / (π^N/2 Γ(2/(N-2)-α)Γ(α+2)))^1/α.

By Theorem <ref> and Proposition <ref>, we have Λ_1,α+1^1/α u_1 = κ_N,α G_B_1(·,0), so that

Λ_1,α+1^(α+1)/α = Λ_1,α+1^1/α |Δ u_1|_T = κ_N,α |Δ G_B_1(·,0)|_T = κ_N,α,

and the claim now follows from Theorem <ref>.

Let G_B_1 denote the Green function for the Dirichlet Laplacian in the ball B_1, recall that κ_N,α is given in Lemma <ref>, and let (U_β,V_β) denote a least-energy solution of (<ref>). We are ready to show Theorem <ref>.

The proof follows from Proposition <ref>. Note that the constant A_N,α given in (<ref>) is A_N,α=κ_N,α c_N, see Lemma <ref>.
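For concreteness, we record the following explicit check (ours; it follows by direct substitution in the formulas above). Take N=3 and α=1, so that 2/(N-2)=2. Then

A_3,1 = 2·3·Γ(2)/(Γ(3)Γ(1)) = 3,  U_∞(x) = 3(|x|^-1-1) for x∈B_1,

and, since c_3=1/(4π), we get κ_3,1 = A_3,1/c_3 = 12π, that is, -Δ U_∞ = 12π δ_0. Accordingly,

Λ_1,2(B_1) = κ_3,1^α/(α+1) = (12π)^1/2 = 2√(3π),

in agreement with the formula for Λ_1,q(B_1) in Theorem <ref> evaluated at N=3, q=2.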
Consider now a general bounded domain Ω⊆ℝ^N of class C^2,γ and a least-energy positive solution (U_β,V_β) of

-Δ U_β = |V_β|^β-1V_β in Ω,  -Δ V_β = |U_β|^α-1U_β in Ω,  U_β=V_β=0 on ∂Ω.

We conjecture that |V_β|^β-1V_β → C_N,α δ_x_M in the sense of distributions as β→∞, with x_M as in Corollary <ref> and C_N,α=|G_Ω(·,x_M)|_α+1^-1-1/α. Then -Δ U_∞=C_N,αδ_x_M and U_∞(x)=C_N,α G_Ω(x,x_M) for x∈Ω; that is, U_∞ is a multiple of the Green function for the Dirichlet Laplacian in Ω centered at x_M. Then

V_∞(x) = C_N,α^α ∫_Ω G_Ω(x,y) G_Ω(y,x_M)^α dy for x∈Ω.

This is a conjecture because Corollary <ref> does not claim that all the minimizers of Λ_1,q(Ω) have the same shape (see also Proposition <ref> for a uniqueness statement in this regard).

§ USEFUL RESULTS

The purpose of this appendix is to give a self-contained description of the space BL_0(Ω), given by (<ref>), together with some of its properties. In some cases we write new proofs of known results (Lemma <ref>) or present new results (Lemma <ref>); other short proofs are included for completeness. Remarks <ref> and <ref>, on the other hand, comment on some references, while Remark <ref> presents a way of introducing the Green function on a Lipschitz domain via regularity results. Unless otherwise stated, we take Ω to be a bounded Lipschitz domain, and we recall from (<ref>) the definitions of 1^* and 1^**.

§.§ General results

Recall the definitions of W^2,p_Δ(Ω), for p≥1, and BL_0(Ω), given in (<ref>) and (<ref>) respectively, and the total variation |Δ·|_T of the Laplacian of an L^1_loc(Ω)-function, given in (<ref>). We use the symbol ∼ for the equivalence of two norms. Recall also that the total variation of a Radon measure μ on Ω is defined as

|μ|(Ω) = sup{∫_Ω φ dμ : φ∈C_c(Ω), |φ|_∞≤1}.

We have

BL_0(Ω) = {u∈W_0^1,1(Ω): Δ u is a Radon measure with |Δ u|(Ω)<∞}.

In particular, for u∈BL_0(Ω), we have

|Δ u|_T = |Δ u|(Ω) = sup{∫_Ω φ dΔ u: φ∈C_c^∞(Ω), |φ|_∞≤1}.

The alternative characterizations of BL_0(Ω) and of |Δ u|_T correspond to <cit.>.

Let u∈L^1_loc(Ω) be such that Δ u∈L^1(Ω). Then

|Δ u|_1 = |Δ u|_T.

In particular, W^2,1_Δ(Ω)⊆BL_0(Ω), and the inclusion is strict.

The proof of the inclusion can be found in <cit.>. The inclusion is strict because a solution of -Δ u=δ_y in Ω, u=0 on ∂Ω, where δ_y is the Dirac delta concentrated at y∈Ω, belongs to BL_0(Ω) but not to W^2,1_Δ(Ω), see <cit.>.

Let (u_n)_n∈ℕ⊂BL_0(Ω) be a sequence such that

u_n→u in L^1(Ω) as n→∞ and sup_n∈ℕ |Δ u_n|_T<∞.

Then u∈BL_0(Ω) and

|Δ u|_T ≤ lim inf_n→∞ |Δ u_n|_T.

See <cit.>. For completeness, we include the proof here. Take φ∈C^∞_c(Ω) with |φ|_∞≤1. Then

∫_Ω uΔφ = lim_n→∞ ∫_Ω u_nΔφ ≤ lim inf_n→∞ |Δ u_n|_T,

and the result now follows by taking the supremum over φ.

Next, we recall a few facts about the notion of solution to linear Dirichlet boundary value problems with measure data (see <cit.> and the references therein).

Let μ be a finite Radon measure in Ω. We say that u∈L^1(Ω) is a very weak solution[Following the wording of, for example, <cit.>.] to

-Δ u = μ in Ω,  u=0 on ∂Ω,

if, for every φ∈C^∞_0(Ω)={ζ∈C^∞(Ω): ζ=0 on ∂Ω},

∫_Ω φ dμ = -∫_Ω uΔφ.

The authors of <cit.> call this a solution in the sense of Stampacchia, but take, instead, test functions φ∈W^1,2_0(Ω)∩C(Ω) such that Δφ∈C(Ω).

Let μ be a finite Radon measure in Ω. We say that u∈L^1_loc(Ω) is a distributional solution to

-Δ u = μ in Ω

if (<ref>) holds for every φ∈C^∞_c(Ω).

Related to this, we have the following result, shown in <cit.> and <cit.> (see also <cit.>).

Let μ be a finite Radon measure on Ω. Then there exists exactly one very weak solution u of (<ref>). Moreover, for every q∈[1,1^*), we have u∈W^1,q_0(Ω) and there exists a universal constant C>0 such that

u_W^1,q_0 ≤ C|μ|(Ω).

We note that Theorem <ref> can be used to build the Green function G_Ω of a Lipschitz domain Ω and to deduce some of its properties. Indeed, for every fixed y∈Ω, G_Ω(·,y) can be defined as the unique very weak solution to

-Δ u = δ_y in Ω,  u=0 on ∂Ω,

where δ_y stands for the Dirac delta centered at y. Then, by Definition <ref>, for every φ∈C^∞_0(Ω) it holds

φ(y) = -∫_Ω G_Ω(x,y)Δφ(x)dx,

which is a representation formula for φ. Take now φ,ψ∈C^∞_0(Ω) such that Δφ,Δψ∈C^∞_c(Ω): then, on the one hand, using (<ref>) above, one obtains

-∫_Ω φ(y)Δψ(y)dy = -∫_Ω (∫_Ω G_Ω(x,y)Δψ(y)dy) Δφ(x)dx,

while, on the other hand, using (<ref>) on ψ,

-∫_Ω φ(x)Δψ(x)dx = -∫_Ω ψ(x)Δφ(x)dx = -∫_Ω (∫_Ω G_Ω(y,x)Δψ(y)dy) Δφ(x)dx.

From this, one deduces

∫_Ω G_Ω(x,y)Δψ(y)dy = ∫_Ω G_Ω(y,x)Δψ(y)dy for a.e. x∈Ω,

and therefore

G_Ω(x,y) = G_Ω(y,x) for a.e. x,y∈Ω.

Furthermore, the unique very weak solution to (<ref>) can be represented by

u(x) = ∫_Ω G_Ω(x,y)dμ(y) for a.e. x∈Ω,

because, for every φ∈C^∞_0(Ω), using (<ref>), (<ref>), and Fubini's theorem,

-∫_Ω uΔφ = ∫_Ω φ dμ = -∫_Ω (∫_Ω G_Ω(x,y)dμ(y)) Δφ(x)dx.

The following result shows the equivalence between the two notions of solution under geometric or regularity assumptions on Ω.
Let Ω be a bounded domain, which is either convex or of class C^1,γ, for some γ∈ (0,1]. Take a finite Radon measure μ on Ω. Then u ∈ W^1,1_0(Ω) is a distributional solution of (<ref>) if, and only if, u∈ L^1(Ω) is a very weak solution of (<ref>). Letu∈ L^1(Ω) be a solution of (<ref>) in the sense of Definition <ref> (i.e., a very weak solution). Then u∈ W^1,1_0(Ω), by Theorem <ref>.Moreover, u is a distributional solution since C^∞_c(Ω)⊆ C^∞_0(Ω).The converse statement, on the other hand, is the content of <cit.>.Let Ω be a domain, either convex or of class C^1,γ, for some γ∈ (0,1]. Given q∈ [1,1^*), there exists C>0 such thatu_W^1,q_0≤ C |Δ u|_T for anyu∈ BL_0(Ω).Any u∈ BL_0(Ω) is a W^1,1_0(Ω)–distributional solution of the equationΔ w=Δ uin Ω.Indeed, for φ∈ C^∞_c(Ω), by the embedding of the space of Radon measures in the space of distributions and since Δ u∈𝒟'(Ω) is a Radon measure:∫_ΩΔ u φ = ⟨Δ u,φ⟩_𝒟'(Ω)×𝒟(Ω)=∫_Ω uΔφ.Then, by Theorem <ref> and Proposition <ref>, u is the (unique) very weak solution of (<ref>) with zero Dirichlet boundary conditions. The result now follows directly from Theorem <ref> and (<ref>).In <cit.>, the authors give an example of a Lipschitz bounded domain in ^2 and of a nontrivial function u∈ BL_0(Ω) such that Δ u=0 in the classical sense. Then, in particular, u is a distributional solution of-Δ u=0in Ω.Therefore, it is not a very weak solution of (<ref>) with u=0 on ∂Ω(which is unique and is the trivial one, by Theorem <ref>). This also shows that an inequality like (<ref>) may not hold for a general Lipschitz domain, and more regularityis needed (or convexity).The following are Banach spaces:*(Ω is of class C^1,1) W^2,p_Δ(Ω) (p>1),when endowed with|Δ u |_p∼u_W^1,p+|Δ u|_p∼u_W^2,p. In particular, W^2,p_Δ(Ω)=W^2,p(Ω)∩ W^1,p_0(Ω). *(Ω convex, or C^1,γ, for some γ∈ (0,1]) W^2,1_Δ(Ω),when endowed with |Δ u |_1∼u_W^1,1+|Δ u|_1.*(Ω convex, or C^1,γ, for some γ∈ (0,1]) BL_0(Ω), when endowed with |Δ u |_T∼u_W^1,1+|Δ u|_T. 1. From elliptic regularity theory (see for instancein <cit.>), we have the existence of C>0 such thatu_W^2,p≤ C |Δ u|_p for every u∈ W^2,p(Ω)∩ W^1,p_0(Ω). and so the conclusion follows. 3. As for BL_0(Ω), the fact that it is a Banach space when endowed with u_W^1,1(Ω)+|Δ u|_T follows from <cit.>. We include here the proof for completeness. Take a Cauchy sequence (u_n)_n∈⊆ BL_0(Ω). Then (u_n)_n∈ is a Cauchy sequence in W^1,1_0(Ω); since this space is complete, there exists u∈ W^1,1_0(Ω) such thatu_n → uinW^1,1_0(Ω),hence also inL^1(Ω). Given >0, take n̅∈ such that |Δ u_n-Δ u_m|_T< for every n,m≥n̅. Since, for each n≥n̅, we have u_n-u_m∈ BL_0(Ω), u_n-u_m→ u_n-u as m→∞ in L^1(Ω) and sup_m≥n̅|Δ (u_n-u_m)|_T<∞, then by Lemma <ref> we have u_n-u∈ BL_0(Ω) (so that also u∈ BL_0(Ω)) and |Δ u_n-Δ u|_T=|Δ (u_n-u)|_T≤lim_m→∞ |Δ (u_n-u_m)|_T≤ε. Therefore, |Δ u_n-Δ u|_T→ 0 as n→∞.The equivalence of the norms |Δ u |_T∼u_W^1,1+|Δ u|_T is a direct consequence of Corollary <ref>, which yields (for q=1): u_W^1,1≤ C |Δ u|_T. 2.As for W^2,1_Δ(Ω), theequivalence of the norms is a consequence of (<ref>) together with the fact that |Δ u|_1=|Δ u|_T when u∈ W^2,1_Δ(Ω) (Lemma <ref>). The fact that W^2,1_Δ(Ω) is a Banach space is shown in <cit.>, but we include a proof for completeness: taking a Cauchy sequence (u_n)_n∈ in W^2,1_Δ(Ω), then (u_n)_n∈ is a Cauchy sequence in W^1,1_0(Ω) and (Δ u_n)_n∈ is a Cauchy sequence in L^1(Ω). 
Then there exist u∈ W^1,1_0(Ω) and v∈ L^1(Ω) such thatu_n→ u inW^1,1(Ω),Δ u→ v inL^1(Ω).Given φ∈ C^∞_c(Ω), we have∫_Ω uΔφ=lim_n→∞∫_Ω u_n Δφ=lim_n→∞∫_ΩΔ u_nφ=∫_Ω v φ,so Δ u=v∈ L^1(Ω) and u_n→ u in W^2,1_Δ(Ω).§.§ Embeddings ForN≥ 3, ifΩis a bounded set, thenW^1,1_0(Ω)↪ L^t(Ω)is continuous for t∈ [1,1^*], compact for t∈ [1,1^*),where1^*is defined as in (<ref>). IfΩis a bounded Lipschitz domain, thenW^2,1(Ω)↪ L^t(Ω) is continuous for t∈ [1,1^**], compact for t∈ [1,1^**),where1^**is defined as in (<ref>). See for instance <cit.>. The last embedding also holds true forW^2,1(Ω)∩ W^1,1_0(Ω), which is a closed subset ofW^2,1(Ω).It is also useful to recall the general case:The following hold true when Ω is a bounded Lipschitz domain:*For 2p<N, W^2,p(Ω)↪ L^t(Ω) continuous if t∈[1,Np/N-2p], compact if t∈ [1,Np/N-2p);*For 2p=N, W^2,p(Ω)↪ L^t(Ω) is compact for every t∈ [1,∞);*For 2p>N, W^2,p(Ω)↪ C^m,γ(Ω) is continuous, where m is the largest positive integer such that γ=2-N/p-m∈ (0,1), and W^2,β(Ω)↪ C^m',γ'(Ω) is compact for m'≤ m, γ'≤γ and either m'<m or γ'<γ.We recall the definition of the weak-L^qspaces, which are nothing else than the Lorentz spacesL^q,∞(Ω). Given a measurable functionu:Ω→, its distribution functionμ_u:^+→is given byμ_u(t):=|{x∈Ω: |u(x)|>t}|. Forq≥ 1, we defineu_q,∞^q:=sup_t>0 t^q μ_u(t), L^q,∞(Ω):={u:Ω→ measurable: u_q,∞<∞}.Recall thatL^p(Ω)↪ L^p,∞(Ω)andL^p,∞(Ω)↪ L^q(Ω), for1≤ q<p, are continuous embeddings (see for instance <cit.>).Let Ω be either a convex set or a set of class C^1,γ, for some γ∈ (0,1]. The following embedding is continuousB L_0(Ω) ↪ L^1^**,∞(Ω).Moreover, the following embeddings are compact:BL_0(Ω)↪ W^1,q_0(Ω) forq∈ [1,1^*),BL_0(Ω)↪ L^r(Ω) forr∈ [1,1^**), BL_0(Ω)∩ L^∞(Ω)↪ W_0^1,q(Ω) for q∈ [1,2).The assumptions on Ω allow to use both Corollary <ref> and other estimates for elliptic problems with measure data.Indeed, the continuity of the embedding B L_0(Ω) ↪ L^1^**,∞(Ω) follows from <cit.> (see also <cit.> for the optimal constant).The proof of the compactness of the embedding BL_0(Ω) ↪ L^r(Ω) can be found in <cit.>, and the one of BL_0(Ω) ↪ W^1,q_0(Ω) in <cit.>.Finally, the fact that BL_0(Ω)∩ L^∞(Ω)↪ W_0^1,q(Ω) is compact follows from the interpolation inequalities∇ u_L^2(Ω)≤u_L^∞|Δ u|_Tfor everyu∈ BL_0(Ω)∩ L^∞(Ω)(see <cit.>), ∇ u_L^q(Ω)≤∇ u_1^2/q-1∇ u_2^2-2/q and the compact embedding BL_0(Ω)↪ W^1,1_0(Ω).Since W^2,1_Δ↪ BL_0(Ω) is continuous (Lemma <ref>), the results of Proposition <ref> are true with BL_0(Ω) replaced by W^2,1_Δ(Ω).§.§ DensityLet u ∈ B L_0(Ω). Then there exists a sequence (u_n)_n∈⊆ C^∞(Ω) ∩ C(Ω) ∩ B L_0(Ω) converging strictly to u, that is:u_n → ustrongly inW^1,1_0(Ω), |Δ u_n|_T→ |Δ u|_T.See <cit.>. We also mention that the closure ofC^∞_c(Ω)with respect to the norm|Δ·|_1is denotedW^2,1_Δ,0(Ω)and it is studied in <cit.> in the context of eigenvalue problems. If u∈C^∞(Ω) ∩ C(Ω)∩ BL_0(Ω), then Δ u∈ L^1(Ω). In particular,C^∞(Ω)∩ C(Ω)∩ BL_0(Ω)⊆ W^2,1_Δ(Ω).This is stated without proof in <cit.>. We include here a proof for completeness.Given n∈ sufficiently large, take φ_n∈ C^∞_c(Ω) such that φ_n=1 in the set {x∈Ω: (x,∂Ω)≥ 1/n}. Define the functionv_n=(sign(Δ u)∗η_1/n) φ_n/|(sign(Δ u)∗η_1/n) φ_n|_∞∈ C^∞_c(Ω),where η_1/n is a sequence of mollifiers. Then v_n_∞=1, v_n→sign(Δ u) a.e. in Ω. By Fatou's lemma (v_n Δ u is bounded and |Ω|<∞),∫_Ω |Δ u|=∫_Ωlim_nΔ uv_n≤lim inf_n ∫Δ u v_n=lim inf_n ∫ u Δ v_n≤ |Δ u|_T<∞.Let Ω be a bounded domain of class C^2,γ, for some γ∈ (0,1]. Then the space C^2,γ(Ω) is dense in W^2,1_Δ (Ω). 
Consider u∈ W^2,1_Δ(Ω), which in particular means that Δ u∈ L^1(Ω), and fix ε>0. There exists f∈ C^∞_c(Ω)⊂ C^2,γ(Ω̄) such that |f−Δ u|_1<ε. If v denotes the solution to the Dirichlet problem Δ v=f in Ω and v=0 on ∂Ω then, by elliptic global regularity (see <cit.>), v∈ C^2,γ(Ω̄) and |Δ v−Δ u|_1=|f−Δ u|_1<ε, by construction. Recalling that |Δ·|_1 is a norm in W^2,1_Δ(Ω) (by Lemma <ref>), we see that C^2,γ(Ω̄) is dense in W^2,1_Δ(Ω).
Proposition 2.7 in <cit.> states that BL_0(Ω)↪ L^{1^{**}}(Ω) is continuous, among other results. This is not true, at least for N≥ 3, as can be shown by a simple counterexample: on Ω=B_1, we have G_{B_1}(·,0)∈ BL_0(B_1) with |Δ G_{B_1}(·,0)|_T=|δ_0|_T=1 and, more explicitly, G_{B_1}(x,0)=c_N(|x|^{2−N}−1) for x∈ B_1, which clearly does not lie in L^{1^{**}}(B_1) (see also the counterexample in <cit.>). This also implies that C^∞(Ω)∩ C(Ω̄) ∩ BL_0(Ω) is not contained in W^2,1(Ω). In fact, if this were true, then the proof of <cit.> would be correct, showing the wrong fact that BL_0(Ω)↪ L^{1^{**}}(Ω) is continuous. The “proof” would go as follows: given u∈ BL_0(Ω), by Lemma <ref> we can take a sequence (u_n)⊂ C^∞(Ω)∩ C(Ω̄) ∩ BL_0(Ω) such that u_n→ u strictly in BL_0(Ω). If C(Ω̄) ∩ C^∞(Ω) ∩ BL_0(Ω)⊂ W^2,1(Ω), then (u_n)⊂ W^2,1(Ω), which is continuously embedded in L^{1^{**}}(Ω), and the rest of the proof easily follows. Although this claim in <cit.> is not true, we emphasize that this fact is never used in the proof of the main theorems, which remain valid. Since the paper deals with subcritical nonlinearities, only the compact embeddings presented in our Proposition <ref> are needed. We also point out that the same incorrect fact is also used in <cit.> to prove the compactness of BL_0(Ω)↪ L^q(Ω) for q∈ [1,1^{**}); although the proof contains a mistake, the result is correct, see our Proposition <ref>.
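For completeness, the failure of the endpoint embedding can be verified directly from the explicit expression of G_{B_1}(·,0); the following short computation assumes the usual convention 1^{**}=N/(N−2) from (<ref>) and keeps only the singular part of the Green function, which near the origin behaves like c_N|x|^{2−N}:
∫_{B_{1/2}} |x|^{(2−N)\,1^{**}} dx = ∫_{B_{1/2}} |x|^{(2−N)\frac{N}{N−2}} dx = ∫_{B_{1/2}} |x|^{−N} dx = ω_{N−1} ∫_0^{1/2} \frac{dr}{r} = +∞,
so that G_{B_1}(·,0)∉ L^{1^{**}}(B_1), even though |Δ G_{B_1}(·,0)|_T=1 is finite.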
G04 Adimurthi and M. Grossi, Asymptotic estimates for a two-dimensional problem with polynomial nonlinearity. Proc. Amer. Math. Soc., 132(4):1013–1019, 2004.
zbMATH04132888 F.J. Almgren, Jr. and E.H. Lieb, Symmetric decreasing rearrangement can be discontinuous. Bull. Amer. Math. Soc. (N.S.), 20(2):177–180, 1989.
BP18 S. Barile and M.T.O. Pimenta, Some existence results of bounded variation solutions to 1-biharmonic problems. J. Math. Anal. Appl., 463(2):726–743, 2018.
AG06 M. Ben Ayed, K. El Mehdi, and M. Grossi, Asymptotic behavior of least energy solutions of a biharmonic equation in dimension four. Indiana Univ. Math. J., 55(5):1723–1749, 2006.
BMT14 D. Bonheure, E. Moreira dos Santos, and H. Tavares, Hamiltonian elliptic systems: a guide to variational frameworks. Port. Math., 71(3-4):301–395, 2014.
BSR12 D. Bonheure, E. Moreira dos Santos, and M. Ramos, Ground state and non-ground state solutions of some strongly coupled elliptic systems. Trans. Amer. Math. Soc., 364(1):447–491, 2012.
zbMATH00840906 H. Brezis, T. Cazenave, Y. Martel, and A. Ramiandrisoa, Blow up for u_t−Δ u=g(u) revisited. Adv. Differential Equations, 1(1):73–90, 1996.
CassaniRufTarsi D. Cassani, B. Ruf, and C. Tarsi, Best constants in a borderline case of second-order Moser type inequalities. Ann. Inst. H. Poincaré C Anal. Non Linéaire, 27(1):73–93, 2010.
ZHW22 Z. Chen, H. Li, and W. Zou, Asymptotic behavior of positive solutions to the Lane-Emden system in dimension two. Preprint available at arXiv:2204.03422, 2022.
CK19 W. Choi and S. Kim, Asymptotic behavior of least energy solutions to the Lane-Emden system near the critical hyperbola. J. Math. Pures Appl. (9), 132:398–456, 2019.
ClementdeFigueiredoMitidieri Ph. Clément, D.G. de Figueiredo, and E. Mitidieri, Positive solutions of semilinear elliptic systems. Comm. Partial Differential Equations, 17(5-6):923–940, 1992.
ClementvanderVorst Ph. Clément and R.C.A.M. Van der Vorst, On a semilinear elliptic system. Differential Integral Equations, 8(6):1317–1329, 1995.
GT98 D. Gilbarg and N.S. Trudinger, Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition.
Grafakos L. Grafakos, Classical Fourier analysis, volume 249 of Graduate Texts in Mathematics. Springer, New York, third edition, 2014.
guerra2007asymptotic I.A. Guerra, Asymptotic behaviour of a semilinear elliptic system with a large exponent. J. Dynam. Differential Equations, 19(1):243–263, 2007.
G08 I.A. Guerra, Solutions of an elliptic system with a nearly critical exponent. Ann. Inst. H. Poincaré C Anal. Non Linéaire, 25(1):181–200, 2008.
ianni2021sharp I. Ianni and A. Saldaña, Sharp asymptotic behavior of radial solutions of some planar semilinear elliptic problems. J. Differential Equations, 304:102–164, 2021.
KS23 N. Kamburov and B. Sirakov, Uniform a priori estimates for positive solutions of the Lane-Emden equation in the plane. Calc. Var. Partial Differential Equations, 57(6):Paper No. 164, 8, 2018.
inftybilaplacian N. Katzourakis and E. Parini, The eigenvalue problem for the ∞-bilaplacian. NoDEA Nonlinear Differential Equations Appl., 24(6):Paper No. 68, 25, 2017.
KS07 B. Kawohl and F. Schuricht, Dirichlet problems for the 1-Laplace operator, including the eigenvalue problem. Commun. Contemp. Math., 9(4):515–543, 2007.
LSW63 W. Littman, G. Stampacchia, and H.F. Weinberger, Regular points for elliptic equations with discontinuous coefficients. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3), 17:43–77, 1963.
Montenegro M. Montenegro, The construction of principal spectral curves for Lane-Emden systems and applications. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 29:193–229, 2000.
S08 E. Moreira dos Santos, Multiplicity of solutions for a fourth-order quasilinear nonhomogeneous equation. J. Math. Anal. Appl., 342(1):277–297, 2008.
PRT12 E. Parini, B. Ruf, and C. Tarsi, The eigenvalue problem for the 1-biharmonic operator. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 13(2):307–332, 2014.
PRT15 E. Parini, B. Ruf, and C. Tarsi, Higher-order functional inequalities related to the clamped 1-biharmonic operator. Ann. Mat. Pura Appl. (4), 194(6):1835–1858, 2015.
PonceBook A.C. Ponce, Elliptic PDEs, measures and capacities, volume 23 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich, 2016. From the Poisson equations to nonlinear Thomas-Fermi problems.
ren1994two X. Ren and J. Wei, On a two-dimensional elliptic problem with large exponent in nonlinearity. Trans. Amer. Math. Soc., 343(2):749–763, 1994.
ren1996single X. Ren and J. Wei, Single-point condensation and least-energy solutions. Proc. Amer. Math. Soc., 124(1):111–120, 1996.
Stampacchia G. Stampacchia, Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus. Ann. Inst. Fourier (Grenoble), 15:189–258, 1965.
s86 A. Szulkin, Minimax principles for a class of lower semicontinuous functions and applications to nonlinear boundary value problems. In Nonlinear functional analysis and its applications (Maratea, 1985), volume 173 of NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci., pages 393–399. Reidel, Dordrecht, 1986.
zbMATH03531830 G. Talenti, Elliptic equations and rearrangements. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 3(4):697–718, 1976.
W97 M. Willem,
Minimax theorems, volume 24 of Progress in Nonlinear Differential Equations and their Applications. Birkhäuser Boston, Inc., Boston, MA, 1996.

Nicola Abatangelo, Dipartimento di Matematica, Alma Mater Studiorum Università di Bologna, P.zza di Porta S. Donato 5, 40126 Bologna, Italy
Alberto Saldaña, Instituto de Matemáticas, Universidad Nacional Autónoma de México, Circuito Exterior, Ciudad Universitaria, 04510 Coyoacán, Ciudad de México, Mexico
Hugo Tavares, CAMGSD - Centro de Análise Matemática, Geometria e Sistemas Dinâmicos, Departamento de Matemática do Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal
E-mail address: [email protected]
York Plasma Institute, Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
United Kingdom Atomic Energy Authority, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxon, OX14 3DB, United Kingdom
York Plasma Institute, Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
DIFFER-Dutch Institute for Fundamental Energy Research, De Zaale 20, 5612 AJ Eindhoven, The Netherlands
United Kingdom Atomic Energy Authority, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxon, OX14 3DB, United Kingdom
York Plasma Institute, Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Commonwealth Fusion Systems, Cambridge, MA 02139, USA
DIFFER-Dutch Institute for Fundamental Energy Research, De Zaale 20, 5612 AJ Eindhoven, The Netherlands
Aalto University, Otakaari 24, 02150 Espoo, Finland
United Kingdom Atomic Energy Authority, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxon, OX14 3DB, United Kingdom
DIFFER-Dutch Institute for Fundamental Energy Research, De Zaale 20, 5612 AJ Eindhoven, The Netherlands
DIFFER-Dutch Institute for Fundamental Energy Research, De Zaale 20, 5612 AJ Eindhoven, The Netherlands

The linear plasma machine Magnum-PSI can replicate conditions similar to those found in a tokamak at the end of the divertor leg. A dedicated capacitor bank, in parallel to the plasma source, can release a sudden burst of energy, leading to a rapid increase in plasma temperature and density and resulting in a transient heat flux increase of half an order of magnitude, a so-called ELM-like pulse. For both the steady state and the pulse, the neutral pressure in the target chamber is then increased, causing the target to transition from an attached to a detached state. In the first paper related to this study<cit.> direct measurements of the plasma properties are used to qualitatively determine the effect of detachment on the ELM-like pulse. Here, measurements from a purposely improved optical emission spectrometer were used in conjunction with other diagnostics to build a Bayesian algorithm capable of inferring the most likely properties of the plasma, poloidally and temporally resolved. This is used to show the importance of molecular assisted reactions. Molecular processes, and especially molecular activated dissociation, are found to be important in the exchange of potential energy with the plasma, while less so in radiating the energy from the ELM-like pulse. At low target chamber pressure, the plasma generated via ionisation during the hotter part of the ELM-like pulse exceeds that produced by the plasma source, a unique case in linear machines. At high target chamber pressure, molecular activated recombination contributes up to a third of the total recombination rate, contributing to the reduction of the target particle flux. Some metrics that estimate the energy lost by the plasma per interaction with neutrals, potentially relevant for the portion of the tokamak divertor leg below ∼10eV, are then tentatively obtained.
Effect of detachment on Magnum-PSI ELM-like pulses: II.
Spectroscopic analysis and role of molecular assisted reactions Magnum-PSI Team======================================================================================================================= § INTRODUCTION In tokamaks, the divertor region is crucial for managing the exhaust of high-temperature plasma. Current technology limits the heat flux to the divertor target, and mitigation methods are needed to reduce it to acceptable levels (<10MW/m^2).<cit.> One mitigation method is detachment, creating a low-temperature buffer in front of the target, which dissipates the plasma energy and particles via radiation and recombination. Detachment can be induced by increasing divertor density, seeding impurities, or reducing power flow.<cit.> Detachment has been demonstrated to significantly reduce target heat flux in experiments like Alcator C-Mod and AUG.<cit.> In high performance core regimes, short bursts of heat and particles called edge localised modes (ELMs) can increase heat flux to intolerable levels.<cit.> When ELMs occur during detachment, the divertor plasma temporarily reattaches, helping to reduce target heat load by dissociating and ionizing neutrals. However, studying this dynamic process is challenging. This study aims to investigate how ELM-like pulses interact with a detached target plasma, and to gain insights into the processes responsible for removing energy from the ELM before it reaches the target. During detachment the region in front of the target is cold, so neutral and molecular hydrogen densities are high, and thus the role of those species in removing power, momentum and particles is studied. Of particular interest is the relevance of molecular assisted processes (ionisation (MAI), recombination (MAR) and dissociation (MAD)) over atomic ones. This was studied before in tokamaks and linear machines, but never during the ELM burn through.<cit.> This work is also important because different phenomena are at play and have to be correctly understood to gain predictive capability for the ELM burn through in tokamaks. The filamentary nature of the tokamak ELM makes a 3D treatment necessary,<cit.> while its fast transient nature makes kinetic effects relevant.<cit.> On top of this, the interaction with cold neutrals and solid surfaces, and the cooling of the plasma to sub-eV temperatures, require transport and a large number of interactions to be accounted for.<cit.> The presence of molecular precursors like H_2^+ and H^- and their interactions with plasma and neutrals further complicate the picture, so it is necessary to assess whether they play a significant role in the burn through process. This is the second paper deriving from this study. The first shows what can be inferred from direct measurements of the plasma properties (temperature, density, light emission) and interaction with the target (thermography).<cit.> To investigate these phenomena consistently, ELM-like pulses are reproduced in Magnum-PSI, a linear plasma device in the DIFFER laboratory, The Netherlands. Unlike a tokamak, Magnum-PSI and other linear machines feature a simpler geometry, allowing for an easier interpretation, improved repeatability, and enhanced diagnostic access. Magnum-PSI is also capable of producing steady-state target perpendicular heat fluxes that are comparable to those expected at the ITER target. A capacitor bank (CB) is connected in parallel to the plasma source power supply and generates the ELM-like pulses, while increasing hydrogen neutral pressure in the target chamber induces detachment.
The Optical Emission Spectroscopy (OES) setup was improved to increase the time resolution in order to collect data on the ELM-like pulse behaviour. An analysis of the power and particle balance in the plasma column inside the target chamber, separating molecular and atomic contributions to power and particle losses, was performed through a purpose-built Bayesian routine that also makes use of the new OES data.

Linear machines are significantly different from the exhaust of tokamaks. In a tokamak divertor, especially in the high recycling regime preceding detachment, the main contributor to the ion target flux is not the flow from upstream but the ionisation of neutrals recycling at the target, while the upstream acts as a power source. The divertor leg, in terms of particle balance, acts more as a closed system being supplied with energy from the upstream boundary <cit.>. In linear machines, instead, the upstream plasma flow usually dominates, and the ionisation source is minor. The plasma flows from the source to the target, where its temperature and density are usually too low to cause recycling. Another difference is the connection length, of the order of tens of meters in tokamaks, while it is less than one in Magnum-PSI. In tokamaks, the wave of hotter plasma due to the ELM can propagate along field lines, progressively burning through the barrier of cold neutrals generated by detachment. The temporal dynamics of the upstream conditions have comparable timescales to the transport of heat along the magnetic field lines<cit.>. In linear machines the flow from source to target is much faster than the evolution upstream, so that the discharge behaves more as a succession of steady states. It is therefore difficult, in linear machines, to study the effect of the ELM burning through the gas buffer created by detachment, while they should be more representative of the cross field neutral transport.

The ELM frequency can be high in tokamaks, from fractions to tens of kHz. Considering that recycling confines the neutrals at the target, it was postulated from simulations that the neutral pressure in the divertor could progressively increase as more ELMs happen,<cit.> making it difficult to study a quasi-steady-state reference state. Due to the lower ELM frequency achievable in Magnum-PSI, and the low ion source, this effect is usually not observed and the effect of repeated ELMs can be studied with respect to the same steady state case. This enables delineating the impact of an ELM on a neutral gas buffer from changes of the neutral gas buffer due to previous ELMs interacting with the target. In tokamaks there is also a strong correlation between ELM frequency and energy released during the ELM,<cit.> making the independent study of these two aspects difficult, while in linear machines these can be more easily controlled.

The presence of impurities likely plays a role in the interactions of ELMs with detachment, both due to increased radiative and ionisation power losses, and because of the increased sputtering when reaching the target. This is not guaranteed to be reproduced in linear machines unless specific setups are arranged. Linear machines, despite these differences, remain valuable tools for understanding the effects of neutral pressure on tokamak ELMs and what processes are involved.

In <cit.> it was demonstrated that increasing the neutral pressure within the target chamber results in removing energy from the ELM-like pulses.
In some cases, the ELM-like pulses have no effect on the target temperature (Section V). The target heating from the ELM-like pulse is comparable to measurements from current tokamaks, although considerably lower than what is anticipated in larger scale devices like ITER. Furthermore, this can be mitigated by raising the neutral pressure. Observations from the visible light fast camera (Section III) were also used to divide the interaction of the ELM-like pulses into stages, depending on the detachment in steady state and during the ELM. In Stage 1 the plasma is attached to the target both in steady state and during the ELM. In Stage 2, the target is cold in between ELM-like pulses but significant target heating can be provided transiently. In Stage 3 there is no significant heat transferred to the target either before or during the ELM-like pulse. The visible light emission is not homogeneous in the whole target chamber, with a peak close to the target. The emission in the bulk of the target chamber is quite uniform and decreases at the target as the density increases. This observation will be used later to support the approximation that volumetric power losses can be considered constant from skimmer to target. It will here be shown that, for increasing neutral pressure, the energy and particles of the ELM-like pulse will increasingly be removed in the Magnum-PSI target chamber volume and that an important role is played by plasma-molecule interactions. The radiated power losses are a significant power loss channel, but elastic collisions with neutrals and exchanges of potential energy dominate in reducing the plasma temperature to levels where recombination becomes important.§ EXPERIMENTAL CONDITIONS A diagram of the Magnum-PSI device is shown in <ref>. <ref> shows the portion of experimental conditions used on Magnum-PSI for this study that will be analysed in this paper, termed strong pulses. An additional set of pulses characterized by a weaker magnetic field and lower energy stored in the capacitor banks, referred to as weak pulses, also exists but is not analysed in this paper. For weak pulses the plasma temperature becomes so low that the TS measurement only works at the peak of the ELM-like pulse and is not representative of the entire plasma column. Consequently, weak pulses are unsuitable for analysis using the Bayesian model. For a more detailed description of the discharge procedure and the sampling strategy, please refer to Appendix A of <cit.>. § DIAGNOSTICS In this section we describe the diagnostics utilised in this study. While some have the time resolution required to observe the evolution of individual ELM-like pulses, others can observe only a limited portion. Therefore, the sampling strategy adopted allows us to reconstruct the full ELM-like pulse evolution. The diagnostics used in this paper are:* Thomson scattering (TS): electron temperature and density * Jarell-Ash, Czerny-Turner spectrometer OES: Hydrogen atomic line emission (Balmer series p=4-8 → 2)* Power source (ADC): temporal variation of the power delivered to the plasma§.§ Optical emission spectrometer (OES) The main component is a Jarell-Ash spectrometer connected to a fibre optic bundle with 40 individual cores that view the plasma radially with a spatial resolution of 1.06mm and an individual line of sight (LOS) width of ∼1mm. See the OES LOS in the target chamber in <ref>; further detail is given by Barrois.<cit.>
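Since the 40 cores view the plasma column along parallel radial chords, the measured brightness is line-integrated and must later be inverted into a local emissivity (step 30 in Section IV). The following is a minimal sketch of such an inversion via onion peeling, under the same assumptions used later (poloidal symmetry, optically thin plasma); shell counts and radii below are placeholders, not the actual instrument geometry.

import numpy as np

def path_lengths(r):
    # Path length of the chord with impact parameter r[i] through the annular
    # shell bounded by r[j] and r[j+1]; upper-triangular by construction.
    n = len(r) - 1
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt(max(r[j + 1]**2 - r[i]**2, 0.0))
                             - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))
    return L

def onion_peel(brightness, r):
    # Solve B_i = sum_j L_ij * eps_j for the shell emissivities eps_j
    return np.linalg.solve(path_lengths(r), brightness)

# Example: 20 shells out to 2 cm with a synthetic parabolic emissivity profile
r = np.linspace(0.0, 0.02, 21)
eps_true = 1e21 * (1.0 - (r[:-1] / 0.02)**2)
B = path_lengths(r) @ eps_true      # forward model: line-integrated brightness
eps_rec = onion_peel(B, r)          # recovers eps_true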
Before this work was conducted, the camera connected to the spectrometer was a Princeton Instruments PIXIS 2048B, with a shutter speed of the order of seconds. In order to obtain information on the behaviour during the ELM-like pulse, this camera was replaced in 2019 by the author and Gijs Akkermans with a Photometrics Prime95B 25mm RM16C with a minimum integration time of 20μ s. The camera has a CMOS sensor with rolling shutter, meaning the exposure of one row happens after the previous row is completed, forcing the accumulation of data on multiple ELM-like pulses to reconstruct the full spatial and temporal brightness profile of the pulse. Details on how to sample all stages of the ELM-like pulse so that one obtains a coherent picture are shown in Appendix A of <cit.>, while the steps from raw images to line emissivity are detailed in <ref>. The sensitivity calibration was done using a Labsphere, as explained extensively by Barrois.<cit.> During the experiments, hydrogen Balmer line brightnesses (p=4-∞→ 2) were recorded; only lines p=4-8 → 2 were considered to have a sufficient signal to noise ratio and were used in this study.§.§ Thomson scattering (TS)The Thomson scattering (TS) diagnostic allows the measurement of both T_e and n_e. A laser beam is fired through the plasma and the scattered light is collected from the measurement volume determined by the intersection of the laser beam with each TS viewing LOS. 50 LOS are simultaneously measured in a radial plane (the same as OES, as shown in <ref>), to find the profile of the plasma properties. In Magnum-PSI, TS is used for both steady state plasmas and, albeit with reduced performance, time dependent measurements. For the experiments in this study, TS was used in time dependent mode, with 50μ s time resolution and integration time. This returns an uncertainty <3% for n_e, and <10% for T_e for n_e>2.8 · 10^20#/m^3. As the time between consecutive measurements has to match the laser frequency of 10Hz, we were forced to accumulate data over multiple pulses to reconstruct the entire ELM-like pulse time evolution. <cit.>§.§ Power supplyThe steady state power supply regulates the DC plasma source voltage, while a capacitor bank composed of 28 individual sections is connected in parallel to cause a strong increase of the plasma source current. The energy stored in each capacitor (shown in <ref>) is given by 1/2CV^2. During ELM-like pulses, 92% of the electrical energy dissipated in the plasma source is converted into plasma energy.<cit.> Additionally, some energy is dissipated before the plasma reaches the target skimmer, within the source and heating chambers. Here the pressure is maintained as low as possible via differential pumping, thereby reducing the interaction of the plasma column with the cold gas in these chambers and minimizing the associated energy losses. From the increase in cooling water temperature in steady state, it is estimated that the source and target skimmers absorb ∼10% and <2% of the electrical input energy, respectively.
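In practice, the pulse energy delivered to the plasma follows from integrating the digitised source voltage and current traces; a minimal sketch is given below, where the waveforms, capacitance and charging voltage are assumed placeholder values (only the 92% conversion efficiency and the 28 sections come from the text above).

import numpy as np

# Hypothetical digitised source voltage [V] and current [A] over one pulse;
# the real traces come from the power supply ADC.
t = np.linspace(0.0, 1.5e-3, 3000)                      # s
V = 150.0 + 350.0 * np.exp(-((t - 3e-4) / 2e-4) ** 2)
I = 100.0 + 900.0 * np.exp(-((t - 3e-4) / 2e-4) ** 2)

E_electrical = np.trapz(V * I, t)   # J, dissipated in the plasma source
E_plasma = 0.92 * E_electrical      # 92% conversion efficiency during the pulse

# Stored energy of the bank: 1/2 C V^2 per section, 28 sections in parallel
E_bank = 28 * 0.5 * 8e-3 * 400.0**2  # J (C and charging voltage are assumed)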
To gain more insight than what can be gained by individual diagnostics, presented in <cit.>, the OES data will be combined with that from TS and the input power source measurement within a Bayesian analysis framework. The purpose is to identify which processes are most relevant in the increase of energy removed in the volume for increasing levels of steady state detachment, driven by target chamber neutral pressure, and possibly their evolution in time.§ BAYESIAN ANALYSISIn this section we describe our analysis technique, which simultaneously takes into account information from different diagnostics on Magnum-PSI data to gain more insight into the dominant atomic and molecular processes during an ELM-like pulse. A significant aspect of this analysis is the inclusion of the molecular species H_2, H_2^+ and H^-. These species interact with the plasma, leading to the formation of excited hydrogen atoms, with related power and particle sources/sinks, which have a significant influence on the behaviour of Magnum-PSI target chamber plasmas, driving them into detachment. Since the interaction of these species with the plasma causes hydrogen atomic line emission, they will be referred to as molecular precursors. The ordinary method for inferring the plasma properties from Balmer emission lines is to consider that the emission from higher-p H excited states is mainly generated via electron-ion recombination (EIR), while the emission from lower excited states is generated via electron impact excitation (EIE); molecular contributions are generally neglected for simplicity. This normally allows one to determine if the plasma is ionising, recombining or in between.<cit.> Under conditions like those in this work, on the other hand, molecular contributions can be significant.<cit.> Plasma-molecule interactions lead to hydrogen atoms generally excited to low p levels, and their distribution across p is similar to that of EIE, thus complicating the Balmer line analysis (see <ref>). A Bayesian analysis, which employs a probabilistic approach, is used to determine the plasma/molecule/atom characteristics (T_e, n_e, n_H/n_e, n_H_2/n_e, n_H_2^+/n_H_2, n_H^-/n_H_2; one of these quantities is referred to as Θ_i, while a set of 6 as Θ) that best match both the line emission (from OES) and the electron temperature and density (from TS). This approach combines information from different diagnostics, incorporates priors from simulations as well as reaction rates to find the best match to experimental data, providing insights into the dominant processes and their effects on the energy and particle balance. The main advantage of this approach is that it can not only account for all uncertainties and non-linear rates, but also allows calculating the probability density function (PDF) of the quantities of interest. This implies that multiple non-unique solutions, if present, can be found. Probabilistic approaches have been used before to infer the full state of the divertor plasma in tokamaks, but never during the ELM burn through.<cit.> First, the parameter space of Θ is defined, forming a regular grid. The range and samples of the Θ_i parameter are defined based on Thomson scattering and other priors. Given a certain combination of parameters Θ, the outputs of all the quantities of interest can then be calculated, like line emission, power and particle sinks and sources, etc. Those predicted observables can then be compared with the priors to find the probability that the calculated values match the expected ones. This comparison and the calculation of the observables depend on the specifics of the problem and the simplifications used. For a combination of parameters Θ, the probability from the priors is multiplied with the probability from the comparison of the calculated and measured line emission, returning the likelihood of that particular Θ. Starting with this grid, one can either add individual Θ, searching for the maximum of the likelihood, or refine the grid further around the best Θ. A grid refinement is done here to retain a grid structure and simplify the numerical problem. This is done only twice to limit the computational cost and memory requirements. Afterwards, the quantities of interest, for example terms of the power balance, can be calculated for each Θ and their probability density function (PDF) found.
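A minimal sketch of this grid-based inference, for one (r,t) point, is given below. The axes, sample counts, synthetic "measurement" and the toy forward model are placeholders (the real forward model is built on ADAS and Yacora coefficients, described in the steps that follow); only the structure — prior times emissivity likelihood on a regular Θ grid, then marginalisation — reflects the procedure above.

import numpy as np
from itertools import product

# Axes of the grid Theta = (T_e [eV], n_e [m^-3], n_H/n_e, n_H2/n_e,
# n_H2+/n_H2, n_H-/n_H2); ranges and 5 samples per axis are placeholders.
axes = [np.geomspace(lo, hi, 5) for lo, hi in
        [(0.5, 15.0), (1e19, 5e21), (1e-3, 10.0), (1e-3, 10.0),
         (1e-6, 1e-2), (1e-6, 1e-2)]]

def forward_emissivity(theta):
    # Placeholder for the ADAS/Yacora forward model: modelled p=4-8 -> 2
    # Balmer emissivities for one parameter combination.
    T_e, n_e, f_H, f_H2, f_H2p, f_Hm = theta
    return n_e**2 * 1e-38 * (1.0 + f_H + f_H2) * np.exp(-np.arange(4, 9) / max(T_e, 0.1))

def log_gauss(model, data, sigma):
    return -0.5 * ((model - data) / sigma) ** 2

# TS priors and a synthetic Abel-inverted emissivity "measurement" at one (r, t)
T_e_TS, dT_e, n_e_TS, dn_e = 3.0, 0.3, 1.0e20, 3e18
measured = forward_emissivity((3.0, 1.0e20, 0.5, 0.5, 1e-4, 1e-4))
sigma = 0.2 * measured

log_p = np.empty([len(a) for a in axes])
for idx in product(*(range(len(a)) for a in axes)):
    theta = tuple(axes[k][i] for k, i in enumerate(idx))
    lp = log_gauss(theta[0], T_e_TS, dT_e) + log_gauss(theta[1], n_e_TS, dn_e)
    lp += log_gauss(forward_emissivity(theta), measured, sigma).sum()
    log_p[idx] = lp

# Marginalised PDF of T_e: exponentiate and sum over all other axes
p = np.exp(log_p - log_p.max())
pdf_T_e = p.sum(axis=(1, 2, 3, 4, 5))
pdf_T_e /= pdf_T_e.sum()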
<ref> gives the full list of assumptions used to simplify the physics of the plasma of interest (from the target skimmer to the target, see <ref>). In short, the plasma is considered homogeneous along its length (z) from the target chamber skimmer to the target within each time step, and poloidally (ϕ in <ref>) symmetric for a given radius (r). As mentioned before, this is supported by the relatively uniform visible light emission far from the target found in Stages 1 and 2.<cit.> This means that the ELM-like pulse is not localised at a particular spatial location but happens along the whole plasma column, matching well the fast nature of the transport along field lines (∼ sonic flow, <20 μ s to flow through the target chamber) compared to the changes in plasma conditions over time (changes well resolved with 50 μ s time steps). The conditions at the target chamber skimmer will be considered as the input to the plasma column. The presence of impurities is monitored by a 6-channel Avantes AvaSpec-2048-USM2-RM survey optical emission spectrometer covering the range 299-950nm.<cit.> The survey spectrometer's LOSs are in the target chamber, and impurities like metals from the power source or oxygen from cooling water are only detectable in significant amounts when the plasma source fails. This shows that impurities from the source are ordinarily efficiently removed in Magnum-PSI by differential pumping. Carbon can play a significant role when graphite targets are used<cit.>, but this is not a concern in these experiments, in which a TZM target was used. The integral over the ELM-like pulse of quantities of interest, like terms of the power balance, is obtained by summing the contribution from each time step and radial location. In the following section we will explain the method used to determine the probability associated with an output at a given time and radial location. §.§ Analysis steps We will describe here in more detail the steps to first initialize the parameter space based on TS and other priors (<ref>). Then we will show how to compare the modelled quantities and TS with the priors to obtain the probability of each Θ (<ref>). Lastly, the comparison with the measured Balmer line emission, the grid refinement process and the determination of the quantity of interest PDFs will be illustrated (<ref>). The numbering of each step corresponds with the numbers in the boxes in <ref>, <ref>, <ref>.§.§.§ Parameter space initialisationOur Bayesian analysis first finds the range of parameters to utilize. This is shown in <ref>. Every point in time and radius (t,r) during the ELM-like pulse is considered independently.
Referring to <ref>, the meaning of the steps is as follows: 1,2 Thomson scattering n_e(r,t), T_e(r,t): The measured T_e and n_e and their uncertainties are used to specify the range and grid elements of T_e and n_e. From every T_e,n_e combination the samples for the other Θ_i are calculated.4 B2.5-Eunomia n_H/n_e(r,t), n_H_2/n_e(r,t): We utilize B2.5-Eunomia simulations, a multi-fluid plasma code coupled with a non-linear Monte Carlo transport code for neutrals<cit.>, to determine scalings for n_H_2 and n_H with T_e. We utilize these scalings to obtain ranges for n_H_2/n_e and n_H/n_e (see <ref> and <ref>).3 B2.5-Eunomia T_H(r,t), T_H_2(r,t)[=T_H_2^+(r,t)=T_H^-(r,t)-2.2eV], T_H^+: Similarly, values of T_H, T_H_2 and T_H^+ are extracted from B2.5-Eunomia simulation scalings with T_e. Given that the molecular precursor H_2^+ is predominantly generated from H_2, it is assumed that T_H_2=T_H_2^+. In the creation of H^- from H_2 the ion can get some of the Franck-Condon energy of the H_2 bond (2.2eV per nucleon). For this reason 2.2eV are added to T_H_2 to estimate T_H^-<cit.> (see <ref>).5 AMJUEL n_H^-/n_H_2(r,t), n_H_2^+/n_H_2(r,t): H_2^+/H_2 and H^-/H_2 density ratios are calculated from the AMJUEL collection of cross sections, reaction rates and ratios <cit.> to determine the n_H^-/n_H_2 and n_H_2^+/n_H_2 ranges used as priors (see <ref> and <ref>). §.§.§ Prior probability distribution Once the initial parameter space is determined, the prior probability distribution is calculated. The process for this is shown in <ref> and the meaning of the steps is as follows:7 ADAS collisional radiative model (atoms): The Atomic Data and Analysis Structure (ADAS) collisional radiative model<cit.> is used to calculate the line emission (see <ref>), the power losses (see <ref>) and the particle balance sources/sinks (see <ref>) due to atomic processes. The reactions considered and the corresponding coefficients are indicated in <ref>. This type of analysis is fairly standardised, see for example <cit.>.8 Yacora collisional radiative model (molecules): The Yacora collisional radiative model<cit.> is used to calculate the line emission due to molecular precursors via the population coefficients (see <ref>) and the radiative power losses (see <ref>). The molecular precursors that will be utilized in this analysis are H_2, H_2^+ and H^-, based on the work by Verhaegh <cit.> (see the reactions in <ref>). H_3^+ could also be considered, but its relevance is expected to be much lower than that of H_2^+ and H^-, so it is dropped to limit the number of variables.<cit.>9 AMJUEL/Janev reaction rates (molecules): The AMJUEL database and a collection of reaction rates from Janev <cit.> are used to calculate the rate of reactions involving molecular precursors (see <ref>, <ref>). The reaction rates are then used to calculate sources and sinks in the power and particle balance in the plasma column (see <ref>, <ref>).19 Global power input from plasma source: The power input from the plasma source is calculated from the measurement of voltage and current on the plasma source. From a study by Morgan<cit.>, 92% of the electrical energy is transferred to the plasma during an ELM-like pulse.
This value will be considered valid for each time step.18 n_e(r,t), T_e(r,t) (TS) at lowest target chamber pressure: It is assumed that for the lowest pressure case the interactions between the plasma column and the background gas in the target chamber are negligible; these conditions therefore represent the input conditions for all cases that differ only by the target chamber neutral pressure. They are referred to as n_e,in, T_e,in and used to determine the input power and particle profiles (see <ref>).21 Power balance for all r → v_in: Assuming that the plasma flows from skimmer to target with the same Mach number (M_in(t)) across different radii, it is possible to relate the total input power (P_source(t)) with the input conditions (n_e,in(r,t), T_e,in(r,t)). It is then possible to determine M_in(t) and then v_in(r,t). See <ref> for details.22 Power input from plasma source: The maximum power input is calculated as the energy flow due to a plasma in the input conditions (n_e,in(r,t), T_e,in(r,t)) at flow velocity v_in(r,t), plus the energy associated with the depletion of the plasma at the specific neutral pressure of the experiment (n_e(r,t), T_e(r,t)) in one time step. See <ref> for details.20 Particle input from plasma source: Similarly to the previous step, the maximum particle input is obtained as the input particle flow using n_e,in(r,t) and v_in(r,t), plus the particle flow associated with the dissipation of the plasma at the specific neutral pressure of the experiment (n_e(r,t)) in one time step. See <ref> for details.10 Power losses model: The power losses in the annular section of the plasma column of interest are modelled as per <ref>. The effect of impurities is not considered, as it is assumed that most of the impurities introduced by the plasma source anode and cathode are removed by differential pumping in the source and heating chamber.12 Modelled plasma radiative power losses: The total radiated power from the plasma column is calculated. For atomic processes ADAS coefficients are used. For molecular processes Yacora coefficients are used, summing the line emission for all possible transitions (the highest excited state considered is p=13). Radiation caused directly by H_2, H_2^+ and H^- excited states, such as the Werner and Lyman bands, is not considered, as its contribution should be negligible.<cit.>11 Modelled plasma total power losses: The total power losses are obtained by adding: total radiated power losses, the net difference of potential energy from reactants to products, and the energy loss attributed to the heat carried away by the neutral produced by recombination. These terms and their components are part of the outputs of the Bayesian analysis.23 Compare power: The modelled net power losses are compared to the power input previously inferred. The net power losses have to be between 0 and the power input. The probability that this is the case is calculated as shown in <ref> and is one component of the prior density distribution.15 Particle losses model: The particle sink and source terms are modelled as per <ref>. We neglect cross field transport when analysing the particle balance. Given the difficulty of accounting for cross field transport, only the particle balance of the charged particles (e^-, H^+, H_2^+, H^-), which are bound by the magnetic field, is performed.16 Modelled plasma total particle losses: The net particle sink is calculated for e^-, H^+, H_2^+ and H^-.
These terms and their components are part of the outputs of the Bayesian analysis.17 Compare particle: The modelled net particle losses are compared to the particle input previously inferred. Bounds for physical values of the net particle losses are obtained from the particle input previously inferred, as detailed in <ref>. The probability that the particle sinks are within the bounds is calculated and their log-probabilities summed.13 Σ n_H(p): Within the power losses model the total density of excited states is calculated. ADAS PEC coefficients are used for atomic processes while Yacora coefficients are used for molecular ones, summing the density over all excited states.14 Compare Σ n_H(p)≤ n_H: It is checked that the total density of excited states is lower than the density of atomic hydrogen. At the temperatures of interest in this work the density of excited states is always a small fraction compared to the ground state, so a probability of 1 is assigned if the condition is respected and 0 otherwise. 24 Compare n_e, T_e: The measured T_e and n_e and their uncertainties are used to assign the relative probability on the n_e and T_e axes of the parameter space, uniform on the others.25 Compare n_H/n_e, n_H_2/n_e: The previously mentioned scalings from B2.5-Eunomia simulations are used to assign weakly varying probabilities on the n_H/n_e, n_H_2/n_e axes of the parameter space, being uniform on the others, as indicated in <ref>.§.§.§ Bayesian analysisThe expected value of the hydrogen line emissivity is then compared with the measured one, and the resulting probability is combined with the prior to return the likelihood, also referred to as the sum of the log-probabilities. The grid is refined twice around the regions of the grid with high likelihood to improve the resolution of the probability distribution. While keeping a grid structure requires a large amount of memory, the numerical procedure remains relatively simple. This procedure is shown in <ref> and the meaning of the blocks here introduced is as follows: 29 Balmer lines ϵ(r,t)(p=4-8→ 2): The Balmer line emission for transitions (p=4-8→ 2) is measured with OES (see <ref>). The camera has a rolling shutter, so it is necessary to decouple the line of sight from the time information, as detailed in Appendix A of <cit.> and <ref>.30 Abel inversion (brightness to emissivity): Once the brightness per time step and line of sight is obtained, an Abel inversion is performed (see <ref>). The process converts the line integrated information to the local emissivity, assuming poloidal symmetry and an optically thin plasma.28 Modelled line emission: As part of the power losses calculation, the total line emission from atomic and molecular processes for the lines of interest (p=4-8 → 2) is found (see <ref>).31 Compare emissivity: The modelled line emissivity is compared to the measured one to return the probability that each combination of priors generated the measured emission (see <ref>).32,33 Likelihood: The likelihood for every point of the parameter space is found by adding the log-probability from the prior and that from the comparison of the calculated emissivity with the measured one.34 Grid refinement: To increase the resolution in the region of the parameter space with high likelihood, the grid is refined by adding intermediate steps to the grid elements previously defined for every axis around the marginalised likelihood peak. The grid structure is maintained but it becomes non-uniform.
The prior probability distribution and the comparison with the measured emissivity are repeated to recalculate a new likelihood distribution. This loop is repeated twice to limit the computer memory requirements.35,36 Quantity of interest PDF: The quantities of interest for the plasma column power and particle balance have already been calculated over the whole parameter space within <ref>. In order to reduce the size of the outputs, the full range is reduced to a smaller number of intervals and the likelihood is summed within each interval. Once the PDFs for a quantity of interest in all radial and temporal locations are determined, they are then convolved over radial steps, to obtain the total over the plasma column, and then in time over the ELM-like period to obtain the global quantities (see details in <ref>).
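The convolution step exploits the fact that the PDF of a sum of independent quantities is the convolution of the individual PDFs. A minimal sketch is given below; the value grid, spacing and the Gaussian test PDFs are placeholders, not the actual analysis outputs.

import numpy as np

def convolve_pdfs(pdfs, dx):
    # PDF of the sum of independent per-annulus (or per-time-step) quantities:
    # the discrete convolution of the individual PDFs on a common value grid.
    total = np.asarray(pdfs[0], dtype=float)
    for p in pdfs[1:]:
        total = np.convolve(total, p) * dx   # support of the sum grows each step
        total /= total.sum() * dx            # renormalise against numerical drift
    return total

# Example: combine three per-annulus PDFs of radiated energy
dx = 0.01
x = np.arange(0.0, 1.0, dx)
annulus_pdfs = [np.exp(-0.5 * ((x - mu) / 0.05) ** 2) for mu in (0.2, 0.3, 0.1)]
annulus_pdfs = [p / (p.sum() * dx) for p in annulus_pdfs]
column_pdf = convolve_pdfs(annulus_pdfs, dx)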
When a reaction occurs, energy is transferred between different species (e.g. plasma to molecules or atoms). If charged particles are created, they are bound by the magnetic field and are confined to the same radial section of the plasma column. When neutral hydrogen atoms and molecules are generated, they can escape the plasma column, carrying with them their kinetic energy. They can also interact with plasma at different radii on the way out and transfer the energy back to the plasma. Modelling the transfer of energy to the neutrals and back to the plasma would require an effort beyond the scope of this work, so for now this component is not considered in the power and particle balance and only hydrogenic radiation is considered as net power leaving the plasma column. For more details see <ref>. It is important to address whether ignoring the power removed from the plasma column (and possibly transferred back) due to charge exchange (CX) and elastic collisions between H_2 and H^+ (H_2 elastic) is acceptable and compatible with the Bayesian algorithm results. To estimate CX and H_2 elastic power losses we post-process our results as shown in <ref>. The same methodology was employed to reprocess B2.5-Eunomia results, and it returned a CX contribution between 1% and 150%, and H_2 elastic within 18% and 34%, of the correct (self consistent) values. In one instance, the simplified method produced a factor of 100 larger CX losses compared to the B2.5-Eunomia result. It is observed that in this case, with both high temperature and density, a large fraction of the CX losses are recovered by fast neutrals transferring their energy back to the plasma before escaping the plasma column. This behaviour can be captured by the Monte Carlo model but not by this simplified method. With these very large uncertainties, it is possible to say that CX is likely more important than H_2 elastic for ELM-like pulses, contrary to what is shown in <cit.>, but the relevance for the plasma column energy balance is uncertain and would require a more detailed analysis, out of the scope of this work.§.§ Analysis limitations due to restrictions on measured Balmer linesWhile it is in general desirable to measure as many transition lines as possible, it became apparent during the setup of the experiment that the Hα line (p=3→ 2) could not be measured simultaneously with lines p≥ 4→ 2 due to the grating used. We describe herein how the lack of the Hα line prevents the separation of the line emission contributions from the H_2^+ and H^- precursors. As a reference, <ref> shows the density ratio of excited states produced by different processes with respect to p=4 for typical T_e,n_e values. Restricting Balmer lines to p ≥ 4 → 2, recombination, H_2^+ mutual neutralisation and EIE have distinctively different profiles, while H_2^+ dissociation, H_2 dissociation and H^+ mutual neutralisation are fairly similar. H_2 dissociation is usually not problematic because the H_2 density required to match the measured emissivity is excessively high. Conversely, H_2^+ dissociation and H^+ mutual neutralisation can be caused by relatively low molecular precursor densities, making it difficult to distinguish which caused the emission. This could have been alleviated by including p=3 → 2 in the analysis<cit.> but this was not possible with the available gratings. <ref> illustrates that with the present setup, using only the line ratios available is insufficient to distinguish which molecular precursor, H_2^+ or H^-, is dominant. This can be alleviated by combining matching line intensities with other conditions as described in <ref>, as it can help rule out regimes that match the emission profile well but would be unphysical. The results of this study and the relevance of molecular interactions during the ELM-like pulse will now be shown. §.§ Bayesian analysis resultsIn this section the results from the Bayesian analysis will be presented. Only results regarding the strong pulse cases are shown, since for weak pulses the temperature and density fall below the detection capabilities of the TS system for a significant fraction of the duration.§.§.§ Power balanceThe results at each time and radial location, as explained in <ref>, consist of a collection of PDFs for the quantities of interest in the volume from target skimmer to target, like the power radiated via EIE. The temporal and spatial distribution of the most likely values can already be useful, as it can inform on the conditions in which some phenomena are important. Our inference shows that ionisation and excitation tend to be more important close to the axis of the plasma column, where and when temperature is highest, but are not peaked on the axis, as the atomic hydrogen density profile is hollow. Radiation from EIR is strong for high density and intermediate temperature. Radiation from excited H atoms (H^*) generated from plasma molecular reactions (PMR) is present at similar times and locations as EIR but is significant up to even larger radii (and lower temperatures) than EIR. This is shown for ID 10 in <ref> in <ref>. The PDFs are convolved over the radial direction to integrate the parameter of interest over the radii. An example of the radial convolution of the power radiated from EIE at 0.4ms for ID 5 in <ref> is provided in <ref>. It can be noted that some PDFs, for example at r=11mm, show multiple peaks. This indicates that multiple possible combinations of the precursor densities with similar likelihood have been found. The evolution over time of the terms of the local power balance, as per <ref>, is shown in <ref> for the lowest and highest pressure cases. In <ref> it is shown that the energy of the ELM-like pulse is almost entirely used for ionisation, dissociation and molecular processes (MAD, MAI), increasing the plasma potential energy.
However, these ions recombine at the target, meaning that a larger fraction of the pulse energy will be able to reach the target, and in fact for this condition the target receives the most heat. The net power sink from the plasma is slightly larger than the input energy after its peak. This can be understood as there is no mechanism to enforce a global power balance, since the analysis for every radial and temporal location is independent. The radiated energy is mostly due to EIE, as is to be expected given the high temperature. After 0.6ms the temperature decreases below 4eV and the radiative and potential energy losses to the plasma from MAI/MAD and dissociation increase. For higher neutral pressure (<ref>) the peak in power input coincides with the peak of power loss from molecular reactions instead of ionisation, primarily due to the lower temperature. Most importantly, after the peak in input power and temperature, there is a strong influence of EIR. Radiative losses and potential energy gains balance such that after 0.4ms the net power sink on the plasma is negligible.<cit.> The period of peak radiative losses now coincides with the peak in EIR. More energy is radiated than before, and recombination in the volume means that the plasma is depleted before reaching the target. The radiative losses come at the expense of the plasma's potential energy rather than its thermal energy. This causes a reduction of the heat flux just like in deep detachment, matching the results from thermographic observations.<cit.> EIR dominates the path to radiative losses but the influence of molecular reactions increases. These results are quite similar to recent studies of detachment in medium size tokamaks including molecular precursors.<cit.> The PDFs at each time are further convolved, multiplying by the time interval they refer to, returning the total for the quantity of interest over the entire duration of the ELM-like pulse. <ref> shows how the total energy radiated by the plasma changes with target chamber pressure. As the target chamber pressure is increased (Stage 1 (P<2Pa) to 2 (P>2Pa)) the dominant process changes from EIE to EIR and the impact of molecules increases, reaching up to ∼1/3 of the total radiated power. This is also reflected in the total radiated power in the visible wavelength range (not shown): in Stage 2 it is 3 times higher than in Stage 1, 0.3J and 0.1J respectively (the Lyman series is always responsible for most of the power radiated), largely due to the increase of EIR importance. This is not as dramatic an increase as observed from thermography (Figure 5 in <cit.>), implying that the extrapolated Hα emission, from our inferences, could be overestimated. This could imply an overestimation of molecular assisted reactions in Stage 1 (visible radiated power losses due to EIE are 0.005-0.01J for ID 5 and 6 in <ref> respectively, in better agreement with thermography). An overestimation of Hα can also arise from an overestimation of H^- or an underestimation of H_2^+, changing the Hβ/Hα ratio as shown in <cit.>. Among the radiation due to molecules, the most important precursor is H^-.
This contrasts with results from previous analyses, which show a dominance of the H_2^+ precursor<cit.>; however, as mentioned in <ref>, the radiation from H_2^+ dissociation and H^+ mutual neutralisation has similar line ratios for the lines used in this study, so the two cannot be distinguished with certainty. The variation of the terms in the global plasma column power balance with respect to target chamber pressure is presented in <ref>. The energy absorbed by the target was determined from thermographic analysis.<cit.> The estimation of CX and H_2 elastic collisions is as per <ref>. The two terms are already multiplied by the corrective factors found by comparing their simplified model with B2.5-Eunomia, identified in <ref> (CX and H_2 elastic collision losses from B2.5-Eunomia being 1-150% and 18-34% respectively compared to the simplified estimate). The radiative energy losses increase with target chamber neutral pressure, up to ∼1/3 of the input energy, while the energy to the target decreases. This transition corresponds to the transition between Stage 1 and 2 introduced when analysing the visible fast camera brightness. H_2 elastic losses are of the same order of magnitude as radiative and target losses, while CX has a very large range, from zero to one order of magnitude higher than P_in. These losses are here only estimated. Subtracting the radiative losses from the input energy returns much more than the measured target flux, allowing for other loss channels like CX, H_2 elastic collisions or the losses associated with plasma surface interactions, which are not inferred here. From this very crude balance it seems that losses due to radiation and energy transfer to the target can account for about 1/3 of the input energy and H_2 elastic collisions for another 1/3. From fast camera observations, for high pressure conditions the energy dissipated at the target could be less relevant, leaving more room for CX losses, while the opposite holds for low pressure.§.§.§ Particle balanceDuring the experiment there was no direct measurement of any part of the particle balance, so it is not as well constrained as the power balance. Nevertheless, thanks to the assumptions and approximations in <ref>, it is possible to perform a rough particle balance on the plasma column. Only the charged particles' particle balance is utilised, as only these are confined to a certain poloidal location by the magnetic field. Only H^+ and e^- are introduced by the plasma source, using the flow velocity estimated from the power input in <ref>. The volumetric particle sources and sinks are calculated together with the power ones, and the target particle losses are treated as an unknown in a similar way as in the power balance. To calculate the prior probability, the balance of H^+ and e^- is given a large uncertainty, as the plasma can potentially accumulate in the plasma column, while the balance of H_2^+ and H^- has a smaller uncertainty due to their short lifetime. More detail on the particle balance is given in <ref>. The procedure to convolve the data from time and spatially dependent PDFs to a global one for the whole pulse is the same as previously mentioned for the power. In order to better understand the influence of molecular reactions on the plasma, the individual reaction rates are divided into MAR, MAI and MAD. These are defined as the paths composed of a first reaction that converts a neutral species (H_2, H) into a molecular precursor (H_2^+, H^-), followed by a second one in which the precursor is consumed. The final result of the two reactions combined can be either the recombination of the plasma that participated in the reactions (MAR), the ionisation of the original neutral (MAI), or the dissociation of H_2 (MAD); representative chains are sketched below. Here, paths that cause ionisation or recombination together with dissociation are not counted in MAD to avoid double counting. All of the possible combinations of reactions that achieve the mentioned outcomes via a molecular precursor are added together to form the MAR, MAI and MAD rates.<cit.>
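For concreteness, the following two-step chains illustrate the classification; these particular chains are given here for illustration only, following the standard classification of plasma-molecule reaction paths (cf. the reactions in <ref>), and do not exhaust the combinations included in the rates:
MAR: H_2 + H^+ → H_2^+ + H followed by H_2^+ + e^- → H + H^*, or H_2 + e^- → H^- + H followed by H^- + H^+ → H + H^* (net: one e^-–H^+ pair recombined);
MAI: H_2 + e^- → H_2^+ + 2e^- followed by H_2^+ + e^- → H + H^+ + e^- (net: H_2 + e^- → H + H^+ + 2e^-, one extra ion-electron pair);
MAD: H_2 + e^- → H_2^+ + 2e^- followed by H_2^+ + e^- → H + H^* (net: H_2 → 2H, with no net change in the plasma particle count).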
The particle balance on the plasma is shown in <ref>. It shows the plasma particle input from the source, the ionisation source and the recombination sink. MAI increases at lower pressure but is not very important, representing only a small fraction of the total ionisation. With the considered reactions, H_2^+ is the only precursor that can cause MAI. MAR increases with pressure, but it dominates the recombination at low pressure (high temperature), and amounts to ∼1/3 of the total plasma particle sink at high pressure (low temperature). It is mainly due to the H^- precursor. An important observation is that for low pressure conditions the ionisation source is higher than the estimated particle input from the source. The main difference between linear machines and tokamak divertors is that in a divertor, especially in high recycling, most of the particle source for the plasma comes from the recycling neutrals, while the upstream plasma acts as the energy source. Conversely, in linear machines the external source usually dominates. For low pressure conditions this does not happen here. If the neutrals predominantly come from plasma recycling at the target surface, this might be evidence of recycling in a similar fashion as in a tokamak. <ref> shows a decrease in the net balance of protons and electrons in the plasma column (net H^+ and net e^-) for increasing neutral pressure, due to a transition from an ionising to a recombining plasma. This is in agreement with both the analysis of power losses as per <ref> and the target heating analysis. The potential energy exchanges in the plasma, like ionisation, dissociation, MAI and MAD, are essential to lower the plasma temperature enough for MAR and EIR to become important and reduce the particle flux to the target. MAD is more efficient than electron impact dissociation (EID) in converting H_2 to H, and MAR can affect the plasma at higher temperatures than EIR; therefore the inclusion of the chemistry from H_2^+ and H^- is important in describing the ELM burn through processes. The particle balance for atomic hydrogen is shown in <ref>. It must be taken into account that this balance is not directly constrained and that transport is not considered, so the balance is very tentative. Hydrogen from molecular reactions, and MAD within it, represents the largest contribution to the source. Considering also that MAI is less than 10% of the MAD reaction rate, this means that most of the power losses due to molecules in the local power balance (<ref>) are due to MAD. Molecular reactions are responsible (integrating the local power balance) for about half of the potential energy exchange and up to a third of the radiative power losses. Molecular reactions seem not to have a strong impact on the plasma particle balance, while dominating the neutrals' particle balance.
These observations are in agreement with previous findings in tokamaks such as TCV <cit.> and JET <cit.>, and with TCV SOLPS-ITER modelling with reaction rates modified to properly assess the molecular contribution <cit.>. It must be noted that the impact of molecular reactions on the particle balance, radiated power, and potential energy is heavily dependent on the rates used. For this work the Yacora rates have been used to determine the radiative power losses due to molecular precursors. To determine the reaction rates and the potential energy losses, other rates from AMJUEL and Janev have been used, with no guarantee of consistency with Yacora (in terms of approximations used, consideration of different ro-vibrational states for H_2 and H_2^+, and other parameters used in the collisional radiative codes). For typical plasma conditions in this study (T_e=2eV, n_e=10^21#/m^3), the average radiative power losses per molecular reaction are:

* H_2^+ + H^- → H(p) + H_2 : 0.17eV
* H^+ + H^- → H(p) + H : 8.2eV
* H_2^+ + e^- → H(p) + H(1) and H_2^+ + e^- → H(p) + H^+ + e^- : 0.72eV
* H_2 + e^- → H(p) + H(1) + e^- : 0.026eV

The molecular rates are less established than the atomic ones, so these numbers can change and significantly affect the power balance. In low pressure conditions volume recombination is negligible and the vast majority of atomic hydrogen is produced by volumetric dissociation of H_2. Here the total volumetric source is higher than the total number of hydrogen atoms present in the target chamber before the ELM-like pulse and than the neutrals that can be produced by surface recombination at the target (see <ref>). This points to the source being overestimated, which can be expected given that the H_2 and H particle balances are not constrained. Both the hydrogen in the target chamber before the pulse and the possible neutrals generated at the target are larger than the volumetric ionisation sink. It is therefore unclear whether what is observed at low pressure is evidence of recycling or just dissociation and ionisation of a significant fraction of the gas in the target chamber. A metric which is potentially significant for tokamaks is the average energy loss per interaction with the neutrals. The plasma enters the target chamber at about 10eV, so Magnum-PSI can be thought of as comparable to the section of a tokamak SOL from the region where the peak temperature is ∼10eV down to the target. Above this temperature molecular effects become much less relevant, so the interactions between plasma and neutrals are easier to model. In Magnum-PSI, especially in Stage 2 where EIE is low, the importance of recycling appears to be very limited and the vast majority of the atomic hydrogen is produced via dissociation of H_2, which can therefore be considered the main neutral precursor causing the cooling of the plasma. The quantity of neutral gas that has a chance to interact with the plasma can be roughly estimated from the neutral pressure, assuming sonic inflow at room temperature over a cylinder of a conventional 2cm radius. In Stage 2 the H_2 that reacts with the plasma is about a fifth of what enters from the sides. This is likely a product of the geometry of the plasma and of elastic collisions heating the neutrals and causing dilution.<cit.> The average power losses via radiation due to plasma interaction with H_2 (then H) neutrals (EIE and MAD/MAR/MAI), calculated as (P_rad - P_rad recomb)/(2ṅ_H_2,sinks), correspond to 0.9-1.7eV per interaction.
Considering the overestimation of the MAD reaction rate mentioned before, this metric is likely slightly higher. Including the potential energy losses due to ionisation and molecular reactions, the energy losses per nucleon are ∼4.5eV in Stage 1 and ∼6eV in Stage 2. Of these, the fraction due to ionisation and MAI, which converts plasma thermal energy to potential energy and therefore requires recombination or MAR in order not to affect the target, is 1eV and 2.5eV respectively. For comparison, in Stage 2 the estimated energy losses due to CX and H_2 elastic collisions are 0.04-12eV and 0.15-0.5eV per nucleon entering the reference plasma volume, respectively. Another useful metric is the energy cost per ionisation<cit.>: as the temperature decreases and the plasma becomes more detached, more excitation happens before ionisation occurs. In our case the radiation due to EIE increases from ∼5eV in Stage 1 to ∼7eV in Stage 2, while it was ∼22eV in TCV at particle flux roll-over.<cit.> The radiation due to molecular reactions per ionisation increases from ∼2eV in Stage 1 to ∼15eV in Stage 2, showing the relevance of molecular reactions and bringing the total closer to the TCV estimations. Considering then that not all the hydrogen produced via dissociation is ionised, the total cost, including the potential energy losses for molecular reactions and ionisation itself, increases from ∼35eV to ∼60eV. It must be considered that while the radiative losses are in part constrained by matching the line emission, an absolute measure, the potential ones are only inferred, and could change significantly if elastic collisions and CX were accounted for properly. Recycling, potentially significant in Stage 1, can cause an increase of the radiative losses via EIE at the target but also a local increase in plasma density, therefore only moderately reducing the heat flux to the target. The results from the particle balances are tentative and obtained with strong approximations, so further investigations will be necessary to better understand the respective roles of the various sources and sinks.

§ SUMMARY

This study explores the impact of ELM-like pulses on a detached target in Magnum-PSI, employing a range of diagnostics. In the first paper associated with this study<cit.>, it was observed that as the level of detachment increased, the energy of the ELM-like pulse was increasingly dissipated, eventually preventing it from reaching the target. This is attributed to increased volumetric power losses. The plasma's behaviour was classified into distinct stages. In Stage 1, the plasma remains attached to the target both before and during the ELM. By increasing the neutral pressure, the plasma is detached from the target prior to the ELM-like pulse but reattaches during the pulse, referred to as Stage 2. Detachment during the steady state is correlated with a significant rise in visible hydrogenic line emission, consistent with what is inferred from simulations.<cit.> Further increasing the neutral pressure results in the effective dissipation of the ELM-like pulse energy within the volume, and the plasma fails to reattach, marking Stage 3. The power transferred to the target was independently calculated via thermography and demonstrated to decrease in accordance with the aforementioned division into stages. In the most extreme cases of Stage 3, target heating becomes undetectable, signifying the complete dissipation of the ELM-like pulse energy within the volume.
This could also be exploited in tokamaks by increasing the distance between the ionisation front and the target and increasing neutral compression, thus creating a cold gas buffer capable of dissipating a substantial portion of the ELM energy. The optical emission spectrometer was upgraded to acquire temporally and poloidally resolved brightness in the visible range. Dedicated routines were developed to separate the spatial and temporal dependence in the raw data given by the rolling shutter, and an Abel inversion was performed on the brightness to give the line emissivity. Hydrogen Balmer line emission from OES (p=4-8 → 2) was used to develop a Bayesian routine that incorporates the results from multiple diagnostics to return the power balance in the plasma column in the target chamber. Various approximations to extrapolate the results from a single location in the target chamber to its entirety were adopted. The routine incorporates priors from numerical simulations (B2.5-Eunomia) and collisional radiative codes (Yacora, ADAS). The goal is to understand what drives the previously identified volumetric losses and what the role of atomic and molecular assisted reactions is. It was found that radiation from the plasma due to molecular assisted reactions is an important, although not dominant, energy loss mechanism. Radiation from excited hydrogen atoms created by plasma-molecule interactions is found to be about the same for Stages 1 and 2, but this is likely due to an overestimation in Stage 1. Mutual neutralisation of H^- seems to dominate the radiative losses, but it was established that this could not be determined from OES alone with the present setup. Molecular assisted reactions significantly affect the local plasma power balance by converting kinetic to potential energy, limiting what is available for ionisation and recycling. This is somewhat analogous to tokamaks, where power limitation of the ionisation source can induce detachment<cit.>. Molecular reactions predominantly lead to MAD, as it is a more efficient path to H_2 dissociation than EID. These losses could be overestimated here due to the unconstrained particle balance for H_2 and H. Molecular processes are mostly dominant in an intermediate temperature range around 3eV, between the ionisation and EIR dominated regions. The radiated power losses can be responsible for significant plasma power losses, but exchanges of potential energy like ionisation, EID, MAI and MAD, and collisional processes like CX and H_2-H^+ elastic collisions, are expected to dominate the plasma power balance. Ionisation is more important at high temperature, >5eV, whereas at lower temperatures MAD/MAR increase significantly. When these processes cause the temperature of the plasma to drop below ∼1.5eV, EIR occurs, significantly reducing the plasma flow to the target and the corresponding heat flux. The significance of the radiated power losses could differ if impurities such as carbon or nitrogen are present. These results indicate that for highly detached regimes in linear machines molecular interactions are important and need to be accounted for, something that is not yet fully done in many codes used for tokamak edge plasma studies. The authors would like to thank: D. Wünderlich for his help in obtaining the Yacora coefficients;
H. J. van der Meiden and J. Scholten for providing the Thomson scattering and ADC data. This work is supported by US Department of Energy awards DE-AC05-00OR22725 and DE-SC0014264 and under the auspices of the Engineering and Physical Sciences Research Council [EP/L01663X/1 and EP/W006839/1]. To obtain further information on the data and models underlying this paper please contact [email protected]. Funding for M. L. Reinke's contributions was in part provided by Commonwealth Fusion Systems. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200-EUROfusion). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was supported in part by the DIFFER institute.

§ REFERENCES

§ OES DATA INTERPRETATION

In this section it is detailed how the OES measurements are processed to obtain the local, radially and temporally resolved emissivity used to estimate the relevance of molecular processes. The camera selected for collecting time resolved OES data was a Photometrics Prime95B 25mm RM16C, because of its relatively high signal to noise ratio in low light conditions and the large size of its sensor. <ref> shows the typical picture collected during an experiment when an ELM-like pulse is fired on top of a steady state plasma. The rows are read sequentially from the bottom, with a time shift equal to the integration time, minimum 20μs. This means that for the particular example shown the rows indicated as Before represent times before the effect of the ELM-like pulse propagated to the OES location. During represents the pulse, and After is for times after the ELM-like pulse, characterised by homogeneous line emission from the hot gas filling the target chamber. From <ref> it is also possible to distinguish part of the 40 lines of sight that are available. The first effect that is compensated is the sensitivity of the camera at low signal levels. The correlation between pixel counts and light intensity is mostly linear, but deviates significantly below 6 counts, and negative counts are returned at very low signal. A routine was developed to compensate for this, thanks to dedicated measurements of the correlation between light intensity and counts. To decouple spatial and temporal information, a scan is performed such that the ELM-like pulse is shifted in time with respect to the start of the camera image record and the TS measurement. More details on the sampling strategy are given in Appendix A of <cit.>. The presence of ELM-like pulses affected by capacitor bank misfires is found by analysing the plasma source power, and the data corresponding to those pulses is excluded. To separate the time and row dependency, for every row, column, and time of interest, the data in a range of 100μs and 8 rows is fitted with a polynomial of second degree in time and first degree in row. To avoid over-smoothing the image, a smaller weight is assigned for increasing time and row difference from the point being examined. The time/row decoupled image is shown in <ref>. The output time step has a 50μs resolution to match the TS data. The counts are summed among the rows composing each LOS, and the line intensity is calculated by integrating above the background level. Brightness is then converted to emissivity via Abel inversion. The line emission is assumed to be poloidally symmetric and the plasma to be optically thin. In order to avoid unrealistic discontinuities given by noise, the superimposition of three Gaussians is fitted to the brightness profile, as done by Barrois.<cit.> Each Gaussian can then be analytically Abel inverted and the results summed to obtain the total emissivity; a minimal sketch of this inversion is given below. In this process the uncertainties are propagated to be used in subsequent steps of the analysis. An example of the inversion process is shown in <ref>. Given the signal to noise ratio and the available lines, it was decided to use the Balmer lines p=4-8 → 2.
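The analytic step relies on the fact that a Gaussian emissivity profile a·exp(-r^2/(2s^2)) Abel-transforms into the Gaussian brightness a·s·√(2π)·exp(-y^2/(2s^2)), so each fitted component can be inverted in closed form. A minimal sketch of the fit-and-invert step, assuming centred Gaussians and illustrative initial guesses (not the authors' code):

import numpy as np
from scipy.optimize import curve_fit

def brightness_model(y, a1, s1, a2, s2, a3, s3):
    """Sum of three centred Gaussians in the chord coordinate y."""
    return (a1 * np.exp(-y**2 / (2 * s1**2))
            + a2 * np.exp(-y**2 / (2 * s2**2))
            + a3 * np.exp(-y**2 / (2 * s3**2)))

def invert(y, brightness, sigma=None):
    """Fit the brightness and return the analytically Abel-inverted emissivity."""
    p0 = [brightness.max(), 2e-3, 0.5 * brightness.max(), 5e-3,
          0.1 * brightness.max(), 1e-2]                     # rough initial guess
    popt, pcov = curve_fit(brightness_model, y, brightness, p0=p0,
                           sigma=sigma, absolute_sigma=sigma is not None)
    def emissivity(r):
        out = 0.0
        for a, s in zip(popt[0::2], popt[1::2]):
            # inverse of the forward Abel transform of a centred Gaussian
            out = out + a / (abs(s) * np.sqrt(2 * np.pi)) * np.exp(-r**2 / (2 * s**2))
        return out
    return emissivity, popt, pcov   # pcov allows propagating the uncertainties

# Synthetic test: brightness generated from a known two-scale profile plus noise.
y = np.linspace(-0.02, 0.02, 41)                            # chord positions [m]
true = brightness_model(y, 1.0, 3e-3, 0.3, 8e-3, 0.0, 1e-2)
emis, popt, _ = invert(y, true + 0.01 * np.random.randn(y.size))
print(emis(np.array([0.0, 5e-3, 1e-2])))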
§ DETAILS ON THE BAYESIAN CALCULATIONS

This section provides a detailed explanation of how the expected properties of the plasma are calculated, compared with the measurements, and used in the particle and power balance, given a set of priors.

§.§ Priors from B2.5-Eunomia

To define the initial parameter space and the prior, it is necessary to define the range and probability associated with all the axes of the parameter space. For T_e and n_e, the TS values are used and the range is defined as 6 times the uncertainty. The probability is calculated as a linear normal distribution with the uncertainty corresponding to 1 sigma. The range and probabilities for n_H_2/n_e are obtained from B2.5-Eunomia simulations for a steady state neutral pressure scan with 2 plasma source settings, ranging from attached to detached via increasing target chamber neutral pressure, carried out by Chandra.<cit.> The simulations consider the whole plasma column from source to target, but only data inside the target chamber and within 2cm of the axis are considered here (marked with x, as opposed to other regions marked by a point in <ref>, <ref>, <ref>, <ref>, <ref>, <ref>). As demonstrated by Den Harder<cit.>, the density decrease of H_2 in the plasma is mainly driven by rarefaction due to the high temperature of the plasma itself. For this reason a quite strong correlation exists between the molecular hydrogen to plasma density ratio and the plasma temperature, shown in <ref>. The most likely ratio is obtained by fitting each simulation's results with a linear log-log function and then averaging the fit parameters, as shown in <ref>. The probability is defined as a normal distribution where 2 sigma corresponds to the black dashed lines, which are 100 and 1/100 times the fit value; a sketch of this construction is given at the end of this subsection. The range is a significantly larger window around the dashed lines, to account for the large uncertainty coming from the fact that B2.5-Eunomia only simulates steady state conditions, while the ELM-like burn-through is a very dynamic one. The simulations are also used to provide the range and probability for n_H/n_e. Atomic hydrogen is generated from recombination and from H_2 interaction with the plasma and various molecules, so its density is only weakly correlated with the plasma temperature and density, as shown in <ref>. The probability was calculated with a linear normal distribution with the nominal value from the fit (calculated in the same fashion as for n_H_2/n_e) and 2 sigma arbitrarily assigned as per the dashed line in <ref>. Other quantities that are part of the plasma state and had to be determined to calculate reaction rates and other coefficients are the temperatures of all species. To reduce the number of variables in the Bayesian algorithm their uncertainty is not considered in this work and only the nominal values are used. The correlations are shown in <ref>, <ref>, <ref> for the H, H_2 and H^+ temperature respectively, where the black dashed lines indicate the values used. For T_H and T_H_2 the fit is obtained in the same fashion as for n_H/n_e, while for T_H^+ it is assumed T_H^+=T_e=T_plasma. For T_H_2 in particular a weak dependence on the plasma density is present, whose estimate is shown in <ref>; it can be due to an increase of the collisionality at higher density and a better coupling with the neutrals, resulting in a correction factor to apply to the dependency on T_e alone. Given that H_2^+ mostly originates from H_2, it is assumed T_H_2^+=T_H_2. This is valid for H^- too, but because it can carry some of the H_2 binding energy (2.2eV per atom), 2.2eV are added to T_H_2 to estimate T_H^-.<cit.>
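As an illustration, a minimal sketch of how such a prior can be built, assuming a linear fit in log-log space and a normal distribution in log10 whose 2-sigma band spans a factor of 100 around the fit (the simulation points below are placeholders, not B2.5-Eunomia output):

import numpy as np

def fit_loglog(Te, ratio):
    """Linear fit log10(ratio) = m*log10(Te) + q over simulation points."""
    m, q = np.polyfit(np.log10(Te), np.log10(ratio), 1)
    return m, q

def log_prior(Te, ratio, fit, decades_2sigma=2.0):
    """Log-probability of a candidate ratio at temperature Te.

    decades_2sigma=2 means the 2-sigma band is 10^2 = 100x the fit value."""
    m, q = fit
    mu = m * np.log10(Te) + q          # most likely log10(ratio) at this Te
    sigma = decades_2sigma / 2.0       # 1-sigma width in decades
    x = np.log10(ratio)
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Illustrative simulation points (fake rarefaction trend, arbitrary units):
Te_sim = np.array([0.5, 1.0, 2.0, 5.0, 10.0])      # eV
ratio_sim = 0.5 * Te_sim ** -1.5
fit = fit_loglog(Te_sim, ratio_sim)
print(fit, log_prior(2.0, 0.1, fit))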
§.§ Priors from AMJUEL

Ionised hydrogen molecules are generated mainly from H_2, so their density prior is calculated with AMJUEL<cit.>, a library that, among others, contains the n_H^-/n_H_2 and n_H_2^+/n_H_2 density ratios in an equilibrium plasma for a given plasma temperature and density (Sections 12.58, 12.59, 11.11, 11.12). The conditions of an ELM-like pulse can potentially deviate significantly from equilibrium, so a wide range around equilibrium is considered as the prior, with a linear uniform distribution as its probability.

§.§ Priors range optimization

To restrict the n_H/n_e range to only useful values, a combination of the information from <ref> and the OES measurement is used. For each T_e/n_e combination, the emission from EIR via the ADAS PEC coefficients is calculated and subtracted from the OES measurement. Then the n_H/n_e required to recreate each residual line emission via EIE, plus its uncertainty, is calculated. The largest n_H/n_e, limited by a predefined multiplier times the value from the B2.5-Eunomia fit, will be the highest value considered for that particular T_e/n_e combination. The lower limit is taken as a small fraction of the maximum value, again limited by a predefined multiplier times the value from the B2.5-Eunomia fit. This way, parts of the range of n_H/n_e that would cause an excessive line emission are automatically excluded and the prior range is assigned efficiently. The same process is applied to the n_H_2/n_e prior. For each T_e/n_e/n_H/n_e combination, the total emission from EIR and EIE is subtracted from the OES measurement, and the n_H_2/n_e required to match the residual is calculated using the Yacora coefficients for the H_2 dissociation reaction. For the n_H_2^+/n_H_2 prior, the emission from EIR, EIE and H_2 dissociation is considered. Consequently, for the n_H^-/n_H_2 prior the emission from H_2^+ is also considered.

§.§ Emissivity

The emissivity is calculated for known precursor densities via the ADAS PECs and the Yacora population coefficients.<cit.> The photon emission coefficients (PEC, photons m^3/s) are defined as the number of photons generated per second per unit of the precursor densities. The number of photons for the transition p → q is equal to the product of the density of the excited state p and the Einstein coefficient A_pq. Therefore, the emission generated by atomic excitation and recombination can be expressed as per <ref> and <ref>, where what is intended as the population coefficient (PC_i) is also highlighted.
ϵ^exc_pq = PEC^exc_pq(T_e,n_e) n_e n_H = A_pq PC_exc n_e n_H ,  PC_exc = n_H(p)/(n_e n_H)

ϵ^rec_pq = PEC^rec_pq(T_e,n_e) n_e n_H^+ = A_pq PC_rec n_e n_H^+ ,  PC_rec = n_H(p)/(n_e n_H^+)

The line emission due to molecular reactions is similarly calculated using the Yacora population coefficients as per <ref>, <ref>, <ref> and <ref>. The reactions considered in building each coefficient, and the variables necessary to calculate it, are also shown. As mentioned, T_H and T_H_2 are determined from the B2.5-Eunomia simulations, while T_H_2 ≈ T_H_2^+ ≈ T_H^- - 2.2eV.

ϵ^H_2^+_pq = A_pq PC_H_2^+(T_e,n_e) n_e n_H_2^+   reactions: H_2^+ + e^- → H(p) + H^+ + e^- ;  H_2^+ + e^- → H(p) + H(1)

ϵ^H_2_pq = A_pq PC_H_2(T_e,n_e) n_e n_H_2   reaction: H_2 + e^- → H(p) + H(1) + e^-

ϵ^H^-+H_2^+_pq = A_pq PC_H^-+H_2^+(T_e,T_H_2^+,T_H^-,n_e) n_H_2^+ n_H^-   reaction: H^- + H_2^+ → H(p) + H_2

ϵ^H^-+H^+_pq = A_pq PC_H^-+H^+(T_e,T_H^+,T_H^-,n_e) n_H^+ n_H^-   reaction: H^- + H^+ → H(p) + H(1)

The total calculated emissivity and its uncertainty are determined as per <ref>, with σ_ADAS and σ_Yacora the uncertainties on the coefficients mentioned before:

ϵ^calc_pq = ϵ^exc_pq + ϵ^rec_pq + ϵ^H_2^+_pq + ϵ^H_2_pq + ϵ^H^-+H_2^+_pq + ϵ^H^-+H^+_pq

σ^calc_ϵ_pq = { σ_ADAS^2 (ϵ^exc_pq^2 + ϵ^rec_pq^2) + σ_Yacora^2 (ϵ^H_2^+_pq^2 + ϵ^H_2_pq^2 + ϵ^H^-+H_2^+_pq^2 + ϵ^H^-+H^+_pq^2) }^1/2

The measured line emissivity is compared with this expectation. For each precursor combination and line, the likelihood that y_pq=0 is calculated with <ref>:

y_pq = ϵ^calc_pq - ϵ^measure_pq ,  σ_y_pq = √( (σ^calc_ϵ_pq)^2 + (σ^measure_ϵ_pq)^2 )

L(y_pq = 0|Θ) = 1/(σ_y_pq √(2π)) exp[ -1/2 (y_pq/σ_y_pq)^2 ]

where Θ represents the specific combination of precursors that leads to the emission ϵ^calc_pq. Following Bayes' theorem, the posterior (the probability of the combination of precursors generating the measurements) is calculated as the likelihood of the measurement being generated by the precursors, times the probability associated with the precursors themselves, divided by the probability of the measured data. For the case in which only one emission line is included in the model, this is expressed in <ref>:

P(Θ|y_pq = 0) = L(y_pq = 0|Θ) P(Θ) / P(y_pq = 0)

where P(y_pq = 0) acts as a normalisation factor. The final product of all probabilities will be normalised, so this term can be neglected. P(Θ) is the product of the probabilities associated with every combination of precursors (see <ref>, <ref> and <ref>). The probability of fitting all the lines, P_ϵ, is then determined with <ref>:

P_ϵ = P(Θ) ∏_(p=4..8, q=2) P(Θ|y_pq = 0)

For these calculations σ_ADAS and σ_Yacora were assumed to be 10% and 20%, respectively. A minimal numerical sketch of this line-matching step follows.
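The sketch below (assumptions, not the authors' code) combines, for one candidate parameter combination Θ, the Gaussian likelihoods of all measured Balmer lines with the prior probability; the calculated emissivities and the relative coefficient uncertainty are stubbed with toy numbers:

import numpy as np

def line_likelihood(eps_calc, sig_calc, eps_meas, sig_meas):
    """Gaussian likelihood that calculated and measured emissivity agree."""
    y = eps_calc - eps_meas
    s = np.sqrt(sig_calc**2 + sig_meas**2)
    return np.exp(-0.5 * (y / s) ** 2) / (s * np.sqrt(2 * np.pi))

def posterior_weight(eps_calc, eps_meas, sig_meas, prior, sigma_rel=0.15):
    """Unnormalised P_eps = prior * product over the lines p=4..8 -> 2."""
    w = prior
    for ec, em, sm in zip(eps_calc, eps_meas, sig_meas):
        w *= line_likelihood(ec, sigma_rel * ec, em, sm)
    return w

# Toy numbers for five Balmer lines (p = 4..8 -> 2), arbitrary units:
eps_meas = np.array([5.0, 2.5, 1.2, 0.6, 0.3])
sig_meas = 0.1 * eps_meas
eps_calc = np.array([4.8, 2.6, 1.1, 0.65, 0.28])   # from one candidate Theta
print(posterior_weight(eps_calc, eps_meas, sig_meas, prior=1e-3))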
§.§ Balance over the plasma column

To avoid considering precursor densities that could match the line spectra well but would lead to unrealistic power or particle losses, a balance on the plasma column is performed. The definition of the plasma column also allows the extraction of global information on the ELM-like pulse from the local TS/OES measurements, to be compared with other global measurements like the power input from the power supply. A schematic of the plasma column model used is shown in <ref>. The fundamental assumptions are:

* Given that the neutral density in the source and heating chambers is low thanks to differential pumping, it is assumed that the plasma is transported undisturbed from the plasma source to the target skimmer. Here T_e and n_e are equal to what is measured by TS in the target chamber for the lowest neutral pressure setting, which corresponds to the lowest possible volumetric losses.

* The plasma enters the target chamber without any molecular precursor. This is justified by the fact that from the source to the target chamber skimmer the neutral pressure is at its lowest while the temperature is at its highest, and these conditions are the least favourable for reactions involving molecules.

* The neutral pressure is fixed throughout the ELM-like pulse at its steady state value.

* All plasma properties, such as temperatures, densities, reaction rates and radiated power, depend on the radial and temporal coordinates only and are spatially constant from target skimmer to target. This is justified by the fact that the fast camera shows that in Stages 1 and 2 the radiation is mostly constant from a short distance off the target. Given the OES measuring location, only the properties of the bulk of the plasma can be analysed[Measurements specific to the region close to the target have been attempted but failed, possibly due to reflections or obstructions by the target itself.]. That means that the power losses in the visible light brightness peak between the OES and the target, observed in Section III of <cit.>, cannot be accounted for, so the volumetric power losses from the analysis will likely be an underestimate. The extent of the non-uniform region close to the target, likely including the sheath and a region with strong plasma surface interactions, is typically <1cm, which is small compared to the 38cm from target to skimmer, so this underestimation should be minor. Increasing the neutral pressure from Stage 1 to 2 (the cases we are most interested in), the visible light brightness becomes stronger in the plasma column, making the anisotropy at the target even less relevant. The approximation also neglects anisotropy in the visible light brightness in the bulk at very high neutral pressure. This is dominant especially in Stage 3, so its importance should be minor for Stages 1 and 2.

* The plasma behaviour in the sheath and in the region with strong plasma surface interactions is neglected.

* The flow velocity of the plasma is constant from the source to the target.

* Cross field transport is negligible (mostly true for charged particles due to the high magnetic field, and additionally for molecular ions due to their short lifetime).

Given these assumptions, one can calculate the components of the power and particle balance of the whole plasma column. The OES/TS measurements from a single location can be applied to the whole column, and the contributions from atomic and molecular processes can be found. A quantity that will be used later is the flow velocity (v_in), which is the velocity of the plasma as it flows from the target skimmer to the target. It is mainly used here to subdivide the power from the plasma source (a global value) into what is provided to each radial location, and to estimate the local particle inflow. It is also used to estimate the kinetic energy of the plasma, but the relevance of this term is minor. There is no direct measurement of v_in yet; collective Thomson scattering measurements will be available in the future. v_in is therefore approximated by imposing, for the experimental condition with the lowest target chamber neutral pressure, that the power from the source matches the energy flow measured at the TS location. The flow is assumed to have a single Mach number for all radial locations.
Applying these conditions to <ref> translates into <ref>, which is then solved to find the Mach number:

∑_r ( 1/2 m_i v_in^2(r,t) + 5 k_B T_e,in(r,t) + E_ion + E_diss ) n_e,in(r,t) v_in(r,t) A = P_source(t)

v_in(r,t) = M_in(t) c_s,up(r,t) ,  c_s = √( (T_e + T_H^+(≈ T_e)) k_B / m_H )

In calculating P_source as the product of voltage and current, an efficiency of 92% is considered in the conversion from electric to plasma energy.<cit.> M_in is ≈1 during the ELM-like pulse. It must be noted that at the beginning and end of the pulse TS is incapable of accurately measuring across the whole plasma because of the low density, and the conversion from electric power to plasma power is lower, resulting in a calculated M_in>1. The effect of this overestimation, though, is to allow for larger energy and particle budgets, widening the possible parameter space, so it is acceptable. A numerical sketch of this matching is given below. The following subsections detail how the likelihoods associated with the power and particle balance are calculated.
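A sketch of this matching, assuming toy radial profiles and a root-finding solution for M_in (the profiles, input power and constants below are illustrative placeholders, not measured data):

import numpy as np
from scipy.optimize import brentq

e = 1.602e-19           # J/eV
m_H = 1.673e-27         # kg
E_ion = 13.6 * e        # ionisation energy of H
E_diss = 4.4 * e        # H2 dissociation energy (2.2 eV per atom)

def source_power(M, Te_eV, ne, area):
    """LHS of the matching condition, summed over radial locations."""
    cs = np.sqrt(2.0 * Te_eV * e / m_H)          # c_s with T_H+ ~ T_e assumed
    v = M * cs
    energy = 0.5 * m_H * v**2 + 5.0 * Te_eV * e + E_ion + E_diss
    return np.sum(energy * ne * v * area)

def solve_mach(P_source, Te_eV, ne, area):
    """Find M_in such that the plasma energy flow equals the source power."""
    f = lambda M: source_power(M, Te_eV, ne, area) - P_source
    return brentq(f, 1e-3, 10.0)

# Toy radial profiles over a 2 cm column split into annuli:
r_edges = np.linspace(0, 0.02, 11)
area = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)       # annular areas [m^2]
Te = 8.0 * np.exp(-np.linspace(0, 1, 10) ** 2)          # eV
ne = 5e20 * np.exp(-np.linspace(0, 1, 10) ** 2)         # m^-3
print(solve_mach(0.92 * 60e3, Te, ne, area))            # 92% of a 60 kW input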
§.§ Power balance

This subsection details how the power (energy) balance equation is obtained and how all the terms are defined. The 1D energy and particle balance equations are obtained from the 1D Fokker-Planck collisional kinetic equation (<ref>) as per the derivation by Stangeby<cit.> and are adapted using the mentioned approximations for the region from target skimmer to target. For every time step and radial location this results in <ref>:

∂f/∂t + v_z ∂f/∂z + (eE/m) ∂f/∂v_z = (∂f/∂t)_coll + S(x,v)

∂E/∂t + d/dz[ ( 1/2 m_i v^2 + 5kT_e + E_ion + E_diss ) n_e v ] = - P_ext source (=0) + P_volume sinks-sources

∂E/∂t + ( 1/2 m_i v_in^2 + 5kT_e,in + E_ion + E_diss ) n_e,in v_in = P_target + P_volume sinks-sources

P_diss max = ( 1/2 m_i v_in^2 + 5kT_e + E_ion + E_diss ) n_e V/Δt + ( 1/2 m_i v_in^2 + 5kT_e,in + E_ion + E_diss ) n_e,in v_in A ≥ P_volume sinks-sources

with v the flow velocity and E_ion and E_diss the ionisation and dissociation energies for hydrogen; the first term of P_diss max is denoted P_∂t and the second P_in. The inequality arises from not accounting for the power delivered to the target and from neglecting plasma interactions with neutrals such as elastic collisions and charge exchange. All quantities marked with the subscript _in refer to the input conditions; otherwise the conditions inside the plasma column are intended. P_∂t represents the power obtained by depleting all the energy associated with the plasma in the volume of interest (V) in a single time step (Δt); P_in is the power entering the volume of interest from the plasma source through the area A. P_diss max is the maximum power that can be depleted in a radial portion of the plasma column in a time step. As an additional constraint on the power balance, P_volume sinks-sources will be required to be positive, as otherwise it would mean that the plasma is externally heated on its way to the target. Let us now investigate the volumetric sinks-sources term. There are roughly three ways in which a hydrogen plasma can undergo power losses:

* Radiative losses. These mostly come from the relaxation of excited hydrogen atoms, which can arise from both plasma-atom and plasma-molecule interactions. The radiative losses associated with H_2 molecular band radiation are expected to be insignificant.<cit.>

* Power transfer from kinetic energy to potential energy. Several plasma species have a relative potential energy associated with them (for instance, H^+ has a potential energy of 13.6eV compared to atomic hydrogen). Converting a neutral into an ion thus "converts" 13.6eV of kinetic plasma energy into potential energy.

* Power transfer from CX and elastic collisions. CX as well as elastic collisions between the plasma and neutral atoms and molecules can lead to a transfer of power from the plasma to the neutrals (and vice versa). This includes collisions between particles of the same species but at different temperatures, like the cold protons generated from ionisation and the hot ones that are part of the plasma.

OES combined with collisional radiative models is used to estimate the magnitude of both paths <ref> and <ref>, which is what is employed in this work. For path <ref>, the Balmer line emission is measured, facilitating, through the Bayesian inference of the plasma properties, a full estimate of the hydrogenic line radiation from excited atoms arising from both plasma-atom and plasma-molecule interactions. For path <ref>, ionisation and recombination rates are estimated to account for the power transfer between potential and kinetic energy. In the recombination reaction a hot H^+ is converted into a neutral. That neutral has a kinetic energy equal to the temperature of the plasma that generated it, significantly higher than that of all other molecular and neutral species. For this reason, the energy removed from the plasma, assuming the neutrals from recombination escape, is accounted for in the local power balance. Additionally, a series of molecular reactions are considered (see <ref>) for which the difference in potential energy between reactants and products is calculated and accounted for. Note that paths <ref> and <ref> do not strictly represent power lost from the plasma column but can be power transfer mechanisms. Such transfer mechanisms often lead to an effective loss of kinetic energy by the plasma, but can also cause it to increase. In the power balance that concerns the limitation of the power transferred from the plasma at a single radial location, <ref>, pathway <ref> is considered: it is in fact impossible for the plasma to transfer more energy from kinetic to potential than is available. Differently, when the quantities of interest are the components of the global power balance, <ref>, only terms where the energy is removed from the plasma column entirely will be considered. Internal power transfer will not be considered, as it is energy that remains in the plasma. Pathway <ref> cannot be readily analysed experimentally but can be analysed in detail in simulations. Its importance is currently discussed in the literature, and it could be significant especially in low temperature conditions in tokamak divertors and linear machines.<cit.> Further code investigations on this and detailed comparisons against experiments are required, which is outside the scope of this work. To check that neglecting CX and H_2 elastic collisions, the processes with the largest impact<cit.>, does not harm the consistency of the solution, a crude estimation was done in post processing. This is done by first calculating the ADAS CCD reaction rate for CX and the AMJUEL 3.5 rate for H_2 elastic collisions. These are multiplied by the densities of the reactant species involved and by the maximum energy that can be transferred in a single collision. The energies of the reactants are taken equal to their temperatures from TS and <ref>.
This results in <ref>:

P_CX = 3/2 ( T_H^+(=T_e) - T_H ) RR_CX(T_e,n_e) n_H^+ n_H

P_H_2 elastic = 3/2 · 8/9 ( T_H^+ - T_H_2 ) RR_H_2 elastic(T_H^+,T_H_2) n_H^+ n_H_2

The PDFs of these quantities are calculated as one of the outputs of the Bayesian algorithm. The sinks/sources terms for <ref> are taken from different sources, to encompass the best knowledge available at the time of writing, see <ref>. Grouping them by type and precursor, the power balance sinks/sources term is then defined as per <ref>:

P_volume sinks-sources = P_radiated + P_neutral via recombination + P_potential energy

P_radiated = P_radiated atomic + P_radiated molecular

P_neutral via recombination = 3/2 ΔV T_e RR_rec

P_potential energy = ΔV ∑_i ΔE_i · RR_i

P_radiated atomic = P_excitation (ADAS PLT) + P_rec + bremsstrahlung (ADAS PRB)

P_radiated molecular = P_rad H_2^+ + P_rad H_2 + P_rad H^-+H_2^+ + P_rad H^-+H^+ + P_rad e^-+H → H^-+hv

P_rad,i = ΔV ∑_(p=2..13, q<p) ϵ^i_pq

where ΔV represents the volume corresponding to the radial location of the plasma considered, ΔE_i is the energy difference between the products and reactants of reaction i, and RR_i is its reaction rate. The upper limit p=13 comes from the fact that only atomic hydrogen excited states up to 13 are considered here. The probability that the inequality in <ref> is true and that P_volume sinks-sources is positive is calculated with <ref>:

y = P_diss max - P_s-s ,  σ_y = √( σ_P_s-s^2 + σ_P_diss max^2 )

L_P = L( y ∈ [0, P_diss max] ) = 1/2 [ erf( (P_diss max - y)/(√2 σ_y) ) - erf( -y/(√2 σ_y) ) ]

where P_volume sinks-sources is shortened to P_s-s. The power sinks/sources are calculated by adding all the radiative losses to the potential energy contribution. The latter is itself composed of positive and negative contributions that tend to cancel out. This causes the uncertainty of the sinks/sources term to greatly dominate over that of the input, making the effective use of this balance very difficult. To solve this issue, σ_y is taken equal to σ_P_diss max, which is assumed to be 50% of P_diss max (using the fixed nominal T_e, n_e values from TS).

§.§ Particle balance

The derivation of the particle balance equation from Stangeby<cit.> results in <ref>:

∂n_j/∂t + d/dz( n_j v ) = Sinks-Sources

(nv)_j, diss max = n_j V/Δt + n_j,in v_in ≥ (nv)_j, Sinks-Sources = ΔV ∑_i f_ij RR_i

(nv)_j, Sinks = ∑_i ṅ_i,sinks - ṅ_i,sources

where the first term of (nv)_j, diss max is denoted (nv)_j,∂t and the second (nv)_j,in, with f_ij the multiplicity and sign of species j in reaction i, and n_H_2,in = n_H,in = n_H_2^+,in = n_H^-,in = 0. The inequality comes from not including the particles lost due to surface processes happening at the target and to cross field transport. Charged particles are bound by the magnetic field, while neutrals can more easily move across it. For this reason the particle balance, which considers each radial location independently, is calculated only for the charged particles e^-, H^+, H_2^+ and H^-. In the case of H_2^+ and H^- the lifetime is very short, so even within a single time step it is not physical to allow for their accumulation.<cit.> For this reason, for these species (nv)_i, Sinks-Sources is limited to be ≥ -(nv)_i, diss max. For e^- and H^+ the net rate of production is limited by their density at the next time step. This term, referred to as (nv)_j,next, is defined similarly to (nv)_j,∂t in <ref>. The likelihood that the particle balance is verified is given by <ref> and <ref>:

y_j = (nv)_j, diss max - (nv)_j, Sinks-Sources ,  σ_y_j = √( σ_(nv)_j, diss max^2 + σ_(nv)_j, Sinks-Sources^2 )

L( y_e^- ∈ [0, (nv)_e^-, diss max + (nv)_e^-, next] ) = 1/2 [ erf( ((nv)_e^-, diss max + (nv)_e^-, next - y_e^-)/(√2 σ_y_e^-) ) - erf( -y_e^-/(√2 σ_y_e^-) ) ]

L( y_H^+ ∈ [0, (nv)_H^+, diss max + (nv)_H^+, next] ) = 1/2 [ erf( ((nv)_H^+, diss max + (nv)_H^+, next - y_H^+)/(√2 σ_y_H^+) ) - erf( -y_H^+/(√2 σ_y_H^+) ) ]

L( y_H_2^+ ∈ [0, 2(nv)_H_2^+, diss max] ) = 1/2 [ erf( (2(nv)_H_2^+, diss max - y_H_2^+)/(√2 σ_y_H_2^+) ) - erf( -y_H_2^+/(√2 σ_y_H_2^+) ) ]

L( y_H^- ∈ [0, 2(nv)_H^-, diss max] ) = 1/2 [ erf( (2(nv)_H^-, diss max - y_H^-)/(√2 σ_y_H^-) ) - erf( -y_H^-/(√2 σ_y_H^-) ) ]

L_nv = L( y_e^- ∈ [0, (nv)_e^-, diss max + (nv)_e^-, next] ) · L( y_H^+ ∈ [0, (nv)_H^+, diss max + (nv)_H^+, next] ) · L( y_H_2^+ ∈ [0, 2(nv)_H_2^+, diss max] ) · L( y_H^- ∈ [0, 2(nv)_H^-, diss max] )

Similarly to what was mentioned for the power balance, here too the sinks/sources term is composed of positive and negative factors, so rather than using it, a large uncertainty on (nv)_i, diss max of 100% for e^- and H^+ (using the fixed nominal n_e value from TS) and of 5% of the TS n_e for H_2^+ and H^- is assumed. The uncertainties adopted here are so large because, differently from the power and emissivity, there is no direct measurement of the particle input. As part of the particle balance one also has to impose that the density of excited states obtained with the ADAS and Yacora coefficients is lower than the density of total atomic hydrogen in the volume. This is calculated with <ref>:

y = n_H - ∑_q,i n_i H(q) ,  σ_y = √( ∑_q,i (σ_i n_i H(q))^2 )

L_H^exc = L( y ≥ 0 ) = 1/2 [ 1 - erf( -y/(√2 σ_y) ) ]

§.§ Plasma column power balance

The definition of the plasma column volume and of the power balance allows the evaluation of the global response of the detached target to the ELM-like pulse. The parameter of interest is, in this case, how much power is removed from the plasma column and how much of it is due to atomic versus molecular effects. In considering this, the potential energy exchange due to EIR, for example, was not included, because it represents a transfer mechanism and not a net loss, while the radiative component (ADAS PRB coefficient, returning the losses due to line radiation and bremsstrahlung) was. Bremsstrahlung radiation is present also in the wavelength range of the IR camera and could be related to the observed prompt emission, but this was not investigated. Similarly, all other exchanges of potential energy are not considered. The generated neutrals, while travelling out of the plasma, can react with the neighbouring plasma and the energy they carry can be reintroduced; because evaluating this would require a significant effort, this component is for now excluded. The terms considered for the global power removed from the plasma column are indicated in <ref>:

E_removed from plasma = E_radiated = ( E_exc + E_rad rec+bremsstrahlung )_atomic + ( E_rad H_2 + E_rad H + E_rad H_2^+ + E_rad H^- + E_rad e^-+H → H^-+hv )_molecular

Once all the likelihoods associated with each combination of priors are calculated, they are multiplied to return the total likelihood as per <ref>:

L = P_ϵ L_P L_nv L_H^exc

All the interval likelihoods above reduce to the probability that a normally distributed residual falls in a given interval; a minimal helper is sketched below.
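This helper (illustrative numbers, not the authors' code) evaluates P(lo ≤ Y ≤ hi) for Y ~ N(y, σ_y), which is exactly the form of L_P and of the L_nv factors:

import math

def interval_likelihood(y, sigma_y, lo, hi):
    """Probability that a normal variable with mean y and std sigma_y
    falls inside [lo, hi], written with the Gaussian error function."""
    a = (lo - y) / (math.sqrt(2.0) * sigma_y)
    b = (hi - y) / (math.sqrt(2.0) * sigma_y)
    return 0.5 * (math.erf(b) - math.erf(a))

# Example: power balance term, L_P = P(0 <= y <= P_diss_max)
P_diss_max = 1.0e5                 # W, illustrative
y, sigma_y = 4.0e4, 0.5 * P_diss_max
print(interval_likelihood(y, sigma_y, 0.0, P_diss_max))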
The PDFs of all the components of the power and particle balance previously calculated are built by portioning their range into a smaller number of logarithmic intervals and summing the probability within each. For each additive term of interest, the PDFs are then convolved in space and in time to obtain the PDF for the whole ELM-like pulse. For each radial and time location, and for each required output (for example the total radiated energy), a large list of the possible energy losses in that section of the plasma is defined. The list is randomly populated according to the PDF of that output at that location. The contributions from all radii are summed to generate a list of possible values of the total radiated energy at one time step. A histogram is built from it to represent the PDF of the quantity of interest at that time step. This operation is repeated to sum the contributions from all the time steps and return the PDF of the quantity of interest for the whole ELM-like pulse. A minimal sketch of this sampling procedure is given at the end of this appendix.

§.§ Reactions

The reaction rates and other coefficients used in the Bayesian calculations are obtained from ADAS<cit.> for the atomic reactions, and from Yacora<cit.>, AMJUEL<cit.> or a collection of reaction rates from Janev<cit.> for the molecular reactions. The reactions considered in this work are listed in <ref>, <ref>, <ref> and <ref>.
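For completeness, a minimal sketch (not the authors' code) of the sampling procedure used to combine the per-location PDFs into a pulse-integrated one, with toy log-binned PDFs standing in for the Bayesian outputs:

import numpy as np
rng = np.random.default_rng(0)

def sample_pdf(bin_edges, probs, n):
    """Draw n samples from a binned PDF (uniform within each log bin)."""
    idx = rng.choice(len(probs), size=n, p=probs / probs.sum())
    return rng.uniform(bin_edges[idx], bin_edges[idx + 1])

def pulse_pdf(per_location_pdfs, n=100_000, out_bins=50):
    """per_location_pdfs[t][r] = (bin_edges, probs) for time step t, radius r.
    Samples are summed over radii, then accumulated over time steps, and the
    resulting list is histogrammed to give the pulse-integrated PDF."""
    total = np.zeros(n)
    for pdfs_at_t in per_location_pdfs:          # loop over time steps
        step = np.zeros(n)
        for edges, probs in pdfs_at_t:           # sum over radial locations
            step += sample_pdf(edges, probs, n)
        total += step
    hist, edges = np.histogram(total, bins=out_bins, density=True)
    return hist, edges

# Two time steps x two radii, toy log-binned PDFs of radiated energy [J]:
edges = np.logspace(-2, 1, 11)
probs = np.ones(10)
toy = [[(edges, probs), (edges, probs)]] * 2
hist, hedges = pulse_pdf(toy)
print(hedges[np.argmax(hist)])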
http://arxiv.org/abs/2312.16356v1
{ "authors": [ "Fabio Federici", "Bruce Lipschultz", "Gijs R. A. Akkermans", "Kevin Verhaegh", "Matthew L. Reinke", "Ray Chandra", "Chris Bowman", "Ivo G. J. Classen", "the Magnum-PSI Team" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20231226235051", "title": "Effect of detachment on Magnum-PSI ELM-like pulses: II. Spectroscopic analysis and role of molecular assisted reactions" }
A Survey on Super Resolution for video Enhancement Using GAN

1st Ankush Maity, Dept. of Computer Engineering, Army Institute of Technology, Pune, India, [email protected]
2nd Sourabh Kumar Lenka, Dept. of Computer Engineering, Army Institute of Technology, Pune, India, [email protected]
3rd Roshan Pious, Dept. of Computer Engineering, Army Institute of Technology, Pune, India, [email protected]
4th Vishal Choudhary, Dept. of Computer Engineering, Army Institute of Technology, Pune, India, [email protected]
5th Prof. Sharayu Lokhande, Dept. of Computer Engineering, Army Institute of Technology, Pune, India, [email protected]

This compilation of research paper highlights provides a comprehensive overview of recent developments in image and video super-resolution using deep learning algorithms such as Generative Adversarial Networks. The studies covered in these summaries provide fresh techniques for addressing the issues of improving image and video quality, such as recursive learning for video super-resolution, novel loss functions, frame-rate enhancement, and attention model integration. These approaches are frequently evaluated using criteria such as PSNR, SSIM, and perceptual indices. These advancements, which aim to increase the visual clarity and quality of low-resolution video, have tremendous potential in a variety of sectors ranging from surveillance technology to medical imaging. In addition, this collection delves into the wider field of Generative Adversarial Networks, exploring their principles, training approaches, and applications across a broad range of domains, while also emphasizing the challenges and opportunities for future research in this rapidly advancing field of artificial intelligence.

Generative Adversarial Networks, artificial intelligence, frame rate, super-resolution, loss functions

§ INTRODUCTION

Improving picture and video quality has long been a goal in the fields of computer vision and image processing. Deep learning techniques, and in particular the use of Generative Adversarial Networks (GANs), have resulted in significant improvements in the field of image and video super-resolution in recent years. These advances have not only improved the visual clarity of low-resolution material, but have also opened new areas of application, such as surveillance, medical imaging and remote sensing. This compilation of literature review papers aims to provide a comprehensive review of the most recent developments in image and video super-resolution, with a special emphasis on the integration of deep learning approaches and GANs.
The studies reviewed here introduce distinct ideas, each tackling specific issues and pushing the frontiers of what is possible in the pursuit of higher-resolution images. The use of GANs as a framework for creating images and videos with high perceptual quality and realism is a significant theme studied in these publications. These methods use GANs to create content that not only closely resembles the original high-resolution images, but also has the depth of detail and visual quality that can be critical in a variety of applications. The literature reviews cover a broad array of topics, including both single image super-resolution and video super-resolution. They go into the complexities of loss functions, recursive learning, edge enhancement, and frame rate enhancement, illuminating how these techniques contribute to overall image and video quality improvement. The assessment of these approaches is a fundamental part of the literature discussed in this paper, utilizing metrics such as the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and perceptual indices (PIs) to evaluate the effectiveness of the proposed approaches; a sketch of the two fidelity metrics is given below. Additionally, these articles address the challenges associated with deep learning model training, the need for enormous quantities of labelled data, and the intricacies of degradation processes encountered in real-world conditions. As we go into each of these literature studies, we will investigate the new approaches, their potential impact on diverse sectors, and the future directions of this quickly evolving field. These studies demonstrate the transformative impact of deep learning and GANs in the pursuit of improved visual quality in photos and videos, ranging from the enhancement of CCTV footage to the restoration of medical images.

§ LITERATURE REVIEW

The author of [1] examines the field of image super-resolution in computer vision, which aims to improve image quality by increasing spatial dimensions. The task of achieving high perceptual quality is difficult and involves algorithms reconstructing high-quality images from low-resolution counterparts. In recent years, deep learning has been used to enhance the texture of low-resolution images. However, obtaining corresponding image pairs for training purposes is challenging, often resulting in the generation of synthetic low-resolution images. In order to address the challenge of achieving super-resolution results from low-resolution images with unknown degradation, the researchers propose a novel two-stage GAN system, with two architectural variations. This approach demonstrates the potential of learning to down-scale images in a low-level supervised environment, providing a powerful technique to improve super-resolution results. In conclusion, image super-resolution is a complex task, but this new GAN-based approach exhibits potential in overcoming the limitations of synthetic data and enhancing real-world image quality. The future work seeks to address the inherent issues associated with this ill-posed problem, where a low-resolution image can have multiple valid high-resolution interpretations, resulting in subjectivity in perceptual quality. The researchers are currently investigating the possibility of learning to reduce noise in low-resolution data to enhance the super-resolution output. This involves training models to denoise image degradations, potentially enhancing texture and detail reconstruction.
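For reference, the two fidelity metrics recur throughout the surveyed papers. A short self-contained sketch: PSNR in its standard form, and SSIM in its simplified single-window (global) form rather than the usual sliding-window version.

import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0, k1=0.01, k2=0.03):
    """Global SSIM over the whole image (single window)."""
    x, y = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx**2 + my**2 + c1) * (vx + vy + c2)))

# Toy check: a clean ramp image versus a noisy copy.
ref = np.tile(np.arange(256, dtype=np.float64), (256, 1))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.2f} dB, SSIM: {ssim_global(ref, noisy):.4f}")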
In practical applications, super-resolution holds promise in surveillance technology by improving the quality of recorded footage, facilitating better monitoring and decision-making without the need for expensive equipment. Super-resolution in computer vision provides a cost-effective approach to enhance image quality, preserving historical information and delivering high-fidelity content. However, existing techniques struggle with generalization, resulting in the presence of blurry artifacts. Researchers are investigating methods to learn real-world degradation for super-resolution, with encouraging outcomes in reducing the reliance on supervised image pairs. Nevertheless, addressing low-resolution noise may necessitate the utilization of dense networks for more efficient feature extraction from low-resolution images.

In [2], the authors introduce an advanced super-resolution reconstruction algorithm based on a GAN. The goal of this algorithm is to increase the resolution of low-resolution images by creating high-resolution images with enhanced details and visual effects. What sets this algorithm apart are the refined network model and loss function employed, both of which have undergone significant improvement and optimization. The incorporation of an auxiliary VGG-19 network aids in the extraction of image features, while a dilated convolution expands the receptive field, culminating in more impressive reconstruction outcomes. The algorithm is trained using the DIV2K dataset as the training set, with the Set5, Set14, and BSD100 datasets as the testing sets. The authors conduct an experimental analysis to demonstrate the feasibility of the proposed method and show a perceptible improvement in image quality in comparison to existing conventional models. The article provides an in-depth analysis of the structure of the proposed generator and discriminator network models, elucidating their intrinsic mechanisms. During the training process, low-resolution images are fed into the generator network to yield super-resolved images, which are then passed through the VGG-19 network for feature map extraction. The discrimination results are generated by comparing the feature maps with those of the high-resolution images, and the resulting loss is computed. The total loss, composed of multiple loss functions, is employed to train and update the network parameters. The experiments are conducted in a variety of settings and the results are evaluated using several metrics, including PSNR, SSIM, and MOS. The outcome of these evaluations demonstrates that the proposed method surpasses traditional convolutional neural networks, offering superior visual effects and higher MOS scores. The authors also highlight the necessity for more robust evaluation indicators to accurately assess the quality of image reconstruction. In conclusion, the proposed enhanced super-resolution reconstruction algorithm, based on a generative adversarial network, exhibits promising potential in achieving high-quality and aesthetically pleasing image super-resolution.

The introduction of paper [3] presents an innovative technique for enhancing image resolution known as MR-SRGAN (multi-branch receptive field dense block improved super-resolution generative adversarial network). This method aims to tackle the challenges of accurately extracting intricate texture features and achieving network convergence during the training phase.
To overcome these obstacles, the MR-SRGAN algorithm incorporates the new MBRS residual block and MRB module for extracting detailed texture features from images, while also improving training convergence by adjusting the loss function. In order to evaluate its performance, the algorithm is compared to the SRGAN algorithm through super-resolution experiments conducted on the Set5 and Set14 datasets. The results demonstrate that MR-SRGAN outperforms SRGAN by achieving a higher PSNR, thus demonstrating superior reconstruction quality. The capability to improve the resolution of imagery is essential for a variety of applications, including satellite reconnaissance, medical imaging, and industrial control, as it allows for clear visualization of information. Traditional methods that rely on interpolation often result in blurriness and visual fragmentation, while reconstruction-based approaches heavily rely on prior knowledge, making them susceptible to quality degradation when dealing with high magnification or limited input images. On the other hand, deep learning-based methods have exhibited better performance, with SRGAN being an advanced algorithm in this domain. Nonetheless, there is still room for improvement when it comes to accurately extracting texture features from images and achieving network convergence. The MR-SRGAN algorithm introduces the novel MBRS residual block and MRB module to enhance feature extraction and effectively address the convergence issue identified in the SRGAN algorithm. Through experimental evaluations, it has been observed that MR-SRGAN produces images with more intricate texture details and better edge information compared to other algorithms. Objectively, the algorithm achieves higher PSNR and SSIM values, indicating superior reconstruction quality. All in all, the MR-SRGAN algorithm demonstrates promising results in enhancing image super-resolution by effectively addressing the limitations present in existing methods. Future research endeavors could focus on further refining the model to expedite training and improve accuracy.

In [4] the author explores the utilization of a GAN (Generative Adversarial Network) to enhance the clarity of low-resolution images. The GAN employs a discriminator to distinguish between genuine and generated images, and it possesses the ability to restore edges and improve the overall visual quality of low-resolution images. The article introduces a proposed technique for image super-resolution through the GAN approach. The authors refer to past studies on GANs for super-resolution and emphasize the effectiveness of deep residual networks and advanced deep CNNs for enhancing images. They also address the issue of insufficient feature extraction and propose a GAN model that permits greater adaptability. The suggested technique involves two facets: capturing images using a micro lens array (MLA) and enhancing the resolution using the proposed GAN technique. The captured images undergo processing, and their resolution is enhanced before being merged for the complete visualization system. A honeybee example is employed, and a specific area of interest is chosen to diminish noise. The article further elaborates on the training of the GAN model using a loss function and supervised learning, and on the use of PSNR and other image quality indicators to assess the model's performance.
The authors conducted experiments using various image inputs and evaluated the quality of the generated images. They discovered that the suggested model performs admirably based on metrics such as PSD, SSIM, and PSNR. Additionally, it generates high-quality, high-resolution images in a brief span of time. In summary, the article introduces a GAN-centered methodology for enhancing the resolution of low-quality images, and the proposed technique demonstrates encouraging outcomes in improving image quality and producing high-resolution images.

Article [5] presents a benchmark for examining the ability of image and video super-resolution (SR) models to restore intricate details. The authors crafted a dataset encompassing intricate patterns that SR models often falter in restoring accurately. This methodology was used to evaluate 32 recent SR models and compare their ability to maintain the context of the scene. To verify the authenticity of the restored details, a comparative analysis was conducted using crowd-sourced data, and an objective evaluation metric was developed, referred to as ERQAv2.0. The findings unveiled that numerous SR methods concentrate on enhancing the perceived realism of the resulting image but may inadvertently generate flawed structural objects. Erroneous restoration of details can induce errors in object detection and identification. Consequently, the authors underscore the significance of preserving context in tasks that necessitate scene interpretation, such as video surveillance and dashboard cameras. The benchmark and subjective comparison brought to light that different SR models perform divergently based on the types of degradation in the input and the motion of the camera. Some models exhibited overfitting to specific downsampling methods, while others demonstrated greater stability and yielded similar outcomes in different tests. The introduction of noise to the images typically diminished the metric values and altered the rankings of the models. The authors put forth the ERQAv2.0 metric, which outshines other quality metrics in terms of its correlation with subjective scores for detail restoration. This metric takes into account how well the structure of objects is restored relative to the ground-truth imagery. This study offers profound insights into the capabilities of SR models in restoring intricate details and accentuates the importance of preserving context. The benchmark and objective assessment metric can prove to be invaluable tools for future research in the realm of SR-based work.

In [6], the authors note that SRGAN is an innovative method for obtaining high-resolution images from low-resolution inputs. However, SRGAN often produces images marred by unsightly artifacts. In this article, the authors present an enhanced SRGAN (ESRGAN) that seeks to improve the visual quality by optimizing three fundamental elements of SRGAN: the network architecture, the adversarial loss, and the perceptual loss. They introduce a novel building unit, the Residual-in-Residual Dense Block (RRDB), as the basis of the network. The authors choose to dispense with batch normalization layers and instead utilize residual scaling and smaller initializations to facilitate the training of a very deep network. Additionally, the authors refine their discriminator by introducing the relativistic average GAN (RaGAN) structure, which assesses the relative authenticity of images; a sketch of this idea is given below.
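The relativistic average discriminator can be summarized compactly: rather than asking "is this image real?", the critic scores how much more realistic a real image is than the average fake, and vice versa. A short PyTorch sketch of these losses as described for ESRGAN (the logits below are toy values):

import torch
import torch.nn.functional as F

def d_loss_ra(c_real, c_fake):
    """Relativistic average discriminator loss; c_* are raw critic logits."""
    real_vs_fake = c_real - c_fake.mean()
    fake_vs_real = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(real_vs_fake, torch.ones_like(real_vs_fake))
            + F.binary_cross_entropy_with_logits(fake_vs_real, torch.zeros_like(fake_vs_real)))

def g_loss_ra(c_real, c_fake):
    """Symmetric generator loss: fakes should look more real than the average real."""
    real_vs_fake = c_real - c_fake.mean()
    fake_vs_real = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(fake_vs_real, torch.ones_like(fake_vs_real))
            + F.binary_cross_entropy_with_logits(real_vs_fake, torch.zeros_like(real_vs_fake)))

# Toy logits for a batch of 4 images:
c_real, c_fake = torch.randn(4), torch.randn(4)
print(d_loss_ra(c_real, c_fake).item(), g_loss_ra(c_real, c_fake).item())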
Additionally, they bolster the perceptual loss by using features before activation, thereby providing more robust guidance for maintaining brightness consistency and recovering texture. The proposed ESRGAN consistently outperforms SRGAN in terms of visual quality, offering more lifelike and natural textures, and achieved the highest ranking in the PIRM2018-SR Challenge. The authors also provide the ESRGAN code. The article highlights the importance of single image super-resolution (SISR), the process of reconstructing high-resolution image data from low-resolution data, which has captivated the attention of both the research community and AI companies. Although approaches focused on PSNR have made strides in improving the numerical value, they often yield overly smoothed outcomes lacking high-frequency details. To shed light on the topic, the authors compare ESRGAN with state-of-the-art PSNR-oriented methods such as SRCNN, EDSR, and RCAN, as well as perceptually driven approaches like SRGAN and EnhanceNet. Their findings reveal that ESRGAN surpasses previous techniques in terms of sharpness and detail, resulting in more authentic textures and fewer artifacts. In essence, the authors propose ESRGAN as an enhanced version of SRGAN, one that achieves superior visual quality through improvements to the network architecture, adversarial loss, and perceptual loss, securing the top spot in a super-resolution challenge. As proposed in [7], the MDCN is a novel deep learning approach for SISR. This network incorporates both residual and dense connections to enhance the flow of information and address the issue of gradient vanishing. To further enhance its capabilities, the authors introduce a scale-recurrent framework that allows the network to super-resolve images at different scales using the same set of convolutional filters. Comparing the MDCN to existing super-resolution techniques, it provides state-of-the-art performance for smaller scale factors (2 and 3) and competitive performance for larger scale factors (4 and 8). The training procedure combines a low-level (L1) loss, a high-level perceptual (VGG) loss, and an adversarial loss to enhance both the objective (pixel reconstruction error) and visual (sharpness) quality of the super-resolved images. The paper delves into several aspects of the MDCN, including the impact of the scale-recurrent design, variable depth supervision, the number of dual-link units, the order of addition and concatenation connections, and the convergence curve. The findings reveal that the scale-recurrent design significantly enhances performance, and the MDCN network proves to be efficient in terms of parameter size and convergence. Extending the MDCN approach to video super-resolution, the authors employ a multi-image restoration technique. This allows the network to process multiple neighboring low-resolution frames and achieve superior results compared to existing video super-resolution methods. The proposed MDCN network therefore not only excels in SISR but also exhibits potential in video super-resolution by leveraging the power of multiple frames. By incorporating both objective and perceptual loss functions, the MDCN network succeeds in enhancing the quality of the super-resolved images.
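The loss combination used by perceptually driven SR methods such as ESRGAN and MDCN can be illustrated with a short sketch (the weights are illustrative placeholders, not the values used in the papers; vgg_features stands for a pretrained feature extractor):

import torch
import torch.nn.functional as F

def combined_sr_loss(sr, hr, vgg_features, d_logits_fake,
                     w_pix=1.0, w_perc=1.0, w_adv=1e-3):
    # Weighted sum of pixel (L1), perceptual (VGG feature space),
    # and adversarial terms.
    pixel = F.l1_loss(sr, hr)
    perceptual = F.l1_loss(vgg_features(sr), vgg_features(hr))
    adversarial = F.binary_cross_entropy_with_logits(
        d_logits_fake, torch.ones_like(d_logits_fake))
    return w_pix * pixel + w_perc * perceptual + w_adv * adversarial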
The various aspects explored in the paper, such as the scale-recurrent design and variable depth supervision, contribute to the efficiency and effectiveness of the MDCN network. The paper [8] examines the concept of image super-resolution, a technique that seeks to improve the resolution of low-resolution images or videos, where deep learning-based techniques have brought significant improvements in image quality. The paper introduces both supervised and unsupervised methods for super-resolution and explores the various convolutional neural networks (CNNs) employed in these approaches. Supervised super-resolution methods rely on paired low-resolution and high-resolution images for training. These methods can be further categorized into CNN-based techniques and those based on generative adversarial networks (GANs). CNN-based methods, such as SRCNN and LapSRN, utilize convolutional neural networks to learn to transform low-resolution images into high-resolution counterparts. GAN-based methods, such as SRGAN and ESRGAN, instead utilize the combined capabilities of a generator and a discriminator to improve the perceived quality of the reconstructed images. Unsupervised super-resolution methods, in contrast, do not require paired training data and can directly reconstruct real-world low-resolution images. One such method is ZSSR, which trains a small image-specific CNN during the testing phase using solely the low-resolution image itself. Another method, IKC, employs an iterative correction process to estimate the blur kernel responsible for image degradation and subsequently refine the super-resolution outcome. The paper also delves into various challenges and future prospects in super-resolution research. These encompass the development of stable no-reference quality evaluation indicators, the exploration of unsupervised super-resolution techniques, and the application of super-resolution models in real-world domains such as face recognition and medical imaging. In essence, the paper offers an overview of deep learning-based image super-resolution methods while emphasizing the differences between supervised and unsupervised approaches, detailing the CNNs used in these methods, and providing insights into future research directions in the field. In [9], the authors present a novel approach to enhance the quality of images and videos by utilizing a recurrent generative adversarial network named SR2 GAN. To achieve high precision in super-resolution tasks, they employ a convolutional neural network integrated with residual learning models. The proposed model uses a recursive approach to learn the transformation function necessary to synthesize high-resolution images, which is an effective solution to the problem of video super-resolution. This approach not only reduces model parameters and depth, but also enables the synthesis of high-quality images. The authors comprehensively test the proposed model against existing methods and find that SR2 GAN outperforms them with respect to PSNR and SSIM. The authors also provide the source code and supplementary materials for the proposed model. The network architecture comprises an encoder-decoder structure featuring residual and inverse residual blocks.
While the encoder captures an image's context through downsampling, the decoder reconstructs the image using the extracted context information. To maintain gradient flow and temporal consistency in video frames, skip connections and concatenation are employed. The model is trained on various datasets for image and video super-resolution, including renowned benchmark datasets like Set5, Set14, BSD100, Urban100, ApolloScape, and Cityscapes. The results demonstrate that SR2 GAN produces high-resolution images with improved visual sharpness and structural similarity in comparison to alternative techniques. The proposed approach is presented as a deep, concise, and effective solution for super-resolution of images and videos. The work was supported by STRB, India. Article [10] introduces a novel framework called FREGAN (Frame Rate Enhancement Generative Adversarial Network), which aims to enhance the frame rate of videos. By leveraging a sequence of past frames, the model predicts future frames to effectively increase the frame rate. The researchers employed the Huber loss as the loss function in the FREGAN model, leading to strong results in super-resolution. To evaluate the model's performance, standard datasets such as UCF101 and RFree500 were utilized. The proposed model achieved a PSNR of 34.94 and an SSIM of 0.95, the metrics employed to assess the model's effectiveness. The FREGAN model consists of a generator and a discriminator. The generator predicts an intermediate frame on the basis of two adjacent frames, while the discriminator network is used to distinguish between real and generated frames. A customized convolutional neural network (CNN) is employed in the discriminator, while a modified version of the UNet architecture is used in the generator. The proposed model was trained on datasets comprising 256x256-pixel, 3-channel images extracted from 30 fps and 24 fps videos. The training process used the Adam optimizer, and the delta parameter of the Huber loss function was fine-tuned to achieve optimal results. The experiments revealed that an optimal delta value of 0.5 yielded the highest PSNR and SSIM scores. When compared to other methods for frame rate enhancement, the proposed model outperformed them in terms of SSIM scores and achieved a PSNR score close to the best-performing method. The FREGAN model showcased robust performance, and the Huber loss function effectively handled noise in the images. The article thus introduces a promising approach to enhance the frame rate of videos, particularly in applications such as gaming and autonomous driving. Future refinements of the generator structure and the incorporation of additional intermediate frames have the potential to further improve the model's performance. The article [11] discusses the wide-ranging applications of super-resolution reconstruction in fields such as medical imaging, security monitoring, and remote sensing imaging. This research primarily focuses on enhancing low-resolution images to generate high-resolution counterparts using SISR. The advancement of CNNs has greatly improved super-resolution techniques, and attention models have recently been introduced to filter unnecessary visual information and emphasize relevant features.
This paper introduces a unique approach by combining an SRGAN with a convolutional block attention module (CBAM) to create a more lightweight network called AM-SRGAN (attention-model SRGAN). This integration is a significant contribution as it directly combines the attention module with the GAN, offering a fresh perspective on optimizing GAN network structures. The CBAM-embedded SRGAN simplifies the network structure and enhances its representational capabilities, potentially reducing the need for excessively deep networks, which can be challenging to train. The paper explores the use of attention models in the context of GANs for super-resolution reconstruction. The combination of CBAM with SRGAN introduces a more efficient and lightweight network architecture, demonstrating the potential to simplify complex super-resolution techniques while maintaining or even improving image quality. The introduction of neural networks into super-resolution reconstruction has significantly improved its effectiveness, and researchers continuously strive for better results by exploring new network structures and frameworks. However, as networks become deeper, training time increases and network robustness may decrease. Integrating attention models like the CBAM module into SRGAN offers an innovative approach: it filters important features, streamlining the generator's structure and accelerating the extraction of critical reconstruction features. The experimental results confirm that the attention model optimizes the network's construction, reducing complexity and reconstruction time. This study provides valuable insights for GAN-based computer vision and highlights the untapped potential of attention models in super-resolution reconstruction, promising further research in this area. In [12], the authors utilize GANs as a framework for the generation of natural frames with remarkable perceptual quality and realism. The GAN framework directs the generation process to regions of the search space that are more likely to contain photorealistic frames, thus moving closer to the domain of natural frames. To accomplish this, they utilize a Residual Network (ResNet) architecture within the GAN framework, setting a new benchmark in video super-resolution, particularly for high upscaling factors, as measured by PSNR. Their contribution is the introduction of a generator network that optimizes a unique perceptual loss, calculated using feature maps from the VGG network. This approach offers greater resilience to changes in pixel space compared to the traditional mean squared error (MSE)-based content loss. In the initial stage, the proposed algorithm was evaluated against various state-of-the-art video super-resolution methods. Following that, a qualitative assessment was conducted to evaluate the performance and outcomes of these different video super-resolution techniques. For the experiments, the authors utilized a publicly accessible video database containing high-quality material, which was used to train the algorithm and to conduct the experiments. The resulting video demonstrates super-resolved frames generated using different video super-resolution methods. In summary, they presented a video super-resolution (SR) algorithm that utilizes a generative adversarial network (GAN), achieving a new benchmark performance as measured by the widely recognized PSNR measure, together with a thorough analysis of different architectures compared to their algorithm.
Their results demonstrate that GAN-based improvements at significant scaling factors yield considerably more photorealistic reconstructions than those obtained from conventional reference methods. The authors in [13] explore the advancements of deep convolutional neural networks (CNNs) in video super-resolution (SR). Conventional approaches heavily rely on optical flow estimation for frame alignment, which often leads to the presence of artifacts. To overcome this challenge, they propose a novel end-to-end CNN that dynamically generates spatially adaptive filters using local spatio-temporal pixel channels. This approach eliminates the need for explicit motion compensation, resulting in improved alignment and temporal consistency. Furthermore, they incorporate residual modules with channel attention to enhance feature extraction. Through comprehensive evaluations on three public video datasets, their method demonstrates superior performance in terms of image clarity and texture preservation compared to state-of-the-art techniques. While deep CNNs have made significant progress in video super-resolution, the dependence on optical flow for frame alignment poses difficulties and introduces artifacts. To address this, the authors introduce a comprehensive CNN that generates adaptive filters based on local spatio-temporal pixel channels, eliminating the requirement for explicit motion compensation. This enhances alignment and temporal consistency and facilitates high-resolution frame restoration in the reconstruction network. By incorporating residual modules with channel attention, feature extraction is further improved. The approach outperforms existing techniques in terms of image clarity and texture preservation, as evidenced by evaluations conducted on three public video datasets. The authors present a one-stage framework for video super-resolution (VSR) that aims to reconstruct high-resolution videos without the need for explicit motion compensation. They suggest the implementation of a spatio-temporal alignment network (STAN) to achieve temporal alignment in the feature domain. This improves the integration of multi-frame data and enhances the overall temporal consistency of the video. Furthermore, they introduce a comprehensive VSR network composed of the STAN for alignment and the reconstruction network for temporal information aggregation. This design enables the network to effectively analyze inter-frame spatial information and learn to align neighboring video frames. Through extensive testing, it has been demonstrated that the STAN is more efficient than existing two-stage networks and can compete with the most advanced SISR and VSR techniques. In [14], the author examines the need to improve the visual perception of compressed video, particularly in the context of the video coding standard H.265/HEVC. While HEVC provides efficient compression, it unavoidably results in the loss of video information during compression and transmission, thereby impacting both objective and subjective quality. To tackle this issue, generative adversarial networks (GANs) have garnered significant attention for their capacity to reduce artifacts and elevate visual quality in image and video enhancement tasks. GANs accomplish this by learning a mapping between image distributions, minimizing the disparity between the distributions of generated and training set images through an adversarial loss.
The evaluation of image and video restoration typically depends on full-reference techniques employing metrics like PSNR and SSIM. However, real-world transmission tasks often lack access to the original images, necessitating non-reference assessments. The Perceptual Index (PI), introduced in the 2018 PIRM Challenge, quantifies the difference between the image representation of the training dataset and the representation of the reconstructed video frames, for the purpose of assessing visual quality without a reference image. Generative adversarial networks (GANs) provide a principled way to balance objective distortion against visual perception quality through their loss functions. These include a term controlling distortion, i.e. the pixel-wise difference between the actual and the reconstructed image, and a term controlling perceptual quality, i.e. the discrepancy between the generated image distribution and the natural image distribution. The paper outlines a wide range of image restoration techniques, including techniques that focus on super-resolution and on reducing compression artifacts. Numerous methods harness the power of GANs, such as SRGAN and ESRGAN, which employ perceptual loss functions and adversarial training to enhance image quality. Drawing inspiration from the success of GANs in improving the visual perception of super-resolution images, this paper proposes the application of GAN properties to improve the visual performance of HEVC-compressed videos. It outlines a GAN-driven algorithm designed to improve the visual fidelity of HEVC-compressed video. By reducing the disparity between the image distribution produced by the GAN generator (G) and the training set distribution via an adversarial loss, it is possible to restore the visual fidelity of video frames. The HEVC-compressed images effectively direct the generation network of the GAN to learn the mapping from encoded images to their original images, a process that the GAN's discriminator network continually improves upon. Their experiments demonstrate the effectiveness of the methodology. In [15], the author explores the realm of artificial intelligence (AI), which has witnessed an extraordinary surge in growth and data accumulation. Within this realm, machine learning has emerged as a crucial player, particularly in its subset known as representation learning. This subset has proven to be invaluable in extracting valuable information from challenging data scenarios. One notable technique within representation learning is deep learning, which excels in deriving abstract, high-level features from data. Machine learning can be categorized into two distinct sub-categories: supervised and unsupervised. The latter has gained increasing importance due to the difficulties associated with collecting labeled data. To address this challenge, generative models like generative adversarial networks (GANs) have revolutionized unsupervised learning by harnessing game theory principles to generate realistic data, bypassing the need for Markov chains or approximate inference. The primary focus of this survey is to delve into the state of the art in GANs, providing comprehensive insights into their definitions, motivations, applications, and training techniques. Furthermore, it explores various evaluation metrics and the diverse fields where GANs have found application.
The survey concludes by analyzing the limitations of GANs and offering valuable insights into potential areas for future research in this dynamic and rapidly evolving field. In summary, this paper offers an extensive overview of the research context surrounding GANs, delving into their fundamental principles and exploring their derivative models and diverse applications across different domains. Additionally, it tackles the topics of evaluation metrics and training strategies. Finally, it highlights the current challenges faced by GANs and identifies potential avenues for future research. In [16], the authors propose a novel model that combines a GAN with an edge enhancement method for video super-resolution reconstruction. The main goal of the model is to improve the image quality of low-resolution and blurry videos, including those recorded by CCTV cameras. By employing the GAN framework, the model is able to generate remarkably realistic and detailed super-resolution frames; this is achieved by training discriminators to differentiate between the enhanced frames and the original frames. Additionally, the model incorporates an edge enhancement function which further improves the results by accentuating the edges through a Laplacian edge module. Furthermore, the model integrates a perceptual loss mechanism to enhance the overall visual quality. The results of the experiments performed on a variety of datasets, including the well-known Vid4 dataset, demonstrate the robustness of the proposed approach. The article highlights the importance of video super-resolution in computer vision and its broad-reaching applications in areas such as remote sensing and medical imaging. It examines the various types of super-resolution, including SISR and VSR, while also highlighting the advancements made possible through deep learning-based approaches. The article also acknowledges the challenges encountered when training deep learning models, particularly the need for large amounts of paired LR-HR data and the complexities associated with the degradation process in real-world scenarios. In conclusion, the proposed model, which implements both GAN and edge enhancement techniques, exhibits highly promising outcomes in the domain of video super-resolution reconstruction. By combining the potential of deep learning with edge enhancement, this model is able to generate super-resolution frames that are not only lifelike but also highly detailed. The conducted experiments successfully demonstrate the advantages of the proposed method across various datasets, including low-resolution videos acquired through CCTV cameras and other real-world scenarios. Ultimately, this model has the potential to significantly elevate the quality of low-resolution videos and enhance numerous computer vision tasks. § CONCLUSION The field of video super-resolution and frame acceleration has seen an unprecedented amount of research due to the growing power of AI and its application in generating higher-quality multimedia from old and poor-quality video for a better user experience. This survey of research papers highlights the diverse approaches and methodologies that various researchers have developed in order to solve the challenges of super-resolution and frame generation.
Key takeaways from this exploration include the use of SRGAN variants to generate super-resolved versions of frames, recovering details that might have been lost at lower resolutions, and the use of GAN-based frame generation, predicting intermediate frames to increase the smoothness of videos. The quality of the enhanced videos generated has been steadily improving. These innovations promise to restore damaged or substandard video and aid in the revival of cultural and scientific knowledge. b1 Molefe, Molefe, and Richard Klein. "Image Super-Resolution Using Generative Adversarial Networks with Learned Degradation Operators." MATEC Web of Conferences. Vol. 370. EDP Sciences, 2022. b2 Jia, Rongzhao, and Xiaohong Wang. "Research on super-resolution reconstruction algorithm of image based on generative adversarial network." Journal of Physics: Conference Series. Vol. 1944. No. 1. IOP Publishing, 2021. b3 Xiang, Nan, Bin Tang, and Lu Wang. "Image Super-Resolution Method Based on Improved Generative Adversarial Network." 2022 IEEE 5th International Conference on Electronics Technology (ICET). IEEE, 2022. b4 Reddy, K. Srinivasa, et al. "Implementation of super resolution in images based on generative adversarial network." 2022 8th International Conference on Smart Structures and Systems (ICSSS). IEEE, 2022. b5 Lyapustin, Eugene, et al. "Towards true detail restoration for super-resolution: A benchmark and a quality metric." arXiv preprint arXiv:2203.08923 (2022). b6 Wang, Xintao, et al. "ESRGAN: Enhanced super-resolution generative adversarial networks." Proceedings of the European Conference on Computer Vision (ECCV) Workshops. 2018. b7 Purohit, Kuldeep, Srimanta Mandal, and A. N. Rajagopalan. "Mixed-dense connection networks for image and video super-resolution." Neurocomputing 398 (2020): 360-376. b8 Yang, Zhuoyuan, Ping Shi, and Da Pan. "A survey of super-resolution based on deep learning." 2020 International Conference on Culture-oriented Science & Technology (ICCST). IEEE, 2020. b9 Thawakar, Omkar, et al. "Image and video super resolution using recurrent generative adversarial network." 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2019. b10 Ou, Chaojie, and Fakhri Karray. "Enhancing driver distraction recognition using generative adversarial networks." IEEE Transactions on Intelligent Vehicles 5.3 (2019): 385-396. b11 Yang, Bin, et al. "Super-resolution generative adversarial networks based on attention model." 2020 IEEE 6th International Conference on Computer and Communications (ICCC). IEEE, 2020. b12 Gopan, Karthika, and G. S. Kumar. "Video super resolution with generative adversarial network." 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI). IEEE, 2018. b13 Wen, Weilei, et al. "Video super-resolution via a spatio-temporal alignment network." IEEE Transactions on Image Processing 31 (2022): 1761-1773. b14 Wang, Ting, et al. "Visual perception enhancement for HEVC compressed video using a generative adversarial network." 2020 International Conference on UK-China Emerging Technologies (UCET). IEEE, 2020. b15 Pan, Z., Yu, W., Yi, X., Khan, A., Yuan, F. and Zheng, Y., 2019. Recent progress on generative adversarial networks (GANs): A survey. IEEE Access, 7, pp. 36322-36333. b16 Wang, J., Teng, G. and An, P., 2021. Video super-resolution based on generative adversarial network and edge enhancement. Electronics, 10(4), p. 459.
http://arxiv.org/abs/2312.16471v2
{ "authors": [ "Ankush Maity", "Roshan Pious", "Sourabh Kumar Lenka", "Vishal Choudhary", "Prof. Sharayu Lokhande" ], "categories": [ "eess.IV", "cs.CV", "cs.LG", "cs.MM" ], "primary_category": "eess.IV", "published": "20231227084138", "title": "A Survey on Super Resolution for video Enhancement Using GAN" }
The challenge in biomarker discovery and validation using machine learning from omics data lies in the abundance of molecular features but scarcity of samples. Most machine learning-based feature selection methods necessitate hyperparameter tuning, typically performed by evaluating numerous alternatives on a validation set. Every evaluation has a performance estimation error, and when the selection takes place between many models the best ones are almost certainly overestimated. Biomarker identification is a typical multi-objective problem with trade-offs between the predictive ability and the parsimony in the number of molecular features. Genetic algorithms are a popular tool for multi-objective optimization, but they evolve numerous solutions and are prone to overestimation. Methods have been proposed to reduce the overestimation after a model has already been selected in single-objective problems, but to the best of our knowledge no algorithm existed that was capable of reducing the overestimation during the optimization, leading to a better model selection, or that had been applied in the more general domain of multi-objective problems. We propose DOSA-MO, a novel multi-objective optimization wrapper algorithm that learns how the original estimation, its variance, and the feature set size of the solutions predict the overestimation, and adjusts the expectation of the performance during the optimization, improving the composition of the solution set. We verify that DOSA-MO improves the performance of a state-of-the-art genetic algorithm on left-out or external sample sets, when predicting cancer subtypes and/or patient overall survival, using three transcriptomics datasets for kidney and breast cancer. Data and source code availability. The gene expression data used in this study is from public repositories. The source code and detailed numerical results will be available in a public server after peer review. § INTRODUCTION Molecular biomarker discovery with machine learning (ML) is usually limited by data that includes many features but few samples. This renders the models prone to overfitting and the evaluation prone to estimation error. Many ML approaches involve hyperparameter tuning and feature selection, with model selection based on the performance measured on a validation set within a training, validation, and test paradigm. Comparing many models and choosing the best is prone to overestimation, i.e. the difference between the performance measured on the validation set and the real performance, a.k.a. the “winner's curse”. The models that fit the noise present in the validation set are advantaged, a phenomenon sometimes referred to as overfitting on the validation set <cit.>. This is also exacerbated by data scarcity.
In the quest to reach higher accuracies, increasing the number of hyperparameter configurations may lead to selected models with better performance on the validation set, but a corresponding increase in overestimation may determine diminishing or even negative improvements on the test set. In biomarker selection, both the predictive performance (in terms of disease type, survivability, etc.) and the practical applicability are crucial factors. A biomarker that includes a small number of molecular features requires fewer resources when applied clinically, and thus might be preferred over a slightly more accurate but more expensive or time-consuming alternative. Characterizing all the best compromises (or trade-offs) between predictive value and feature set size is a multi-objective (MO) optimization problem <cit.>, and it can be solved by means of MO feature selection (MOFS) techniques <cit.>. These techniques aim to identify not just a single best solution, as in single-objective (SO) problems, but rather a Pareto front of solutions. This front is the set of optimal solutions that illustrate the trade-offs between different objectives in MO optimization. However, all candidate solutions (or biomarker models selected by the employed MO feature selection technique) are evaluated on the validation set, which can result in the overestimation of the performance of the selected models. K-fold cross-validation (CV) is the most common methodology for ML assessment, and its benefits and limitations have received much attention in the SO case <cit.>. K-fold CV returns a model trained on all the available samples and an estimation of its performance computed by averaging k CV results. The obtained model performance is usually underestimated when only one hyperparameter configuration is used. However, it tends to be overestimated when multiple configurations are evaluated and the model with the best performance is returned <cit.>. While some methods have been proposed to improve the estimation of the performance when k-fold CV is applied for model selection, their design and application have been limited to SO problems, which are a special subset of the more general MO problem. To the best of our knowledge, no attempt has been made to improve the performance estimation in MO problems like the identification of the best biomarkers in high-dimensional omics data. MO problems may require more model evaluations when many good trade-offs exist to be discovered. It may be argued that this offers the possibility to model differences in overestimation as a function of the position in the non-dominated front. Restricted to SO problems, approaches exist to improve the performance estimation of a selected model, but the model is still selected using the unadjusted estimation, so there is no improvement in the model selection. In the model selection process, where solutions are ranked based on a model evaluation metric, simply subtracting a constant from this metric will not affect the ranking and, therefore, the model selection. If instead the adjustment takes into account some characteristic of the models, such as the variance in performance during the validation phase or the feature set size, it might change their order.
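A toy numerical example, with made-up validation scores and an arbitrary per-feature penalty, illustrates the point:

# (validation score, number of features) for three candidate models.
models = {"A": (0.82, 40), "B": (0.80, 5), "C": (0.78, 12)}

# Subtracting a constant leaves the ranking unchanged: A still wins.
constant = {k: s - 0.05 for k, (s, n) in models.items()}

# A size-dependent adjustment (0.002 per feature, purely illustrative)
# changes the selected model: B now wins.
adjusted = {k: s - 0.002 * n for k, (s, n) in models.items()}

print(max(constant, key=constant.get), max(adjusted, key=adjusted.get))  # A B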
It may be argued that such an adjustment could improve the model selection if the model selection used the adjusted estimations. While the quality of the performance estimation can be easily measured in SO problems as the absolute difference between the performance expected by the algorithm and the performance measured on new samples, to the best of our knowledge no such measure exists for MO problems, where the results are solution sets. Here we propose two new measures for the quality of the performance estimation. In Cattelani et al., Fig. 1 and Supplementary Fig. 1 show examples of solution set performance obtained after executing MO optimization for breast cancer subtype biomarker discovery <cit.>. The optimizer explored the best trade-offs between balanced accuracy and number of features, and used 3-fold CV internally to evaluate the solutions. The overestimation (difference between the “inner cv” and “test” dots in the figures) of the balanced accuracy increases as the number of features increases. Analogously, the overestimation increases when the estimation increases. Correlations like these suggest that characteristics of the solutions might be predictive of the overestimation. To the best of our knowledge, no previous work has tested the effectiveness of methods for mitigating the overestimation in ML applications with 2 or more objectives. Additionally, no previous work applied the adjustment to the performance estimation prior to the selection of the solution set, thus affecting the selection. Consequently, this approach is also yet to be explored in biomarker discovery within large-scale omics data, where adjusting the expected performance of evaluated biomarker models across multi-cohort datasets could lead to a more robust and reliable biomarker selection. We present a new MO optimization algorithm that wraps any other MO optimizer to produce temporary solutions. These solutions are then employed to train regression models, which are tasked with predicting performance overestimation. Subsequently, the algorithm executes a wrapped MO optimizer once more, but with a key distinction: it now utilizes the regression models to deliver adjusted fitness evaluations, enhancing the model selection process. The final resulting models are then trained on all the available samples, so no samples are lost in order to compute their expected performance on new data. In our benchmarking study, which concentrates on selecting cancer biomarkers from transcriptomic data, we have experimentally verified that incorporating information from the original performance estimation, its variance, and the size of the feature set can enhance both the performance and its estimation. This improvement is evident when predicting cancer subtypes and patient overall survival, particularly on test sets that are either excluded from the original dataset or entirely external. Tsamardinos et al. <cit.> compare double CV, the Tibshirani and Tibshirani method <cit.>, and nested CV in their ability to improve the estimation of the fitness for SO problems. These algorithms adjust the fitness estimation but do not affect the model selection, and the resulting model is the same that would be selected with simple k-fold CV with hyperparameter optimization. Automated ML (AutoML) tools try numerous combinations of models and hyperparameters and return the best-performing model and an estimation of its performance. In Tsamardinos et al. six AutoML tools are compared <cit.>.
Of these, only one has a predictive performance estimation strategy that adjusts for multiple model validations (limited to SO problems), while the majority of the tools need to withhold a test set for an unbiased estimation of the performance of the winning model, thus losing samples from the final model training. Differently from our proposed algorithm, none of the considered AutoML tools applies a performance adjustment in MO problems, and no considered tool uses the performance adjustment during the optimization for improving the selection of the winning model. To the best of our knowledge, our approach is the first that can be applied to the more general MO problems and that improves the selection of the solutions in addition to their fitness estimation. § METHODS The DOSA-MO algorithm wraps an MO optimizer, serving two main purposes: improving the estimate of the performance of the solutions, and increasing the overall performance of the solution set (a set of biomarker models in our case study). The data-driven approach complicates the measurement of these achievements, as CV yields varied performance metrics. These include the performance anticipated by the optimizer, based on training data, and the actual performance measured on left-out samples post-optimization. To our knowledge, there is no established metric for the error in evaluating the predictive performance of solutions in MO problems within a CV framework. In single-objective scenarios, one might simply assess the absolute difference between the fitness expected from training data and the fitness observed on new data. However, in MO scenarios, it is crucial to consider each solution's contribution to the Pareto front. In this section, we introduce two novel metrics to measure this estimation error in MO CV setups. Most importantly, we introduce a novel wrapper algorithm to improve the precision of the fitness estimation and the overall performance in MO optimization, and prove its effectiveness in feature selection for biomarker discovery. Our wrapper algorithm follows a two-stage process using inner MO optimizers. In our case study, these optimizers are genetic algorithms (GAs) specifically designed for MO problems and paired with supervised ML algorithms, such as the Gaussian Naïve Bayes (NB) classifier to distinguish different cancer subtypes or the Cox proportional-hazards model (Cox) for survival analysis. In the first stage, regression-based models are trained to estimate the overestimation, drawing on characteristics of the solutions selected by an MO optimizer. Key characteristics include the original performance, its variance, assessed using only training data, and the size of the feature set. Subsequently, in the second stage, these trained regression models are employed to enhance the evaluation of solutions by an inner MO optimizer. This step aims to refine the selection of the solutions, thereby improving the overall results of the process. We have benchmarked our algorithm, DOSA-MO, in the context of MOFS for cancer biomarker discovery. This was done using gene-based transcriptomic datasets from The Cancer Genome Atlas (TCGA) project <cit.>. Specifically, our goal is to identify biomarkers for categorizing cancer subtypes and survival prediction in kidney and breast cancer patients through a MOFS approach. For breast cancer, an additional cohort from The Sweden Cancerome Analysis Network - Breast (SCAN-B) <cit.> serves as an external validation set.
Moreover, as inner MO optimizer, we use a novel modification of the Non-dominated Sorting Genetic Algorithm III (NSGA3) <cit.>, NSGA3 with clone-handling method and specialized mutation operator (NSGA3-CHS), with NB or Support Vector Machine (SVM) as inner classifiers, and Cox as inner survival model. We compare the unadjusted optimizer with adjustments by 5 different regression models, using both internal-external CV and external validation. §.§ Measuring overestimation in multi-objective problems We propose two methods to measure the error of the performance estimates for the solutions to MO problems: the MO performance error and the Pareto delta. §.§.§ Multi-objective performance error We define the MO performance error E_υ starting from the HyperVolume (HV) computed on the train performance (H_ι) and the Cross HyperVolume (CHV) (H_υ). H_ι and H_υ have been formally defined by Cattelani et al. <cit.>. Since H_υ is a family of functions, whose specific instantiation depends on the user-provided function υ, E_υ is a family of functions too. E_υ(X,X')=H_ι(X,X')-H_υ(X,X') Where X encodes the performance of the solutions evaluated on the train data, and X' the performance evaluated on the test data. In our case study we use the same instantiation of υ as in Cattelani et al.: the function λ <cit.>. E_υ has a simple definition and can be computed very efficiently if the experimental setup already includes the measuring of H_ι and H_υ. Intuitively, it is the difference between the aggregated performance of the whole solution set as expected by the optimizer taking into account only train data, and the aggregated performance of the same solutions when applied to never before seen test data by decision makers that must choose their preferred solution informed by train performance only. §.§.§ Pareto delta The MO performance error is a metric that derives straightforwardly from the train HV and the CHV. This metric does not pinpoint the specific origins of differences between these two measures. Even though the CHV inherently considers the differences between train and test performance inside its formulation, increases in performance in part of the solutions can even out with decreases in another part, leading to a reduced performance error measurement despite these differences between train and test performance of opposite sign. To address this issue, we introduce an additional measure called the Pareto delta. It has the property of being equal to 0 only when there is no difference in performance between train and test data for all elements of the solution set. To calculate the Pareto delta, we first sum up, for each objective, the absolute error in fitness estimation of each solution multiplied by the derivative of the HV with respect to that specific objective and solution; we then average across all objectives. Let n be the number of solutions and m the number of objectives. Let X be an n× m matrix where x_i,j is the train performance for the j^th objective of the i^th solution, with 1≤ i≤ n and 1≤ j≤ m. Similarly, let X' be an n× m matrix where x'_i,j is the test performance for the j^th objective of the i^th solution. Let H_ι be the HV computed on the train performance X. ∂ H_ι / ∂ x_i,j is the partial derivative of the HV with respect to x_i,j. We define the Pareto delta function (P_Δ) as P_Δ(X,X')=1/m∑_j=1^m∑_i=1^n |x_i,j-x'_i,j| ∂ H_ι/∂ x_i,j In the special cases with 0 dimensions or 0 solutions we define P_Δ as 0.
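As an illustration, the following sketch computes P_Δ in the common case of two maximized objectives with reference point at the origin, where the HV and its partial derivatives have a simple closed form (the code assumes a strictly non-dominated front given as floating-point arrays and is a didactic sketch, not the original implementation):

import numpy as np

def hv_and_gradient_2d(front):
    # Hypervolume of a 2-objective maximization front w.r.t. the
    # reference point (0, 0), and its partial derivatives.
    order = np.argsort(-front[:, 0])          # descending by objective 1
    f = front[order]
    hv, prev_y = 0.0, 0.0
    grad = np.zeros_like(f)
    for i in range(len(f)):
        hv += f[i, 0] * (f[i, 1] - prev_y)    # strip owned by solution i
        grad[order[i], 0] = f[i, 1] - prev_y  # dHV / dx_{i,1}
        next_x = f[i + 1, 0] if i + 1 < len(f) else 0.0
        grad[order[i], 1] = f[i, 0] - next_x  # dHV / dx_{i,2}
        prev_y = f[i, 1]
    return hv, grad

def pareto_delta(train_perf, test_perf):
    # Mean over objectives of the HV-gradient-weighted absolute errors.
    if len(train_perf) == 0:
        return 0.0
    _, grad = hv_and_gradient_2d(train_perf)
    return float(np.mean(np.sum(np.abs(train_perf - test_perf) * grad, axis=0)))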
As long as the differences x_i,j-x'_i,j are small, P_Δ is proportional to the difference between the volume of the geometrical union of the space under the train and test fronts and the volume of the intersection of the same two spaces. §.§ DOSA-MO The DOSA-MO is a wrapper algorithm that implements a two-phased approach. Initially, regression models are trained to predict the overestimation, drawing on the selected solutions' characteristics: in our case study, their original fitness, its variance, and the feature set size. The original fitness and its variance are general characteristics of solutions to any ML problem, while the feature set size is specific to feature selection solutions. Then, in the subsequent phase, these regression models are used to refine the MO optimizer's solution assessment, focusing on better model selection to enhance overall performance. In more detail, DOSA-MO consists of three main steps. 1. Generating solution sets for overestimation prediction. DOSA-MO collects solutions to be used as training samples to learn how to adjust the objective functions that are used to evaluate solutions for supervised learning tasks, such as classification accuracy. This consists of running MO optimizers in a k-fold CV loop. For each fold, a solution set is produced using only training data, and its performance is measured on the left-out samples. 2. Training of regression models for overestimation. For each objective DOSA-MO trains a regression model on the samples collected during step 1. Each sample contains as independent variables three meta-features of the solutions that are potentially predictive of the overestimation. They are the original fitness (i.e. the fitness used by the optimizer to choose the best solutions, measured using only training data, applying inner k-fold CV in our case study), the standard deviation of that fitness (we measure it using bootstrap), and the number of features included in the solution (the number of genes forming the biomarker in our case study). The dependent variable is the overestimation: the difference between the original fitness and the fitness that is computed on new data through CV. Not all solutions can be considered equally important: we might expect solutions that are in crowded areas of the Pareto front to be selected less often by a decision maker. To take this into account, each sample is weighted according to the partial derivative of the HV (constructed using the original fitnesses) with respect to the considered solution and objective. The weights for each fold and objective are scaled to sum to 1. A regression model is trained on these samples for each objective. We minimize the absolute error, when allowed by the specific model, since the impact of an error on the HV is approximately linear for small errors. 3. Generating the solution set using adjusted performance. A second MO optimizer is deployed to generate a solution set (each solution refers to a feature set in our case study), with objectives that are systematically adjusted by the previously trained regression models for overestimation. In the revised process, each original objective function is replaced by a pipeline that initially computes the function's result, its standard deviation, and the feature count of the solution. This data is then fed to the corresponding adjuster regression model, which predicts the overestimation. The final adjusted performance is calculated by deducting the predicted overestimation from the original fitness value.
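A minimal sketch of such an adjusted objective pipeline could look as follows (the evaluate_with_std and num_features interfaces are hypothetical placeholders for the wrapped objective and solution objects):

class AdjustedObjective:
    def __init__(self, objective, adjuster):
        self.objective = objective  # original fitness function
        self.adjuster = adjuster    # trained overestimation regressor

    def evaluate(self, solution, data):
        # Original fitness and its bootstrap standard deviation.
        fit, std = self.objective.evaluate_with_std(solution, data)
        meta = [[fit, std, solution.num_features()]]
        # Subtract the predicted overestimation from the raw fitness.
        return fit - self.adjuster.predict(meta)[0]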
The solution set generated by the MO optimizer during this final run represents the output of the whole DOSA-MO. §.§.§ Pseudocode formulation of the DOSA-MO algorithm In order to wrap any MO optimizer, the DOSA-MO must be polymorphic, so we define an abstract class MultiObjectiveOptimizer representing a generic MO optimizer (moopt). It has a single method optimize that works with provided objectives and training data. The DOSA-MO is a MultiObjectiveOptimizer itself (ao). MultiObjectiveOptimizer abstract class definition. DOSA-MO class definition. The method new is just a simple constructor that saves the polymorphic parts of the algorithm: the tuningOptimizer, an MO optimizer used to create the samples for the adjusting regression, the actual regression model (adjusterLearner), and the mainOptimizer, that is, the MO optimizer that uses the objectives adjusted by the regressions to produce the results for the user. The DOSA-MO implementation of optimize first organizes the data into folds. The resulting object, foldsData, contains the data itself and also the description of how it is partitioned into folds. DOSA-MO then executes the tuningOptimizer on each fold and collects the results, which are a set of solutions for each fold. The set of solutions returned by each optimizer is optimizer-dependent in general, but in our experiments we used the Pareto front of all the solutions that were explored. For each objective obj the algorithm assigns weights to the solutions: for each tuningOptimizer result set h, the solutions receive a weight that is proportional to the partial derivative of the HV of the result set they belong to with respect to the solution and the current objective obj. An adjuster regression model is trained for the current objective obj using the function trainAdjuster, which receives in input a regression model adjusterLearner, the current objective obj, the tuningOptimizer solution set for each fold, the data including folds information (foldsData), and the weights of the samples (weights). The returned adjuster predicts how much the fitness of a solution changes between the training sample set and an unseen testing sample set. The function trainAdjuster has its own detailed description below. Using the previous objective obj and the adjuster, a new adjusted objective is created that, when evaluating a solution, first uses the old fitness function to compute a temporary fitness, then adjusts this fitness by subtracting the prediction of the adjuster. Finally, DOSA-MO runs the mainOptimizer on the whole trainingData, using the adjusted objectives instead of the original objectives. The trainAdjuster function trains a fitness adjuster regression model for one of the objectives using as samples the solutions resulting from running the tuningOptimizer on all the folds defined by foldsData. The solutions are weighted: each solution has a weight proportional to the HV partial derivative with respect to the considered objective, with the weights for each fold summing to 1. trainAdjuster function definition. For each fold i as defined by foldsData, trainAdjuster prepares the samples for training the adjuster regressor (ta). The samples are prepared separately for each fold, then used together in training. The function evaluateWithCV assigns two fitnesses to each solution: one computed on the train data of the current fold i, and another computed on the test data. It also computes the standard deviation of the train fitness with the bootstrap method. The differences between train and test performance are the values that the regression will learn to predict.
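As an illustration of this training step, a minimal scikit-learn sketch is given below; the function name and array-based interface are hypothetical, and the pruned decision tree stands in for any of the regression models considered in the next subsection:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_adjuster(meta_features, overestimation, weights):
    # meta_features: one row per solution (original fitness, its
    # standard deviation, feature count); weights: HV-derivative-based
    # sample weights. Absolute error is minimized, as in the text.
    model = DecisionTreeRegressor(criterion="absolute_error")
    model.fit(np.asarray(meta_features), np.asarray(overestimation),
              sample_weight=np.asarray(weights))
    return model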
To do so, the regression has 3 input meta-features: the original fitness, i.e. the performance on the train data, the standard deviation of the original fitness, and the number of features that are included in the solution (the number of genes that are included in the biomarker in our case study). The DOSA-MO is a wrapper algorithm that uses an MO optimizer to produce the training data for the regression models and a (possibly different) MO optimizer that will run with the adjusted objectives to produce the final results for the user. In our case study we used the NSGA3-CHS algorithm as both tuning and main optimizer. The population size and number of generations were lower for the tuning optimizer in order to reduce the computational load. The modified Non-Dominated Sorting Genetic Algorithm II (NSGA2) algorithms NSGA2-CH and NSGA2-CHS were first introduced in <cit.> and further validated in <cit.>. In this work we analogously modified NSGA3 with the same capabilities of NSGA2-CHS, obtaining NSGA3-CHS. For this purpose we designed a more general algorithm: NSGA*. NSGA* class definition. nsgaStar describes the algorithm NSGA*, a further generalization of the generalized NSGA2 algorithm presented in <cit.>. NSGA* generalizes NSGA2 and NSGA3, together with their modifications NSGA2-CH, NSGA2-CHS, NSGA3-CH, and NSGA3-CHS. The differences between the algorithms derived from NSGA2 and NSGA3 reside in the tournament and in the sort routines. NSGA2 uses a tournament by position: the individuals are paired if their position divided by 2 is the same <cit.>. NSGA3 uses a random tournament where pairs are selected randomly with replacement <cit.>. Both NSGA2 and NSGA3 use a hierarchical sort where individuals are first sorted by their domination front. The secondary sorting is different: it is based on the crowding distance in NSGA2 <cit.> and on reference point niching in NSGA3 <cit.>. The modifications CH and CHS use a primary sorting according to the clone index <cit.>, and the original sorting of NSGA2 or NSGA3 as secondary sorting. The class NSGA* has a constructor that memorizes the algorithm-specific parameters and strategies: the number of generations, population size, feature importance function, sorting algorithm, tournament strategy, mutation operator, and a flag for the use of clone repurposing. Since the DOSA-MO uses adjusted objective functions, NSGA* takes an objectives object in input; it is used by the evaluate function and represents the NSGA* ability to run with different objective functions. The algorithm is similar to generalized NSGA2 <cit.>, but with the added possibility of personalizing the tournament strategy, sorting algorithm, and objectives. It can be used as inner MO optimizer by the DOSA-MO and can be specialized also as NSGA3, NSGA3-CH, and NSGA3-CHS. §.§.§ Considered adjusting regression models Five regression models have been considered in the experimental validation for adjusting the fitness functions. All of them use the sample weights, computed by the DOSA-MO as the partial derivatives of the HV as explained above. Dummy. The simplest regression model learns the weighted median. ptree. The pruned decision tree regression model, minimizing the weighted absolute error. The tree is pruned with the Minimal Cost-Complexity Pruning algorithm <cit.>. The complexity parameter for the pruning is optimized by running a 5-fold CV on its training data. RFReg. The random forest regression, minimizing the weighted absolute error. SVR. The epsilon-support vector regression with Gaussian kernel type <cit.>.
It uses an l2 regularization penalty, with the default parameters C=1 and ϵ=0.1. rSVR. The randomized SVR uses random search with 5-fold CV on its training data to optimize the parameters C and ϵ. We include in the tests the unadjusted NSGA3-CHS, which is equivalent to using regression models that always predict 0 (shortened as “zero” in the following). §.§.§ Limit the computational overhead for adjustments Running the tuning optimizer in a nested k-fold CV in order to generate the samples used for training the regression models imposes a computational overhead with respect to the cost of running the main optimizer without any adjustment to the fitness functions. A high overhead could make the DOSA-MO impractical in real cases. In our experimental validation, which uses GAs as inner MO optimizers, we limited the computational overhead by using in the tuning optimizer a smaller population and fewer generations than in the main optimizer. The simplifying assumption is made for the computational cost of the GAs to be proportional to the number of individual evaluations multiplied by the number of samples used for evaluating the individuals (in training algorithms the cost is typically at least linear in the number of samples, since the algorithm has to at least iterate through them), and for the number of evaluations to be in turn proportional to the population size multiplied by the number of generations (this is in fact an upper bound, considering that individuals equal to previously evaluated ones might not need to be evaluated again). We define a parameter μ as the ratio between the number of individuals evaluated during the tuning phase with respect to the main optimization phase. In order to have a computation time of the DOSA-MO approximately double that required by the unadjusted optimizer, μ is set to 1 in our experiments. In the case of internal-external CV, let η be the number of external folds. We use the same number of folds also for the generation of the training samples for the regression models, so the tuning optimizer is executed that number of times for each execution of the DOSA-MO. Let ρ be the population size for the main optimizer, and ρ' the population size for the tuning optimizer. We compute ρ' with the following equation. ρ'=[ρ√(μ/(η-1))] Where [·] is the round to the nearest integer operation. Let γ be the number of generations for the main optimizer, and γ' the number of generations for the tuning optimizer. Similarly, we compute γ' with the following equation. γ'=[γ√(μ/(η-1))] According to the previous assumptions, the cost of running the main optimizer c is c=ργm, with m being the number of samples. The cost of running all the iterations of the tuning optimizer c' is the following. c'=ηρ'γ'm(η-1)/η=ρ'γ'm(η-1)≈ρ√(μ/(η-1))γ√(μ/(η-1))m(η-1)=ργμm Ignoring the round operations, which have just a small contribution, it is easy to verify the desired ratio. c'≈ργμm=μc So the computational cost of using the adjusted optimizer wrapper, including the main optimizer, is approximately equal to (1+μ)c, or even lower if the cost of evaluating individuals grows more than linearly in the number of samples. The external validation is faster since it does not have an outer k-fold CV. For the external validations, we have arbitrarily set η=5, resulting in samples for regression training being acquired from 5 folds.
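Under these assumptions, the tuning budget can be derived from the main optimizer's parameters with a few lines (a sketch with a hypothetical function name):

import math

def tuning_budget(population, generations, mu=1.0, eta=5):
    # Population size and generation count for the tuning optimizer,
    # so that its total evaluations are about mu times the main run's.
    scale = math.sqrt(mu / (eta - 1))
    return round(population * scale), round(generations * scale)

print(tuning_budget(500, 100))  # (250, 50) with mu=1 and 5 folds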
§.§.§ Computing the fitness functions variance

One of the high-level features of the solutions, used for prediction by the adjustment regression models, is the standard deviation of the original fitness. Collecting the fitnesses of an individual (a feature set in our case study) on the different folds and computing the standard deviation of a performance metric (for example, the balanced accuracy or the concordance index) would have limited precision because of the small number of folds, which is necessarily kept low to achieve an acceptable computational time. On the other hand, increasing the fold count would heighten the computational demands, as the inner models need to be trained for each fold. Additionally, each fold's fitness would be estimated on a smaller left-out sample set. Moreover, performing repeated CV would increase the number of evaluations, but the same test samples would be reused.

Our method to calculate each solution's fitness variances addresses these issues by performing a bootstrap analysis within each fold of the k-fold CV, followed by aggregation of the results across all folds. The following steps describe how one of the fitnesses for one of the individuals is computed (a minimal sketch follows at the end of this subsection).

* For each of the folds there are training and testing samples. The inner model is optimized on the training data. The fitness is computed on the whole testing data; then, by using bootstrap on the testing data, the variance of the fitness is computed.

* The results from the folds are aggregated considering the folds as strata in a stratified bootstrap. The variances in the different folds are assumed uncorrelated and are combined with the equation for the variance of the mean of uncorrelated random variables: the variance of the mean is the sum of the variances divided by the square of the number of folds.

When cross-validating, there are two sources of performance variance: the composition of the training set affects the training process, so it can lead to different predictors and indirectly to different expected performance, while the composition of the test set directly impacts the expected performance <cit.>. It is known that there is no unbiased estimator of the variance of k-fold CV <cit.>. The described procedure accounts for the variance explained by the limited number of test samples, but only partly for the variance explained by the limited number of training samples, since the training samples are reused in different folds (this is a well-known, unavoidable limitation of k-fold CV <cit.>) and the bootstrap is applied to the test sets but not to the training sets, to avoid incurring unfeasible computational costs. Despite these limitations, this estimation of the fitness variance appeared predictive of the overestimation in our experiments, thus justifying its inclusion in the high-level features for the overestimation prediction.
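The per-fold bootstrap and the stratified aggregation described above can be sketched as follows. Here `metric` stands for any fitness function such as the balanced accuracy or the concordance index, and the function names are illustrative rather than taken from our code base.

```python
import numpy as np

def fold_fitness_and_variance(y_true, y_pred, metric, n_boot=200, seed=0):
    # Fitness on the whole test part of one fold, plus its bootstrap variance
    # obtained by resampling the test samples with replacement.
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fitness = metric(y_true, y_pred)
    n = len(y_true)
    boot = [metric(y_true[idx], y_pred[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return fitness, float(np.var(boot))

def aggregate_folds(fold_fitnesses, fold_variances):
    # Folds act as strata: the per-fold variances are assumed uncorrelated,
    # so the variance of the mean fitness is their sum divided by the
    # squared number of folds.
    k = len(fold_fitnesses)
    return float(np.mean(fold_fitnesses)), float(np.sum(fold_variances)) / k**2
```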
§.§ Benchmarking datasets: description and pre-processing

We conducted benchmark studies with two primary goals: firstly, to identify gene expression-based biomarker sets for classifying cancer subtypes, and secondly, to determine similar biomarker sets for survival prediction in kidney and breast cancer patients. Both case studies were addressed by using internal-external CV on TCGA <cit.> data. Additionally, the breast cancer case study includes an external validation with training on TCGA data and testing on SCAN-B data <cit.>. The latter test is particularly important as it demonstrates that the proposed algorithm can also be utilized by training predictive models for both classification and overestimation within a specific cohort (e.g. TCGA) and applying them in external cohorts (e.g. SCAN-B).

TCGA transcriptomic datasets were downloaded with the curatedTCGAData R-package version 2.0.1 from assays of type RNASeq2GeneNorm <cit.>. The retrieved data consists of upper-quartile-normalized TPM values. They were log-transformed by applying log_2(x+1). The external gene expression-based transcriptomic dataset for breast cancer was obtained from the Gene Expression Omnibus (GEO) database (GSE96058), collected from the SCAN-B consortium. It includes FPKM log-transformed gene expression profiles. This data was already log-transformed, and we did not apply our own log-transformation to it. We applied the function max(x,0) to each value x, then we excluded the genes with less than 30% non-zero values (a sketch of these pre-processing steps follows at the end of this subsection).

For our study, we utilized TCGA datasets specific to cancer types. For the breast cancer case study, we used the TCGA-BRCA dataset. Additionally, for kidney cancer, we considered a compendium of TCGA datasets, which includes KICH (Kidney Chromophobe), KIRC (Kidney Renal Clear Cell Carcinoma), and KIRP (Kidney Renal Papillary Cell Carcinoma). Each transcriptomic dataset was filtered by removing genes with zero variance, and gene expression values were standardized before use.

For both TCGA and SCAN-B, cancer patient samples were categorized based on the PAM50 cancer subtype signature, which is used to determine specific molecular subtypes of breast cancer. The subtypes identified by PAM50 that were used for our experiments are Luminal A (LumA), Luminal B (LumB), HER2-enriched (Her2), Basal-like (Basal, often referred to as triple-negative), and normal-like (Normal) cancer. For the task of classifying cancer subtypes in TCGA kidney cancer patients, we focused on distinguishing clear-cell renal cell carcinoma (ccRCC), chromophobe renal cell carcinoma (ChRCC), and papillary renal cell carcinoma, which was further divided into two subtypes based on recent studies identifying distinct clinical categories. These are referred to as PRCC T1 and PRCC T2. Additionally, we included samples of normal non-cancerous tissues. This classification system is based on a study published by Ricketts et al. <cit.>. Overall survival data for TCGA kidney cancer is from Liu et al. <cit.>.

Supplementary Fig. S1 shows the first two principal components (after standardization) and the number of mRNA-based profiles for each kidney cancer subtype. Supplementary Fig. S2 shows the same information for the samples that are used when the prediction of the overall survival is an objective. In this case the normal tissues are not used, because in TCGA they are collected from healthy areas of cancer patients. Additionally, two samples are excluded because they are missing the survival information. Supplementary Fig. S3 shows the distribution of the survival outcomes in time. Supplementary Fig. S4 shows the first two principal components (after standardization) and the number of mRNA-based profiles for each breast cancer subtype in TCGA data. The same information for the SCAN-B dataset is shown in Supplementary Fig. S5.
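The pre-processing described in this subsection can be summarized with the following pandas sketch (rows are samples, columns are genes); the function names are ours, and the thresholds follow the text.

```python
import numpy as np
import pandas as pd

def preprocess_tcga(expr: pd.DataFrame) -> pd.DataFrame:
    # RNASeq2GeneNorm upper-quartile-normalized TPM values: log2(x+1),
    # drop zero-variance genes, then standardize each gene.
    x = np.log2(expr + 1)
    x = x.loc[:, x.var(axis=0) > 0]
    return (x - x.mean(axis=0)) / x.std(axis=0)

def preprocess_scanb(expr: pd.DataFrame) -> pd.DataFrame:
    # GSE96058 profiles are already log-transformed FPKM: clip negative
    # values to 0 (max(x, 0)), keep genes with at least 30% non-zero values,
    # then drop zero-variance genes and standardize.
    x = expr.clip(lower=0)
    x = x.loc[:, (x != 0).mean(axis=0) >= 0.3]
    x = x.loc[:, x.var(axis=0) > 0]
    return (x - x.mean(axis=0)) / x.std(axis=0)
```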
§.§ Experimental setup

Our tests include three datasets: TCGA kidney, used for internal-external CV; TCGA breast, used for internal-external CV and for the training part of the external validation; and SCAN-B, used for the testing part of the external validation. The datasets and their preprocessing are described in datasets.

Our MO problems include three objectives that measure the ability to classify the cancer subtypes, the ability to predict the overall survival, and the feature set size. These objectives are measured in the [0, 1] interval, as per the assumptions of the algorithms. Subtype classification is measured with the balanced accuracy, survival prediction with the c-index, and the feature set size with the novel metric root-leanness. In a previous study <cit.> the feature set size was measured with the leanness. The leanness is a value between 0 and 1, with higher values for solutions with fewer features. It is defined as 1/(n+1), with n being the number of features. The leanness decreases sharply as the feature set size increases; consequently, the CHV is strongly impacted by the accuracy of the smallest biomarkers that use 1 or 2 features. Taking this into account, Cattelani et al. <cit.> provided an additional evaluation of the same solution sets by using a different measure for the impact of the number of features: 1/(√(n)+1). This soft-leanness decreases less sharply as the set size increases. The root-leanness is even smoother than the soft-leanness and is defined as the root of the leanness: √(1/(n+1)) (a small numeric comparison of the three metrics closes this subsection).

The program performs a 3-fold CV repeated 3 times for internal-external CV on the kidney dataset, while, for uniformity with previous works <cit.>, a 5-fold CV on the breast dataset. The MO optimizer used is the DOSA-MO, wrapping NSGA3-CHS as both tuning optimizer and main optimizer (pseudocode). The hyperparameters of NSGA3-CHS used as main algorithm are a population of 500, 500 generations, and 3 folds used for the evaluation of the individuals. The hyperparameters related to the tuning phase of the DOSA-MO are set as explained in overhead. Each test is repeated 6 times: once for each of the 5 regressors described in regressors, and once for the zero regressor (equivalent to the unadjusted NSGA3-CHS).

In this study, for the task of cancer subtype classification, NB and SVM were selected as the inner models, while Cox was employed as the inner model for the task of cancer survival prediction. The SVM uses the radial basis function kernel, balanced class weights, and l2 regularization parameter C=1. The considered experimental setups are listed in exp_setting.

In each k-fold CV execution, samples are stratified based on the included objectives. For balanced accuracy, stratification is by cancer subtype; for overall survival, by survival status and time (binned into high or low for evenly distributed death events). If both these objectives are present, stratification combines both criteria. During each GA call, for classification objectives, features not passing an ANOVA test (p-value 0.05) are removed from the current training samples. For survival objectives, features that fail a Wald test (p-value 0.05) in a Cox regression are discarded. If both objectives apply, features failing both tests are dropped.

For each combination of validation type, dataset, objectives, adjusting regression model, and classification inner model, we calculated the MO performance error and the Pareto delta to assess the difference between expected and actual test sample performance. The CHV was also computed as an overall performance indicator for the solution sets. To the best of our knowledge it is the only proposed generalization of both the HV to CV scenarios and of single-objective CV to MO <cit.>. The CHV takes into account the differences between the performance expected by the optimization algorithm and that measured on the test samples, and preserves the appreciated properties of the HV, in particular the strict monotonicity <cit.>: if a set of solutions is strictly better, the CHV is guaranteed to be higher.
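For reference, the three feature set size metrics discussed above can be compared numerically with a few lines of Python; as the output shows, for growing n the root-leanness decays more gently than the soft-leanness, which in turn decays more gently than the leanness.

```python
import math

def leanness(n):       return 1.0 / (n + 1)             # sharpest penalty
def soft_leanness(n):  return 1.0 / (math.sqrt(n) + 1)  # softer penalty
def root_leanness(n):  return math.sqrt(1.0 / (n + 1))  # smoothest penalty

for n in (1, 2, 10, 100):
    print(n, round(leanness(n), 3), round(soft_leanness(n), 3),
          round(root_leanness(n), 3))
```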
§ RESULTS

We compare 6 different instantiations of the DOSA-MO by varying the regression model used for the adjustment, including the zero regression model, equivalent to not applying any adjustment. The regression models receive as input 3 high-level features of the solutions that are correlated with the overestimation: the original fitness, its standard deviation, and the cardinality of the feature set (number of genes). For an example of the correlations between the high-level features and the overestimation see Supplementary Section 2. We use NSGA3-CHS as the wrapped model. The tests are repeated with 8 different combinations of validation type, datasets, objectives, and classification inner model. For each combination we report two measures for the accuracy of the fitness estimation: the MO performance error (pe) and the Pareto delta (pd). Additionally, we report a measure of the overall performance of the optimizers: the CHV (chv). In solution_sets we show in detail the performance of the best solution sets from the external validation.

§.§ Overestimation in feature selection for biomarker discovery

We measured the error of the performance estimates with two methods: the MO performance error and the Pareto delta, both of which are detailed in the methods section. In summary, the MO performance error quantifies the discrepancy between the HV computed on the train performance and the CHV. On the other hand, the Pareto delta is a metric that captures differences between expected performance and actual performance on new samples, focusing on the more granular level of individual solutions.

§.§.§ Comparative analysis using multi-objective performance error

The MO performance error evaluates the DOSA-MO algorithm's capability to reduce the performance estimation error. The MO performance error quantifies the discrepancy between two key metrics that each summarize a solution set's fitness in a single value: the HV computed from the training performance and the CHV (see methodsPE); a schematic sketch of this comparison for two objectives follows at the end of this subsection. We report this error across various experimental setups. The MO performance error outcomes from the various regression-based adjusters are reported in consolidated_mo_performance_error. Notably, ptree and RFReg often outperform the other overestimation predictors in terms of MO performance error. Therefore, for users prioritizing the reduction of the MO performance error, the ptree and RFReg regressors emerge as particularly effective choices. In comparison, SVR-based estimators of the overfitting appear to be the least effective when considering the MO performance error. Moreover, it is interesting to note that the ptree and RFReg regressors perform particularly well in multi-cohort biomarker discovery and validation scenarios. In these scenarios, an external dataset (SCAN-B) is used to evaluate the models (biomarker sets), with their performance adjusted based on what has been learned from the initial TCGA-BRCA cohort.
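To make the comparison concrete, the following sketch computes the hypervolume of a two-objective non-dominated front (both objectives maximized in [0, 1], reference point at the origin) and the gap between a train-estimated front and the corresponding test front. It is only a schematic stand-in: the actual CHV aggregates cross-validated test performance as detailed in the methods section, and the numbers below are made up for illustration.

```python
def hypervolume_2d(front):
    # Assumes a non-dominated set of (obj1, obj2) pairs, both maximized;
    # dominated points would make this slab decomposition incorrect.
    pts = sorted(front, key=lambda p: -p[0])
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else 0.0
        hv += (x - next_x) * y  # vertical slab dominated up to height y
    return hv

train_front = [(0.9, 0.3), (0.6, 0.7), (0.3, 0.9)]   # fitness expected on train data
test_front = [(0.8, 0.25), (0.5, 0.6), (0.25, 0.8)]  # fitness measured on test data
print(abs(hypervolume_2d(train_front) - hypervolume_2d(test_front)))
```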
§.§.§ Comparative analysis using the Pareto delta

The Pareto delta measure (methodsPD) considers the difference between a model's predicted and actual test performance for each individual solution. This makes it particularly relevant, and more informative than the MO performance error, in the common situation where users are presented with multiple solutions but will ultimately select only one. A comparison of the Pareto delta measure across various experimental setups is detailed in consolidated_pareto_delta. In the DOSA-MO framework, the ptree and RFReg regressors consistently outshine the others in reducing the performance estimation error aggregated on a solution-by-solution basis with the Pareto delta. Interestingly, while no significant differences are noted in the MO performance error between the dummy model and the unadjusted optimizer, the Pareto delta metric reveals that all kidney cancer setups with proper regressors have a lower Pareto delta compared to the unadjusted optimizer. rSVR performs better than the zero regressor in all scenarios except one. RFReg and ptree always yield lower Pareto deltas than the zero regressor, including in the breast cancer setups, highlighting their promise in both overestimation measures.

§.§ Evaluating DOSA-MO's effectiveness in MO feature selection

The previous results highlight DOSA-MO's success in reducing performance estimation errors in ML-driven feature selection for biomarker discovery. This section evaluates whether DOSA-MO's dual-stage framework also enhances the quality of the produced feature sets, that is, the final output for biomarker selection models considered for clinical validation. To assess DOSA-MO's impact on the overall feature selection process, we calculated the CHV for each experimental setup (see consolidated_chv). Tree-based regressors effectively reduce estimation errors but do not significantly improve the quality of the solution sets generated by the MO optimizer. This indicates that using these regressors for overestimation prediction may not always enhance the solution sets, despite reducing quality estimation errors. Interestingly, dummy and SVR-based MO optimizers consistently outperform the CHV of the unadjusted models, highlighting the effectiveness of overestimation prediction in DOSA-MO for superior feature selection outcomes. Specifically, SVR outperforms the zero model in all kidney setups and most breast setups, except in scenarios involving the overall survival objective. Notably, SVR yields optimal results in setups excluding survival prediction, while the dummy model excels in setups that include survival prediction.

§.§ Advancing biomarker discovery using multi-cohort studies

In this section, we examine a case study that utilizes two distinct cohorts, TCGA-BRCA and SCAN-B, for external validation in biomarker discovery. The approach focuses on learning models and overestimation during biomarker identification within one cohort (like TCGA) and then applying this knowledge to another cohort (such as SCAN-B).
This shows the reliability of biomarker discovery across different cohorts. As previously shown, in the external validation for the classification of breast cancer subtypes, the DOSA-MO with SVR as fitness adjustment regression model and SVM as inner classifier provides the best overall performance measured with the CHV, while, keeping the SVM as inner classifier, using the RFReg it is possible to obtain the most accurate prediction of the performance on a new cohort, with respect to both the MO performance error and the Pareto delta. The effect of the adjustment can be appreciated by looking at the solution sets presented in breast_cancer_scatter_plots. The balanced accuracy expected by the optimizer ("internal cv" in the figures) is monotonically increasing with the number of features because the solutions returned by the algorithm are non-dominated. The estimation error, i.e. the vertical distance between the balanced accuracy expected by the optimizer using only the training samples and the balanced accuracy measured on the external dataset, is lower when using the DOSA-MO.

§ DISCUSSION

We presented a new wrapper MO optimizer capable of adjusting the performance measures for the overestimation that is due to multiple model validations, improving the solution set that is produced. It has been validated on multiple transcriptomics configurations related to kidney and breast cancer, including an external validation. Since, to the best of our knowledge, there was no measure of the error in the performance estimation for MO solution sets, we proposed two such measures. According to both of them, the DOSA-MO provided improved performance estimates in all the considered scenarios when a regression model based on decision trees was used to predict the overestimation.

Even the dummy regression model, which learns just the weighted average of the overestimation, when used for adjusting the fitness, achieves a better overall performance than the unadjusted optimizer in 7 out of 8 experimental setups, as measured by the CHV. The SVR regression model also provides a better CHV than the unadjusted optimizer in 7 out of 8 setups. The dummy regression model is the best in the 3 setups that include survival prediction, while the SVR is the best in the 5 setups that have subtype classification and economy of features as objectives. The rSVR regression model selects the hyperparameters for the regularization strength of the SVR using random search. It performed poorly in our tests. A regularization that works well while cross-validating on solutions obtained by the same unadjusted optimizer is not as effective when the GA runs with a bigger population, more generations, and the adjustment applied to the evaluation of the individuals. More conservative regression models, like the dummy, or the SVR with its strong default regularization, have proven more effective in our tests.

Predicting the overestimation is more complex when the adjustment to the fitness is applied during the optimization, as in our approach. The regression models are trained on sample distributions, but then they apply the adjustment results to different solutions, affecting their selection. We cannot expect the regression models to be as precise in areas of the solution space that have not been explored when creating the training set.
This applicability domain issue brings an added difficulty. Adjusting the fitness impacts the exploration of the optimizer, and the explored solutions will have a different distribution from the one on which the regressors were trained, even in the areas included in the applicability domain. This is a moving target problem, often solved in other cases by optimizing in small steps, for example when applying back-propagation to artificial neural networks across multiple epochs. It is not practical to run many iterations of computationally expensive MO GAs, but strategies involving faster algorithms may be explored. Since the solutions can potentially still be used for training the regression models in the following steps, techniques may be borrowed from online learning algorithms. More research is needed to improve how the applicability domain and moving target problems are addressed. The proposed algorithm can be applied to any data and problem, and has been experimentally validated on cancer transcriptomics data for subtype classification and survival prediction. Further experimental validation is needed to assess its performance in other domains.
http://arxiv.org/abs/2312.16624v1
{ "authors": [ "Luca Cattelani", "Vittorio Fortino" ], "categories": [ "q-bio.QM", "cs.LG" ], "primary_category": "q-bio.QM", "published": "20231227161314", "title": "DOSA-MO: Dual-stage Optimizer for Systematic overestimation Adjustment in Multi-Objective problems improves biomarker discovery" }
http://arxiv.org/abs/2312.16301v1
{ "authors": [ "Grigorios Giotopoulos", "Hisham Sati" ], "categories": [ "math-ph", "hep-th", "math.DG", "math.MP" ], "primary_category": "math-ph", "published": "20231226191320", "title": "Field Theory via Higher Geometry I: Smooth Sets of Fields" }
The recent trend of large language models (LLMs) is to increase the scale of both model size (the number of parameters) and dataset to achieve better generative ability, as demonstrated by well-known works such as GPT and LLaMA. However, large models often involve massive computational costs, and practical applications cannot afford such high prices. Meanwhile, the method of constructing a strong model architecture for LLMs is rarely discussed. We first analyze the state-of-the-art language model architectures and observe the feature collapse problem. Based on the theoretical analysis, we propose that nonlinearity is also very important for language models, which is usually studied in convolutional neural networks for vision tasks. The series informed activation function is then introduced with tiny calculations that can be ignored, and an augmented shortcut is further used to enhance the model nonlinearity. We then demonstrate that the proposed approach is significantly effective for enhancing the model nonlinearity through carefully designed ablations; thus, we present a new efficient model architecture for establishing modern LLMs, namely, PanGu-π. Experiments are then conducted using the same dataset and training strategy to compare PanGu-π with state-of-the-art LLMs. The results show that PanGu-π-7B can achieve a comparable performance to that of benchmarks with about 10% inference speed-up, and PanGu-π-1B can achieve state-of-the-art performance in terms of accuracy and efficiency. In addition, we have deployed PanGu-π-7B in the high-value domains of finance and law, developing an LLM named YunShan for practical application. The results show that YunShan can surpass other models with similar scales on benchmarks.

Transformer, Large Language Model, Nonlinearity, Network Architecture, Finance, Law.

PanGu-π: Enhancing Language Model Architectures via Nonlinearity Compensation

Yunhe Wang, Hanting Chen, Yehui Tang, Tianyu Guo, Kai Han, Ying Nie, Xutao Wang, Hailin Hu, Zheyuan Bai, Yun Wang, Fangcheng Liu, Zhicheng Liu, Jianyuan Guo, Sinan Zeng, Yinchen Zhang, Qinghua Xu, Qun Liu, Jun Yao, Chao Xu, and Dacheng Tao, Fellow, IEEE

Corresponding to Yunhe Wang (Huawei Noah's Ark Lab). E-mail: [email protected]. Acknowledgments: This work is jointly funded by Huawei 2012 Labs and Huawei Group Finance. We also thank the work from both the data engineering team and IT architecture team.

§ INTRODUCTION

Large language models (LLMs) have significantly evolved and are capable of a wide range of NLP tasks such as machine translation, text summarization, and dialogue. Following the scaling law <cit.>, a series of studies confirm significantly improved performances and emergent abilities <cit.> on downstream tasks by scaling up the model size and the data size, e.g., GPT-3 with 175B parameters <cit.> and PaLM with 540B parameters <cit.>.
Recently, a remarkable innovation, ChatGPT, was introduced, with the ability to interact with humans in a conversational way. The success of ChatGPT is attributed to pre-training on extensive textual data and fine-tuning with alignment to human preferences. This paradigm has a profound impact on subsequent research and real-world applications <cit.>. The emergence of ChatGPT has inspired the community to develop more excellent large models, including LLaMA <cit.>, ChatGLM <cit.>, and Baichuan <cit.>, which in turn drive the vigorous development of the LLM field.

In addition to general-purpose LLMs, high-value domain-specific large models are also being extensively researched to promote the implementation and practical application of LLMs. For example, LaWGPT <cit.> enhances the fundamental semantic understanding in the field of law with a specialized legal vocabulary and extensive pre-training on a large-scale Chinese legal corpus. FinGPT <cit.> develops an open-source LLM in a data-centric approach. Huatuo <cit.> builds a Chinese medical instruction fine-tuning dataset and enhances its performance in question-answering within the medical field. As shown in Figure <ref>, our analysis of the industry distribution of domain-specialized LLMs reveals that those in the finance and law domains attract the most attention due to their widespread demand and commercial value.

The Transformer model introduced in 2017 <cit.> is the foundational architecture for many LLMs. The core components of the Transformer model include the multi-head self-attention (MSA) and the feed-forward network (FFN). MSA computes attention scores for each token in the input sequence with respect to all other tokens, capturing relationships and dependencies. The FFN is performed on each token separately, which provides more nonlinear transformation and model capacity. Transformer architectures are utilized to build encoders (e.g., BERT) and decoders (e.g., GPT-2) for NLP tasks. In LLMs, the decoder-only architecture is widely used, predicting the next token from the context information <cit.>. Beyond the standard Transformer architecture, a series of studies have explored architecture modifications (especially of the MSA <cit.> or FFN <cit.>) seeking better performance. PaLM <cit.> and LLaMA <cit.> use the SwiGLU-based FFN <cit.>, which consists of the component-wise product of two linear layers, showing significantly increased generation quality. RWKV (Receptance Weighted Key Value) <cit.> proposes an RNN-style attention mechanism to alleviate the quadratic complexity in standard MSA. Switch Transformer <cit.> allocates different parameters for each input example, resulting in a sparsely-activated model.

The development of an excellent LLM is a complex systems engineering effort, which includes data preparation, data cleaning, model architecture, cluster communication, and optimizers. The model architecture design is one of the most important components and determines the maximum performance potential of the deployed LLM. Among the recent projects in 2022-2023, the popular versions that are often used for secondary development are GPT-3 <cit.> and LLaMA <cit.>. By inheriting the observations and analysis of our previous work <cit.>, we find that the feature collapse problem also affects the expressive power of these well-designed Transformer architectures. Taking LLaMA as an example, we empirically analyze its feature collapse phenomenon using the rank metric <cit.>.
The feature rank diminishes significantly in deeper layers, leading to a greater similarity among all tokens. This greatly degrades the generation quality and diversity of LLMs. We also theoretically analyze the feature collapse problem in the Transformer architecture (details in Section <ref>). Through theoretical analysis, we have discovered that nonlinearity significantly impacts the capabilities of the Transformer model. Enhancing nonlinearity can effectively mitigate the issue of feature collapse and improve the expressive power of the Transformer model. We intend to construct stronger LLM architectures by approaching them from a nonlinear perspective.

In this paper, we introduce a new architecture for LLMs to address the feature collapse problem via nonlinearity compensation, named PanGu-π. We introduce more nonlinearity from two approaches in both the FFN and MSA modules without significantly increasing the model complexity. First, a series-based activation function with multiple learnable affine transformations is equipped in the FFN, which can effectively enhance the nonlinearity of the entire network with negligible calculations. Then, the augmented shortcut is paralleled with the main branch of each MSA module to eschew rank collapse. To maintain the model efficiency, we carefully refine the augmented shortcut operation with hardware-friendly operations. The enhanced PanGu-π architectures (see Figure <ref>) are constructed with both the series activation-based FFN and the shortcut-augmented MSA. We also prove that the superposition of these two operations can enhance nonlinear compensation. We build two versions of PanGu-π with different model sizes, i.e., PanGu-π-7B and PanGu-π-1B. By training on a large-scale corpus, our PanGu-π models obtain general language ability on downstream tasks. Through carefully designed ablations, we demonstrate that the proposed approach can effectively enhance the model nonlinearity and alleviate feature collapse. Thus, with the same scale of parameters, we can achieve a substantial efficiency gain via the two new modules. Extensive experiments on various NLP tasks are conducted to compare with state-of-the-art LLMs. In a scenario with similar model size, PanGu-π models can achieve better performance in terms of both accuracy and efficiency. In addition to the foundational abilities, we have deployed PanGu-π-7B in the high-value domains of finance and law, developing a specialized LLM named YunShan for practical application. Extensive evaluations on finance and law benchmarks also show that YunShan surpasses other state-of-the-art models with similar scales.

This work introduces a new LLM network architecture (i.e., PanGu-π) with extensive experiments and theoretical analysis, with some of the ideas borrowed from our preliminary works published at NeurIPS 2023 <cit.> and NeurIPS 2021 <cit.>. The present work makes the following significant contributions. First, the previous two papers, one involving the construction of a CNN backbone <cit.> and the other focusing on vision Transformers <cit.>, have laid a foundation that this paper seeks to extend within LLMs. We achieved commendable results in this endeavor, and extensive experiments have validated the effectiveness of our methods. Second, the previous two papers analyzed the network design subject from distinct perspectives. In our current study, the two works are theoretically integrated and essentially address the same critical issue from a unified standpoint in the LLM domain.
Third, we organically adapted the series activation to the FFN and integrated the augmented shortcuts into the MSA. These two components are complementary to each other and effectively introduce more nonlinearity into the Transformer architecture. Fourth, we developed the PanGu-π foundation models by large-scale training and supervised fine-tuning (SFT), which achieved state-of-the-art results in general NLP tasks for a similar model size. In addition, we extend PanGu-π to the finance and legal domains by transfer learning and obtain excellent performance on these downstream tasks.

The rest of this paper is organized as follows. Section 2 reviews related work in the field of Transformer architectures for building LLMs and the related hotspot applications. Section 3 provides a theoretical analysis of the feature collapse problem and the nonlinear capabilities of existing Transformer architectures. Section 4 introduces a nonlinear enhancement strategy based on the series activation function and the augmented shortcut. Section 5 details the data, training strategies, and experimental results of the PanGu-π architecture with two models of significant parameter scales, i.e., PanGu-π-7B and PanGu-π-1B. In Section 6, the PanGu-π architecture is deployed in the high-value domains of finance and law, developing the YunShan LLM for practical application. Extensive evaluations on finance and law benchmarks also show that YunShan surpasses other state-of-the-art models with similar scales. Section 7 concludes the entire paper and discusses future work.

§ RELATED WORKS

In this section, we first summarize the recent representative works in the field of LLMs. We then review the classical works for enhancing Transformer architectures. Lastly, we investigate the domain-specific large language models, especially in finance and law.

§.§ LLMs

With the emergence of ChatGPT <cit.> from OpenAI, LLMs with billions of parameters achieved astounding performance on various natural language processing tasks. Subsequently, the latest GPT-4 <cit.> pushed the generalization capabilities of LLMs to a new level. However, the proliferation of GPT-series models is accompanied by a strict commercial orientation that is not conducive to a thriving open-source community. The representative work among democratized LLMs is LLaMA <cit.>, a collection of open-source foundation language models ranging from 7B to 65B parameters. Later, a more elaborate model, LLaMA2 <cit.>, was introduced, appearing to be on par with some closed-source models <cit.> based on human evaluations. Since its release, LLaMA has attracted extensive attention from academia and industry. Subsequent efforts have been based on LLaMA, by either instruction tuning or continual pre-training. Stanford Alpaca <cit.> is the first LLaMA-based chatbot fine-tuned with 52K instruction-following samples generated by the self-instruct method <cit.>. Vicuna <cit.> also fine-tunes LLaMA with user-shared conversations collected from ShareGPT <cit.>. In addition to the language models geared toward English-dominant applications, multilingual language models are also thriving. InternLM <cit.> presents a multilingual foundational language model pre-trained on a large corpus with 1.6T tokens with a multi-phase progressive process. Baichuan2 <cit.> introduces a series of large-scale multilingual language models containing 7 billion and 13 billion parameters. PanGu-Σ <cit.> extends the dense Transformer model to a sparse one with Random Routed Experts.
Qwen <cit.> introduces a comprehensive language model series that encompasses distinct models with varying parameter counts. Skywork <cit.> presents 13B LLMs trained on a corpus drawn from both English and Chinese texts with a two-stage training methodology.

§.§ Enhanced Transformer Architectures

While Transformer architectures have gained significant prominence in LLMs recently, there continues to be a surge of interest in their effective utilization in diverse domains, including computer vision tasks. In light of this, we review classical works dedicated to enhancing Transformer structures, with a particular focus on augmenting model nonlinearity and efficiency.

Natural language processing domain. The conventional self-attention mechanism, with quadratic computational complexity, poses challenges for handling long input sequences during training and inference. To mitigate this, various structural priors on attention, including sparsity <cit.> and linear attention <cit.>, have been proposed. Notably, Reformer <cit.> employs locality-sensitive hashing to approximate full attention. Longformer <cit.> integrates local windowed attention with task-motivated global attention. Models such as GPT-3 <cit.> incorporate locally banded sparse attention methods, such as Factorized Attention <cit.>. There are also works focusing on replacing the attention module by incorporating recurrent models <cit.>. Hyena <cit.> trained a recurrence of gating units and implicitly parametrized long convolutions, which serves as an attention-free drop-in replacement for the traditional Transformer architecture. RWKV <cit.> replaced the quadratic QK attention with a scalar formulation that has linear cost. RetNet <cit.> theoretically derived the connection between recurrence and attention and proposed the retention mechanism for sequence modeling. There are also efficient enhancements focused on the feed-forward network (FFN). Mixture-of-Experts (MoE) <cit.> has demonstrated effectiveness in the pre-training of LLMs. In addition to MoE, PaLM <cit.> and LLaMA <cit.> leverage the SwiGLU activation for the original FFN intermediate activations. This choice is grounded in the observation that SwiGLU activations, as demonstrated in compute-equivalent experiments <cit.>, substantially enhance quality compared to standard activation functions like ReLU, GeLU, or Swish.

Computer vision domain. PVT <cit.> and Swin <cit.> utilize hierarchical structures across multiple stages, overcoming challenges posed by the original isotropic ViT <cit.> for diverse computer vision tasks. Ongoing research focuses on refining local information processing <cit.>, simplifying attention mechanisms <cit.>, and exploring alternative modules <cit.>. For example, T2T-ViT <cit.> reduces token length through iterative aggregation, whereas TNT <cit.> captures local information by dividing ViT's patches. Swin <cit.> and CSwin <cit.> introduce local attention within a window and shifted window partitioning for cross-window connections. GFNet <cit.> employs the Fast Fourier Transform for token mixing. Architectures like ResMLP <cit.> and MLP-Mixer <cit.> rely solely on multi-layer perceptrons (MLPs), excluding convolutions and self-attention mechanisms.
§.§ LLMs for Finance and Law

In addition to the LLMs towards general purposes, domain-specific models that are more capable of generating applied value are receiving increasing attention, with finance and law being the most representative.

Financial LLMs. Wu et al. propose the first proprietary LLM with 50 billion parameters specialized for the financial domain, BloombergGPT <cit.>, which is a decoder-only causal language model based on BLOOM <cit.>. The proposed training strategy of mixing domain-specific and general-purpose data results in a balanced performance in both domains. Unlike the proprietary BloombergGPT, FinGPT <cit.> takes a data-centric approach and presents an open-source LLM to researchers and practitioners. It exhibits promise in financial tasks such as sentiment classification, quantitative trading, and financial fraud detection. PIXIU <cit.> has created a large-scale multi-task instruction dataset by manually reworking open-sourced datasets <cit.>. A financial LLM called FinMA is then introduced by fine-tuning LLaMA with the constructed instruction dataset. The comprehensive evaluation results, including financial NLP and prediction tasks, uncover the strengths and weaknesses of various LLMs when handling different financial tasks. To address the lack of open-sourced models specifically designed for Chinese finance, Zhang et al. introduce XUANYUAN 2.0 <cit.>, built upon the BLOOM <cit.> architecture. To mitigate catastrophic forgetting, a hybrid-tuning strategy that combines the stages of pre-training and fine-tuning is proposed. By appropriately mixing the general and financial corpora in pre-training and fine-tuning, XUANYUAN 2.0 achieves impressive performance in both the general domain and the financial domain. Chen et al. propose a financial LLM, DISC-FinLLM <cit.>, using a multi-expert fine-tuning framework based on Baichuan-13B <cit.>. Experimental results on multiple evaluation benchmarks demonstrate its promising performance.

Legal LLMs. The legal sector is another area that is significantly benefiting from the advancement of LLMs. BaoLuo and Lychee <cit.> are lawyer assistants developed by fine-tuning on Chinese legal domain QA datasets. AI Lawyer <cit.> applies Active Learning to alleviate the problem of limited supervised data volume in the legal domain. FedJudge <cit.> focuses on the privacy of legal data and adopts Federated Learning <cit.> during instruction tuning; it also utilizes Continual Learning <cit.> to mitigate the issue of data distribution shifts. HanFei, LaWGPT, Lawyer-llama, WisdomInterrogatory, and Fuzi.Mingcha <cit.> undergo a two-phase training process: further pre-training with an unsupervised legal corpus to enhance the semantic understanding ability in the legal field, and then supervised training with corresponding datasets. HanFei <cit.> is the first legal LLM in China fully trained with 7B parameters and supports multi-turn dialogue, LaWGPT <cit.> expands the vocabulary by adding specific legal domain terms, and Lawyer-llama <cit.> has experimented with different data permutations and training sequences during its instruction tuning phase. During inference time, LawGPT_zh(XieZhi), LexiLaw, ChatLaw, Lawyer-llama, Fuzi.Mingcha and DISC-LawLLM <cit.> introduce a retrieval module to ensure that a definite legal document in the knowledge base supports the response. Additionally, ChatLaw <cit.> also involves a keyword model for key information extraction to reduce the ambiguity of user queries.
Fuzi.Mingcha <cit.> enables the LLM to use syllogistic reasoning to arrive at verdict predictions by training the LLM on a self-constructed dataset.

§ PRELIMINARIES AND MOTIVATION

In this section, we commence by dissecting the foundational architecture of the Transformer model, introducing a metric of nonlinearity to articulate its capabilities. Subsequently, we delve into an analysis of the Transformer architecture's components, the multi-head self-attention and the multi-layer perceptron modules, scrutinizing their nonlinear expressive capability. This exploration also brings to light the limitations inherent in the current incarnations of Transformer architectures.

Owing to its suitability for parallel computing and the inherent complexity of its model, the Transformer has demonstrated superior precision and performance compared to the widely adopted recurrent neural network (RNN). The Transformer architecture consists of two parts: the multi-head self-attention and the multi-layer perceptron modules.

The multi-head attention module is a fundamental component of the Transformer architecture. An MSA module with H heads is defined as

MSA(Z_l) = Concat([A_lh Z_l W^v_lh]_h=1^H) W^o_l, l ∈ [1,2,⋯,L],

where Z_l ∈ ℝ^N×d is the feature of the l-th layer, A_lh ∈ ℝ^N×N and W^v_lh ∈ ℝ^d×(d/H) are the corresponding attention map and value projection matrix in the h-th head, respectively. Concat(·) denotes the concatenation of the features of the H heads, and W^o_l ∈ ℝ^d×d is the output projection matrix. The attention matrix A_lh is calculated by the self-attention mechanism, i.e.,

A_lh = softmax((Z_l W^q_lh)(Z_l W^k_lh)^⊤ / √(d)),

where W^q_lh ∈ ℝ^d×(d/H) and W^k_lh ∈ ℝ^d×(d/H) are the query and key projection matrices, respectively. The attention A_lh reflects the relation between different tokens, and a larger value A_lh^ij indicates that token i and token j have a stronger relationship.

An MLP module is defined as

MLP(Z'_l) = σ(Z'_l W'_l_1) W'_l_2, l ∈ [1,2,⋯,L],

where Z'_l ∈ ℝ^N×d is the feature of the l-th layer, and W'_l_1 and W'_l_2 ∈ ℝ^d×d are the weight matrices.

One of the paramount capabilities of neural networks is their nonlinear expressive capability. The higher the degree of nonlinearity, the more complex the function space the network can approximate, resulting in enhanced accuracy. In traditional Convolutional Neural Networks (CNNs), nonlinearity is primarily imparted by activation functions. However, in Transformer architectures, the sources of nonlinearity are twofold: the self-attention mechanisms and the activation functions within the multi-layer perceptrons (MLP). Hence, our analysis separately scrutinizes the nonlinear expressive capabilities of both the self-attention and the MLP within the Transformer framework.

Define ℳ_m := {Y ∈ ℝ^N×m | Y = 1x^⊤, x^⊤ ∈ ℝ^1×m} as a subspace of ℝ^N×m, where 1 = [1, 1, …, 1]^⊤ ∈ ℝ^N×1, N is the number of tokens and d is the dimension of the token representation. We define the distance between a matrix H ∈ ℝ^N×m and ℳ_m as d_ℳ_m(H) := min_Y∈ℳ_m ‖H−Y‖_F, where ‖·‖_F is the Frobenius norm. d_ℳ_m(Z_l) <cit.> is a commonly used metric to measure the capability and nonlinearity of the Transformer architecture. Next, we investigate the distance between Z_l, the output of layer l, and the subspace ℳ_d.
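Before turning to the formal analysis, the decay of this diversity metric under pure attention can be observed numerically. The following toy numpy sketch (our own illustration, with random weights and a single shared attention layer in place of the full MSA of the equations above) computes d_ℳ_d(Z) as the Frobenius distance of Z from the rank-1 matrix that repeats its mean token, and shows how it shrinks layer after layer.

```python
import numpy as np

def diversity(Z):
    # d_M(Z): distance to the closest matrix 1 x^T, attained at the mean token.
    return np.linalg.norm(Z - Z.mean(axis=0, keepdims=True))

rng = np.random.default_rng(0)
N, d = 16, 32
Z = rng.normal(size=(N, d))
Wq = rng.normal(size=(d, d)) / np.sqrt(d)
Wk = rng.normal(size=(d, d)) / np.sqrt(d)

for layer in range(8):
    logits = (Z @ Wq) @ (Z @ Wk).T / np.sqrt(d)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)            # row-stochastic attention map
    Z = A @ Z                                    # pure attention, no shortcut
    print(layer, round(diversity(Z), 4))         # diversity decays toward 0
```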
We begin by examining the nonlinear expressive capabilities of the self-attention modules. The self-attention matrix within the Transformer can be intuitively likened to the normalized adjacency matrix of a corresponding graph. Viewed through a graph perspective, the self-attention layer can be seen as equivalent to a Graph Neural Network (GNN) operating on a fully connected graph with normalized edge weights. Excessive self-attention layers, like excessive GNN layers, result in over-smoothing, with the node vector representations tending to be the same and converging to a specific low-rank subspace for any given input.

The following theoretical analysis utilizes the formula of the self-attention layer to shed light on the intricacies of this phenomenon. We study how the self-attention layer converges to a low rank based on matrix projection. Our analysis involves the definition of a subspace ℳ characterized by a unique property in which each row vector of its elements is the same. By scrutinizing the behavior of the self-attention layer in relation to matrix projection, we aim to unravel the underlying mechanisms that drive the convergence of node vector representations toward a specific low-rank subspace. This exploration is essential for gaining deeper insights into the nuances of self-attention in Transformers, shedding light on the intricacies of their functioning and providing a theoretical foundation for potential enhancements and optimizations.

For a self-attention matrix A ∈ ℝ^N×N, any weight matrix W ∈ ℝ^d×m, any H, B ∈ ℝ^N×d, α_1, α_2 ≥ 0, and a nonlinear Lipschitz-continuous activation function σ, we have:

d_ℳ_m(HW) ≤ s d_ℳ_d(H),
d_ℳ_d(σ(H)) ≤ L d_ℳ_d(H),
d_ℳ_d(α_1 H + α_2 B) ≤ α_1 d_ℳ_d(H) + α_2 d_ℳ_d(B),
d_ℳ_d(AH) ≤ √(λ_max) d_ℳ_d(H),

where s is the largest singular value of W, λ_max is the largest eigenvalue of A^⊤(I−ee^⊤)A, and L is the Lipschitz constant of the activation function σ(·).

Applying Lemma <ref> to the single-head self-attention layer, we can obtain its low-rank attenuation.

Given a model stacked by MSA modules, the diversity d_ℳ_d(Z_l) of the feature in the l-th layer can be bounded by that of the input data Z_0, i.e.,

d_ℳ_m(AZ_lW) ≤ √(λ) sυ_1 d_ℳ_d(Z_l),

where s>0 is the largest element of all singular values of all W and λ is the largest eigenvalue of all A^⊤(I−ee^⊤)A for each self-attention matrix A.

For the low-rank matrix projection of concatenated matrices, we have the following lemma:

For block matrices H_h ∈ ℝ^N×m, we have:

d_ℳ_Hm(Concat([H_h]_h=1^H))^2 = ∑_h=1^H d_ℳ_m(H_h)^2.

Applying Theorem <ref> and Lemma <ref> to Eq. <ref>, we can see how the multi-head self-attention layer decays layer by layer into the low-rank space.

Given a model stacked by MSA modules, the diversity d_ℳ_d(Z_l) of the feature in the l-th layer can be bounded by that of the input data Z_0, i.e.,

d_ℳ_d(MSA(Z_l)) ≤ √(λH) sυ_1 d_ℳ_d(Z_l),
d_ℳ_d(Z_l) ≤ (√(λH) sυ_1)^l d_ℳ_d(Z_0),

where H is the number of heads, s>0 is the largest element of all singular values of all W^v_lh, and υ_1 is the largest element of all singular values of all W^o_l.

Assume further that A is doubly stochastic (so that A^⊤e = e) with positive entries. Then, by the Perron–Frobenius theorem, A^⊤A has a maximum eigenvalue 1 with associated eigenvector e as well. In this case, the matrix A^⊤(I−ee^⊤)A = A^⊤A − ee^⊤ has a maximum eigenvalue λ_max < 1.

√(λH) sυ_1 is usually smaller than 1, so the feature diversity d_ℳ_d(Z_l) decreases rapidly as the network depth increases. Recursively, Z_l will converge toward the subspace ℳ_d if √(λH) sυ_1 < 1, and all representations become the same, resulting in over-smoothing. In conclusion, the nonlinear expressive capability of the vanilla self-attention module is limited. We then focus on the multi-layer perceptrons (MLP) of the Transformer architecture.
Given a model stacked by MLP modules, the diversity d_ℳ_d(Z'_l) of the feature in the l-th layer can be bounded by that of the input data Z'_0, i.e.,

d_ℳ_d(MLP(Z'_l)) ≤ Lsυ_2 d_ℳ_d(Z'_l),
d_ℳ_d(Z'_l) ≤ (Lsυ_2)^l d_ℳ_d(Z'_0),

where s>0 is the largest element of all singular values of all W'_l_1, υ_2 is the largest element of all singular values of all W'_l_2, and L is the Lipschitz constant of the activation function σ(·).

The analysis from the prior proofs reveals that the diversity of MLP modules is constituted by two elements: the eigenvalues of the parameters and the Lipschitz constant of the activation functions. In neural networks, parameters are typically normalized, which means the maximum eigenvalues of these parameters are bounded. Furthermore, the parameters in a neural network are learned through backpropagation. Given these factors, it becomes challenging to impose restrictions on the eigenvalues of the parameters. Consequently, the activation functions in the MLP emerge as the most crucial aspect of its nonlinear expressive capability.

§ PANGU-Π MODULES AND ARCHITECTURES

In this section, we first propose the series informed activation function to enhance the nonlinearity of the MLP module. Then, we introduce the augmented shortcuts to improve the MSA modules in the Transformer architecture. Finally, we prove that the combination of the two techniques results in a novel and stronger Transformer model.

§.§ Augmented Shortcut

As discussed in Section <ref>, a pure attention network suffers from a severe feature collapse problem. The typical LLM architecture only equips each MSA module with a single shortcut connection, which is an identity projection and directly copies the input features to the outputs. This simple formulation may not have enough representation capacity to maximally improve the feature diversity. We aim to refine the existing shortcut connections in Transformers and explore efficient but powerful augmented shortcuts to produce features with higher diversity.

We propose augmented shortcuts to alleviate the feature collapse problem by paralleling the original identity shortcut with more parameterized projections. The MSA module equipped with T augmented shortcuts can be formulated as:

AugMSA(Z_l) = MSA(Z_l) + Z_l + ∑_i=1^T 𝒯_li(Z_l; Θ_li), l ∈ [1,2,⋯,L],

where 𝒯_li(·) is the i-th augmented shortcut connection of the l-th layer and Θ_li denotes its parameters. In addition to the original shortcut, the augmented shortcuts provide more alternative paths to bypass the attention mechanism. Different from the identity projection, which directly copies the input tokens to the corresponding outputs, the parameterized projection 𝒯_li(·) transforms the input features into another feature space. The projections 𝒯_li(·) will apply different transformations to the input feature as long as their weight matrices Θ_li are different, and thus paralleling more augmented shortcuts has the potential to enrich the feature space.

A simple formulation for 𝒯_li(·) is the sequence of a linear projection and an activation function,

𝒯_li(Z_l; Θ_li) = σ(Z_l Θ_li), l ∈ [1,⋯,L], i ∈ [1,2,⋯,T],

where Θ_li ∈ ℝ^d×d is the weight matrix and σ is the nonlinear activation function (e.g., GELU). In Eq. <ref>, 𝒯_li(·) tackles each token independently and preserves their specificity, which is complementary to the MSA modules aggregating different tokens. Note that the identity mapping is a special case of Eq. <ref>, i.e., σ(x)=x with Θ_li being the identity matrix.
Without shortcut connections, the upper bound of the feature diversity d_ℳ_d(Z_l) decreases dramatically as the network depth increases. We now analyze how the diversity d_ℳ_d(Z_l) changes with the layer l in a model stacked by AugMSA modules. We have the following theorem.

Given a model stacked by AugMSA modules, the diversity d_ℳ_d(Z_l) of the feature in the l-th layer can be bounded by that of the input data Z_0, i.e.,

d_ℳ_d(AugMSA(Z_l)) ≤ (√(λH) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2) d_ℳ_d(Z_l),
d_ℳ_d(Z_l) ≤ (√(λH) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2)^l d_ℳ_d(Z_0),

where H is the number of heads, s>0 is the largest element of all singular values of all W_l, and ‖·‖_2 is the ℓ_2 norm of the matrix.

Since α_i = (√(λH) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2) > 1, this allows us to prevent feature collapse. Compared with Theorem <ref>, the augmented shortcuts introduce an extra term (√(λH) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2)^l, which increases exponentially. This tends to suppress the diversity decay incurred by the attention mechanism. The term α_i (0 ≤ i ≤ l) is determined by the norms of the weight matrices Θ_li of the augmented shortcuts in the i-th layer, and the bound of the diversity d_ℳ_d(Z_l) in the l-th layer can be affected by all the augmented shortcuts in the previous layers. For the ShortcutMSA module with only an identity shortcut, we have α_i = √(λH) sυ_1 + 1. Adding more augmented shortcuts can increase the magnitude of α_i, which further improves the bound.

As discussed above, paralleling multiple augmented shortcuts with the MSA and MLP modules in a Transformer can improve the feature diversity to achieve higher performance. However, directly implementing 𝒯_li(·) (Eq. <ref>) involves a lot of matrix multiplications that are computationally expensive. For example, given a feature Z_l ∈ ℝ^N×d and a weight matrix Θ_li ∈ ℝ^d×d, the matrix multiplication Z_lΘ_li consumes Nd^2 FLOPs, where d is usually large in LLMs (e.g., 4096 in LLaMA-7B). In <cit.>, the augmented shortcut is implemented with a block-circulant matrix, which realizes fast inference with the fast Fourier transform (FFT). Even though it achieves high theoretical acceleration, we empirically find that its practical speed depends on the hardware optimization. Considering that an LLM is a universal model, we propose to implement the augmented shortcut with a simpler bottleneck module. The module is constructed by stacking two FC layers with a nonlinear activation function (e.g., GeLU) in between. The first FC layer reduces the d-dimension feature into a low-dimension space by a reduction ratio r, and the second FC layer restores the original feature dimension. Then the computational cost is reduced to 2Nd^2/r FLOPs. A larger r implies a further decrease in the computational cost. For example, when the reduction ratio r is set to 32, the computational cost can be reduced by 16× compared to the original augmented shortcut (Eq. <ref>).
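A possible PyTorch rendition of an MSA module with T bottleneck augmented shortcuts is sketched below. It uses torch's built-in multi-head attention as a stand-in for the MSA of Eq. <ref>, and the class and argument names are our own; only the structure (identity path, attention path, and T down-project/GELU/up-project branches with reduction ratio r) follows the description above.

```python
import torch
import torch.nn as nn

class BottleneckShortcut(nn.Module):
    # One augmented shortcut T_li as a bottleneck: d -> d/r -> d, costing
    # about 2*N*d^2/r FLOPs instead of N*d^2 for a full d x d projection.
    def __init__(self, d: int, r: int = 32):
        super().__init__()
        self.down = nn.Linear(d, d // r, bias=False)
        self.act = nn.GELU()
        self.up = nn.Linear(d // r, d, bias=False)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(z)))

class AugMSA(nn.Module):
    # MSA(Z) + Z + sum_i T_li(Z): attention path, identity path, and T
    # parameterized augmented shortcuts applied token-wise in parallel.
    def __init__(self, d: int, num_heads: int, t: int = 1, r: int = 32):
        super().__init__()
        self.msa = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.shortcuts = nn.ModuleList(
            [BottleneckShortcut(d, r) for _ in range(t)])

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.msa(z, z, z)
        return attn_out + z + sum(s(z) for s in self.shortcuts)
```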
Obviously, one could instead use a shortcut with a heavily weighted linear branch, such as AugMSA(Z_l) = MSA(Z_l) + αZ_l(∑_i=1^T Θ_li) (where α > 0), to prevent feature collapse, but this reduces network performance. We theoretically attribute this to the excessive noise that such a branch adds to the feature diversity. The effect of noise on the feature diversity is usually a false positive. For example, if H=0, then d_ℳ_d(H)=0; however, when the input matrix is perturbed by a zero-average noise ϵ, then d_ℳ_d(H+ϵ) = ‖ϵ‖_F > d_ℳ_d(H) = 0. This requires us to improve the feature diversity while minimizing the impact of noise diversity on the network. This means reducing the value of |d_ℳ_d(AugMSA(Z_l+ϵ)) − d_ℳ_d(AugMSA(Z_l))| while ensuring that d_ℳ_d(Z_l) is not attenuated.

The following describes the definition of the feature diversity of noise. We consider the effect of noise on the matrix projection d_ℳ_d(H):

|d_ℳ_d(H+ϵ) − d_ℳ_d(H)| ≤ ‖ϵ − 1(x^H+ϵ_min − x^H_min)^⊤‖_F ≤ d_ℳ_d(ϵ) = ‖(I−ee^⊤)ϵ‖_F = ‖ϵ − 1(x^ϵ_min)^⊤‖_F ≤ ‖ϵ‖_F.

For zero-average noise, ee^⊤ϵ = 0 ∈ ℝ^N×d holds, so the above inequality becomes an equality. Since |d_ℳ_d(f(H+ϵ)) − d_ℳ_d(f(H))| ≤ d_ℳ_d(f(H+ϵ) − f(H)), we define d_ℳ_d(f(H+ϵ) − f(H)) to represent the diversity effect of the noise ϵ on the function f whose input is H. The smaller the value, the higher the robustness of the function f. For the sake of simplicity, and considering typical scenarios, the following discussion assumes that the input noise ϵ is zero-average.

We consider the impact of noise on the MSA module when H=1. For a slight perturbation ϵ of the input, the self-attention matrix also produces a perturbation A_ϵ = A+δ, i.e.,

d_ℳ_d(MSA(Z_l+ϵ) − MSA(Z_l)) ≤ √(λ_A+δ) sυ_1 ‖ϵ‖_F + √(λ_δ) sυ_1 d_ℳ_d(Z_l),

where λ_A+δ is the largest eigenvalue of A_ϵ^⊤(I−ee^⊤)A_ϵ and λ_δ is the largest eigenvalue of δ^⊤(I−ee^⊤)δ; usually λ_A+δ < 1 and λ_δ < 1. For the H-head MSA module, we can get the following formula:

d_ℳ_d(MSA(Z_l+ϵ) − MSA(Z_l)) ≤ √(λ_A+δ H) sυ_1 ‖ϵ‖_F + √(λ_δ H) sυ_1 d_ℳ_d(Z_l).

We consider the noise diversity of the linear parallel branch:

d_ℳ_d(L(Z_l+ϵ)Θ_li − LZ_lΘ_li) ≤ L‖Θ_li‖_2‖ϵ‖_F.

If and only if ee^⊤(σ(Z_l+ϵ) − σ(Z_l)) = 0, the following inequality holds with equality:

d_ℳ_d(𝒯_li(Z_l+ϵ; Θ_li) − 𝒯_li(Z_l; Θ_li)) ≤ L‖Θ_li‖_2‖ϵ‖_F.

For a nonlinear activation function, σ(Z_l+ϵ) − σ(Z_l) is no longer guaranteed to be zero-average. Therefore, the noise diversity of the nonlinear branch is weaker than that of the linear branch:

d_ℳ_d(𝒯_li(Z_l+ϵ; Θ_li) − 𝒯_li(Z_l; Θ_li)) < L‖Θ_li‖_2‖ϵ‖_F.

Given a model stacked by AugMSA modules, the noise diversity of the feature in the l-th layer can be bounded by the following formula, i.e.,

d_ℳ_d(AugMSA(Z_l+ϵ) − AugMSA(Z_l)) < (1 + √(λ_A+δ H) sυ_1 + L∑_i=1^T ‖Θ_li‖_2)‖ϵ‖_F + √(λ_δ H) sυ_1 d_ℳ_d(Z_l).

This indicates that using a number of nonlinear shortcuts instead of the scaled linear branch LZ_l(∑_i=1^T Θ_li) prevents feature collapse, reduces the impact of input noise on the feature diversity, and enhances the robustness of the network. In addition, it also enhances the nonlinear expression ability.

§.§ Series Informed Activation Function

A neural network N_d composed of d hidden layers can be regarded as a composite of d functions f_i: N_d = f_1 ∘ f_2 ∘ ⋯ ∘ f_d. In particular, each hidden layer function f_i can be written as the composite of a function g_i and an activation function σ_i: f_i = σ_i ∘ g_i. Actually, the learning procedure of f_i amounts to an optimization problem over the layer hypothesis space H_i. Usually, σ_i is taken as a fixed, non-learnable function; therefore, in the most common scenario, H_i = {σ_i} × H_g_i. g_i is parameterized and learnable, and belongs to a hypothesis space H_g_i. This clearly limits the learnable space. In this section, we introduce a technique to define learnable activation functions that can be plugged into all the hidden layers of the MLP.
We define the hypothesis space H_ϕ_i based on the following idea: (i) select a finite set of activation functions Σ := {σ_1, ⋯, σ_N}, whose elements will be used as base elements; (ii) define the learnable activation function ϕ_i as a linear combination of the elements of Σ; (iii) identify a suitable hypothesis space H_ϕ_i; (iv) optimize the whole network, where the hypothesis space of each hidden layer is H_i = H_ϕ_i× H_g_i. In this way, we expand the learnable space of each hidden layer, enhancing the model's nonlinear expression capability.

Several different activation functions have been proposed for deep neural networks, including the most popular Rectified Linear Unit (ReLU) and its variants (PReLU <cit.>, GeLU <cit.> and Swish <cit.>). They focus on enhancing the performance of deep and complex networks using different activation functions. However, as theoretically proven in the preceding section, the limited power of the Transformer architecture is mainly due to poor nonlinearity, which has not been fully investigated by the existing activation functions. There are two ways to improve the nonlinearity of a neural network: stacking nonlinear activation layers or increasing the nonlinearity of each activation layer. Existing networks tend to choose the former, which results in high latency when parallel computation capacity is abundant, since serial depth cannot be parallelized.

One straightforward idea to improve the nonlinearity of the activation layer is stacking; the serial stacking of activation functions is the key idea of deep networks. However, stacking layers serially incurs a large computation cost, which is not affordable for developing an efficient and effective LLM. Therefore, we choose to stack the activation functions concurrently. Suppose there are n activation functions {σ_i(x)}^n_i=1 for an input x in a neural network, which can be common functions such as ReLU and Tanh. The concurrent stacking of the activation functions can be formulated as:

∑_i=1^n σ_i(a_i x + b_i),

where n denotes the number of stacked activation functions and a_i, b_i are the scale and bias (which are learned parameters) of each activation, preventing simple accumulation. The nonlinearity of the activation function can be largely enhanced by concurrent stacking. Equation <ref> can be regarded as a series in mathematics, i.e., the operation of adding many quantities.

Since the nonlinearity of the Transformer is mainly derived from the feed-forward network (FFN), we apply the series informed activation function on the FFN block. Given an input feature x ∈ ℝ^N×D, where N and D are the number of tokens and the hidden dimension, the original FFN can be formulated as

FFN(Z'_l) = σ(Z'_lW'_l_1)W'_l_2, l ∈ [1,2,⋯, L],

where W'_l_1 and W'_l_2 are two fully connected layers. Specifically, to further enrich the approximation ability of the series, we enable the series-based function to learn global information by varying the inputs from their neighbors, which can be reformulated as:

SIAF-FFN(Z'_l) = (∑_i=1^n σ_i(Z'_lW'_l_1_i))W'_l_2, l ∈ [1,2,⋯, L].

It is easy to see that when n=1, the series-based activation function σ_s(x) degenerates to the plain activation function σ(x), which means that the proposed method can be regarded as a general extension of existing activation functions.
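The concurrent stack in Eq. <ref> is simple to implement. Below is a minimal PyTorch-style sketch of our own (not the released implementation); for brevity it shows only the series activation itself, omitting the per-branch W'_l_1_i projections of SIAF-FFN, and the choice of GeLU branches is our assumption:

```python
import torch
import torch.nn as nn

class SeriesActivation(nn.Module):
    """sum_i sigma_i(a_i * x + b_i): n concurrently stacked activations
    with learnable per-branch scale a_i and bias b_i."""
    def __init__(self, n: int = 2):
        super().__init__()
        self.acts = nn.ModuleList(nn.GELU() for _ in range(n))
        self.a = nn.Parameter(torch.ones(n))   # scales, init to identity-like
        self.b = nn.Parameter(torch.zeros(n))  # biases, init to zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return sum(act(self.a[i] * x + self.b[i])
                   for i, act in enumerate(self.acts))
```

With n=1, a_1=1 and b_1=0, this reduces exactly to the plain activation, matching the degenerate case described above.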
Given a model stacked by the SIAF-FFN modules, the diversity d_ℳ_d(Z'_l) of the features in the l-th layer can be bounded by that of the input data Z'_0, i.e.,

d_ℳ_d(SIAF-FFN(Z'_l)) ≤ (∑_i=1^n L_i) sυ_2 d_ℳ_d(Z'_l),

d_ℳ_d(Z'_l) ≤ (sυ_2 ∑_i=1^n L_i)^l d_ℳ_d(Z'_0),

where L_i is the Lipschitz constant of the activation function σ_i. As demonstrated in our proof, the Series Informed Activation Function (SIAF) we propose markedly amplifies the nonlinear expressive capacity of the MLP module compared to the original architecture. This enhancement in nonlinearity progressively intensifies with the increase of the parameter n.

§.§ Combination

Finally, we offer the upper bounds for the combination of multi-layer AugMSA modules and SIAF-MLP modules to decay into the subspace ℳ_d. We first obtain the upper bound for the combination of multi-layer MSA and MLP modules decaying into the subspace ℳ_d in the vanilla Transformer architecture.

Given a network consisting of p MSA layers and q MLP layers, the diversity d_ℳ_d(Z_p+q) of the output features can be bounded by that of the input data Z_0, i.e.,

d_ℳ_d(Z_p+q) ≤ (√(λ H) sυ_1)^p (Lsυ_2)^q d_ℳ_d(Z_0).

It is evident that the original Transformer architecture possesses a relatively limited upper bound in terms of nonlinear expressive capability. Building upon this observation, we now proceed to analyze the enhanced expressive power of the Transformer when augmented with our proposed architectural modifications.

Given a network consisting of p AugMSA layers and q SIAF-MLP layers, the diversity d_ℳ_d(Z_p+q) of the output features can be bounded by that of the input data Z_0, i.e.,

d_ℳ_d(Z_p+q) ≤ (√(λ H) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2)^p (sυ_2 ∑_i=1^n L_i)^q d_ℳ_d(Z_0).

The preceding theorem substantiates that our proposed augmented shortcut module, when used in conjunction with the series informed activation function module, enhances the model's nonlinear expressive capabilities and diversity far beyond what is achievable by using either module independently. Consequently, our Transformer architecture amalgamates these two modules to form our PanGu-π architecture, resulting in a synergistic improvement in nonlinearity and diversity.

§ EXPERIMENTS ON GENERAL FIELD

In this section, we compare against existing open-source 7B and 1B models. Furthermore, we conduct ablation studies of the proposed architecture.

Training data The pre-training data is gathered from diverse sources on the Internet, covering English and Chinese corpora in an equal 1:1 ratio. The tokenizer is built by byte-pair encoding (BPE) <cit.> from SentencePiece <cit.> upon our data. The final vocabulary size is about 0.1 million. After tokenization, the entire training dataset contains about 1.6 trillion tokens.

Training details Our models are trained using the AdamW optimizer <cit.> with β_1 = 0.9, β_2 = 0.95 for 1 epoch, utilizing cosine learning rate decay <cit.> with an initial learning rate of 3× 10^-4. The total batch size for the training process is approximately 4M, and it includes a warm-up phase spanning 4000 steps.

Model details For a fair comparison, we adopt pre-normalization <cit.>, the SwiGLU activation <cit.> and rotary embeddings <cit.> following the LLaMA architecture <cit.>. We then apply our series activation function and augmented shortcuts to build our models. The details of the models can be found in Table <ref>.
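As a concrete reference for the optimization recipe above, here is a minimal PyTorch sketch of AdamW with linear warmup followed by cosine decay (our own illustration under the stated hyperparameters; the authors' actual trainer is not shown in the paper):

```python
import math
import torch

def build_optimizer_and_schedule(model, total_steps: int,
                                 warmup_steps: int = 4000,
                                 peak_lr: float = 3e-4):
    # AdamW with beta_1=0.9, beta_2=0.95 as stated in the training details.
    opt = torch.optim.AdamW(model.parameters(), lr=peak_lr, betas=(0.9, 0.95))

    def lr_lambda(step: int) -> float:
        if step < warmup_steps:                       # linear warmup
            return step / max(1, warmup_steps)
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * t))    # cosine decay to zero

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```

In a training loop one would call `sched.step()` once per optimizer step so the multiplier follows the warmup-then-cosine profile.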
We reduce the number of layers to make the number of parameters similar to the LLaMA model for fair comparison because the proposed modules introduce extra parameters.Training Devices We use the Huawei Ascend 910A card to train and evaluate the proposed architecture.The HUAWEI Ascend 910A is a high-efficiency, flexible, and programmable artificial intelligence processor. For half-precision floating-point (FP16) operations, the Ascend 910 delivers 256 TeraFLOPS. For integer precision calculations (INT8), it delivers 512 TeraOPS. Despite its unparalleled performance, Ascend 910's maximum power consumption is only 310W, which is significantly lower than its planned specifications of 350W. Developed using Huawei's proprietary Da Vinci architecture, it integrates a rich array of computing units, enhancing the completeness and efficiency of AI computations, thereby extending its applicability. It significantly improves the performance of the entire AI system and effectively reduces deployment costs.Benchmarks We use the OpenCompass platform <cit.> to evaluate on an extensive suite of downstream tasks. We selected 11 classical benchmarks from four different domains to conduct a comprehensive comparison.C-Eval <cit.> is a comprehensive Chinese evaluation benchmark to evaluate the knowledge and reasoning abilities of LLMs, which includes multiple-choice questions from 52 diverse disciplines across different difficulty levels. CMMLU <cit.> is also a comprehensive Chinese evaluation benchmark, which covers 67 topics including science, engineering, and humanities. MMLU <cit.> proposes an English evaluation benchmark for measuring LLM's multitask accuracy by covering 57 tasks including mathematics, history, computer science, and law. AGI-Eval <cit.> is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. BoolQ <cit.> is a reading comprehension dataset to evaluate the difficult entailment-like inference ability of LLMs. AX-b <cit.> is a broad-coverage diagnostic task and PIQA <cit.> is a physical interaction question-answering task. CSL <cit.> offers a Chinese Scientific Literature dataset to evaluate the performance of models across scientific domain tasks. EPRSTM <cit.> is a binary sentiment analysis dataset based on product reviews on an e-commerce platform.  <cit.> is a single-document summarization task, and LCSTS <cit.> is a large corpus of Chinese short-text summarization datasets. §.§ Ablation Studies To better understand the proposed Architecture, we conduct extensive experiments to investigate the impact of each component. All the ablation experiments are conducted based on the 1B model size. Influence of series informed activation function. In the above section, we propose the SIAF to enhance the performance and enable global information exchange in feature maps. Table <ref> shows the performance of the proposed SIAF using different numbers of n. When n=1, the activation function degenerates into the plain activation function. We find that when n=2, the performance and the inference speed strike an optimal balance. Therefore, we choose n=2 for the following experiments. Influence of augmented shortcuts. As mentioned above, the augment module can greatly improve the performance. As shown in Table <ref>, we trade off accuracy and speed by controlling the width of the bottleneck middle layer. 
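The bottleneck middle layer whose width is varied in this ablation can be sketched as follows. This is our own PyTorch illustration of the two-FC bottleneck shortcut from the method section (module and argument names are ours, not the released code), parallel to MSA together with the identity branch:

```python
import torch
import torch.nn as nn

class BottleneckShortcut(nn.Module):
    """One augmented shortcut as a bottleneck: d -> d/r -> d with GeLU."""
    def __init__(self, d: int, r: int = 32):
        super().__init__()
        self.down = nn.Linear(d, d // r)  # first FC: reduce by ratio r
        self.act = nn.GELU()
        self.up = nn.Linear(d // r, d)    # second FC: restore dimension

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(z)))

class AugMSA(nn.Module):
    """MSA plus identity and T parallel bottleneck shortcuts."""
    def __init__(self, d: int, num_heads: int, T: int = 1, r: int = 32):
        super().__init__()
        self.msa = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.shortcuts = nn.ModuleList(BottleneckShortcut(d, r) for _ in range(T))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.msa(z, z, z)
        return attn_out + z + sum(s(z) for s in self.shortcuts)
```

The two projections cost roughly 2nd^2/r multiply-accumulates, which is the 16× saving at r=32 over a dense d×d shortcut noted earlier; r is the width knob being varied in this ablation.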
By varying the reduction rate, it is apparent that an increase in the reduction rate results in a decrease in computational cost and in accuracy. After careful consideration, we determined a reduction rate of 32 to achieve the optimal balance between speed and accuracy.

Architecture We finally ablate the effectiveness of each component of the proposed method and report the language modeling results in Table <ref>. We ablate the series informed activation function (SIAF) and the augmented shortcuts (AS) as described earlier. Furthermore, we compare the proposed method with WideNet <cit.>, which was introduced to increase the nonlinearity of Transformer architectures. The experiments show that each component of the proposed method is effective for improving the performance of the Transformer architecture, surpassing that of WideNet <cit.>.

§.§ Feature Analysis and Visualization

We also analyze the latent representation across different layers to further demonstrate the superiority of the nonlinearity compensation introduced by the PanGu-π architecture. We are interested in the channel diversity of each architectural choice. Following the analysis method from <cit.>, we characterize the effective dimensions of different decoder architectures. In particular, the effective dimension d(ϵ) is defined as the minimum number of principal components that occupy an explained variance ratio of ϵ in a principal component analysis (PCA). In principle, a more powerful representation of individual tokens would result in a larger effective dimension. In comparison, a smaller effective dimension means the variance in token representation occurs mainly in fewer dimensions. As shown in Fig. <ref>, we report each layer's effective dimension d(0.8). Removing all augmented shortcuts limits the effective dimension to the greatest extent, and removing the series informed activation function significantly reduces the effective dimension consistently on each Transformer layer, indicating the significant role of these components in channel-wise feature diversity <cit.>.

Furthermore, to offer a finer-grained characterization of linguistic features from different architectures, we also visualize the representation of tokens with different semantics, using the test set of the Penn Tree Bank (PTB) <cit.> as a general domain corpus. In particular, we adopt a layer-wise visualization method to illustrate the concentration and diversity of features for each token and how these characteristics change along the Transformer layers (Fig. <ref>). To assist the visualization, the top five most frequent tokens are highlighted with different colors. PCA is used to reduce all feature maps to a 3D space, avoiding nonlinear reduction as it may cause distortion. Additionally, the total variance accounted for by the first three principal components is labeled for each layer. From Fig. <ref>, we can see that the PanGu-π architecture possesses the most diverse and isotropic feature space <cit.>, with each token's feature expanding into a high-dimensional cluster when moving to deeper layers. In comparison, removing the series informed activation function or the augmented shortcuts limits the features to low-dimensional manifolds (i.e., aggregating along one axis in the middle panel), and the differentiation among tokens is also blurred, indicating less discriminative power in language modeling.
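The effective dimension d(ϵ) used above is straightforward to compute per layer. A minimal sketch of our own with scikit-learn (not the paper's analysis code; `feats` is assumed to be a (num_tokens, hidden_dim) matrix of one layer's token features):

```python
import numpy as np
from sklearn.decomposition import PCA

def effective_dimension(feats: np.ndarray, eps: float = 0.8) -> int:
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches eps, i.e., d(eps) in the text."""
    pca = PCA().fit(feats)
    cum = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum, eps) + 1)
```

Calling this on each layer's features with eps=0.8 reproduces the kind of d(0.8)-per-layer curves described above.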
To verify the effectiveness of the language representation enhanced by the PanGu-π architecture, we conduct case analyses where the saliency of each token's feature dimension is calculated by taking the absolute value of the corresponding gradients with respect to the prediction target. As shown in Figure <ref>, the language model is required to echo the previously mentioned name "chester" as the next word. The PanGu-π model correctly identifies the key message "chester" in the context, reflected by higher gradient values for most of the channels (Figure <ref> (a)). In comparison, without the augmented shortcuts and series activation function, the model tends to leverage more information from the meaningless symbols after the key hints, leading to a wrong prediction that directly ends the sentence (Figure <ref> (b)).

§.§ Comparison with 7B Models

To showcase the competitive edge of PanGu-π, we conducted an extensive analysis of the PanGu-π model, comparing it against other state-of-the-art models of a similar size. The comprehensive results are detailed in Table <ref>. We segmented the comparative datasets into four tasks: examination, knowledge, reasoning, and understanding, to fully assess the capabilities of our model. Notably, in the examination category, our model almost reached the state-of-the-art (SOTA) benchmarks, surpassing LLaMA2, Baichuan2, and InternLM, and closely matching the current top performer, Qwen. The trend of superior performance in the reasoning and understanding categories can also be observed, which is attributed to the PanGu-π model's advanced nonlinear capability. This allows it to fit more complex function spaces, giving it a competitive advantage in these challenging SOTA benchmarks.

In the knowledge dataset, our model still lags behind in performance when compared to the existing SOTA benchmarks. This could be due to the lower concentration of knowledge in our collected dataset and the presence of many unseen data points in the BoolQ dataset. Nevertheless, our model achieved excellent results, demonstrating strong generalization abilities.

Overall, our model exhibits consistently better average performance indices compared to the current state-of-the-art models. When measured against other open-sourced models of a 7B size, PanGu-π achieves significantly higher average performance. In the future, we plan to train our model with better data, aiming to enhance its performance metrics in the knowledge domain.

Moreover, we evaluated the latency (milliseconds per token) of these models. As the compared models utilize an architecture similar to LLaMA, their latencies are comparable. Our findings indicate that PanGu-π achieves a much faster inference speed than the LLaMA architecture, further establishing its superiority.

The PanGu-π model can serve as a swift and effective foundational model, leveraging strategies such as Supervised Fine-Tuning (SFT), AI Agents, or retrieval augmentation to become a competitive AI solution. It has the potential to harness the power of LLMs in edge devices like smartphones, offering a compelling choice for various applications.

§.§ Comparison with 1B Models

For comparison, we have meticulously selected three SOTA models of similar size, all based on the LLaMA architecture. These include Chinese-LLaMA2-1.3B <cit.>, TinyLlama-1.1B <cit.>, and Sheared-LLaMA-1.3B <cit.>. Notably, Sheared-LLaMA-1.3B was initially pruned from a larger LLaMA2-7B model and subsequently trained with a condensed dataset of 50B tokens.
Our extensive experiments, as presented in Table <ref>, demonstrate that PanGu-π-1B significantly surpasses existing LLMs of similar size, and in some cases, even larger sizes.Similar to our findings with the 7B model, our 1B model demonstrates a remarkable superiority in the examination category over existing models, affirming the efficacy of our modifications even at the 1B scale. This outcome is a testament to our model’s enhanced nonlinear capabilities, enabling it to fit more complex function spaces. In the realms of reasoning and understanding, we observe similar trends, further validating our model's robust performance.However, as with our larger model, the knowledge density in our collected dataset for training the 1B model was relatively low. Consequently, in datasets like BoolQ, which contain data points previously unseen by our model, we acknowledge a noticeable gap in performance when compared to current state-of-the-art benchmarks. This indicates room for improvement in our model's ability to handle unfamiliar data, a challenge we aim to address in future iterations.Furthermore, when assessing latency, a critical factor in real-world applications, PanGu-π-1B shows a marked advantage. It records lower latency times of 13.8 ms on the 910A, compared to its deeper counterpart, LLaMA2-1B, which clocks 15.4 ms on the 910A, despite having a roughly equivalent number of parameters. This not only underscores the efficiency of PanGu-π-1B but also highlights its suitability for deployment in time-sensitive applications.§ YUNSHAN: DOMAIN SPECIALIZED MODEL FOR FINANCE AND LEGAL FIELDSIn the current landscape of LLMs, there is a growing trend towards using domain-specific models for specialized applications, which differ from the capabilities of general-purpose large models. These domain-specific models are developed with a deeper level of expertise, and are particularly adept at addressing tasks in specific fields. For instance, in the finance industry, LLMs are used to provide in-depth analysis of financial documents and offer interactive information services for market insights and queries. Similarly, in the legal field, LLMs are used to facilitate expedited case research and information retrieval. Exemplary models such as Xuanyuan <cit.>, FinGPT <cit.> and FinMA <cit.> in financial services, and Lawyer LLaMA <cit.> and LawGPT <cit.> for legal application. This progression not only exemplifies the robust development within the sphere of domain-specific LLMs but also signals the future role these technologies are poised to play in furthering innovation and growth across various domains. §.§ DatasetsTo enhance the model’s ability to solve domain-specific tasks, the model is continually pretrained and instruction finetuned on the domain datasets. We collect a variety of domain datasets both in the financial domain and the legal domain. When we construct the datasets, we consider the professionalism, diversity, data size and accessibility of the sources. We process and clean the collected datasets to construct high-quality domain corpora.Financial Domain Our financial pretraining data consists of company announcements, financial news, articles, and examinations. Some of the data is from FinCorpus[https://huggingface.co/datasets/Duxiaoman-DI/FinCorpus], which is a high-quality Chinese financial dataset. The rest of the data is crawled through TuShare[https://tushare.pro/], which is a Python package that provides free access to Chinese financial data from various sources. 
After cleaning, the total size of the data is 36.5B tokens.

Legal Domain Our legal pretraining data comprises legal regulations, legal case texts, academic papers, and legal examinations. We use Pile of Law <cit.> and LeXFiles <cit.> to construct some of the data, and selectively collect data from two legal-related public platforms[https://pkulaw.com/][https://wenshu.court.gov.cn/]. After cleaning, the legal pretraining data contains 111.7B tokens.

Instruction-following Data For supervised fine-tuning, we collect 995k domain instruction-following examples. The data consists of JEC-QA <cit.> and the instructions open-sourced by ChatLaw <cit.>, Lawyer LLaMA <cit.>, LawGPT <cit.>, and FinCorpus <ref>, covering Judicial Examination examples, legal consultations, and financial examination examples.

§.§ Tokenizer

Corpora from the financial and legal fields often contain specialized words that are not included in the general-domain vocabulary. During tokenization, these unknown words are deconstructed into sub-words or even UTF-8 bytes, which reduces model training efficiency. To alleviate this inefficiency, we expanded the original vocabulary size from 100883 to 110428 by adding a specialized financial-legal word list. We used byte-pair encoding (BPE) <cit.> from Tokenizer[https://github.com/huggingface/tokenizers] to train our tokenizer. The specialized vocabulary is trained from a subset of the financial and legal corpora and then merged into the original vocabulary. Table <ref> shows the compression rate we calculate on the JEC-QA dataset <cit.>, a legal domain QA dataset. Our new tokenizer has a much lower compression ratio than the original one and ties for the best with Baichuan 2 while having a smaller vocabulary.

§.§ Training Process

Further Pretraining To transfer financial and legal knowledge to the PanGu-π model, we further pre-train it with a specialized vertical corpus. To alleviate catastrophic forgetting of formerly learned knowledge, we partially introduce the general corpus during the pre-training process. Specifically, we resample a portion of the previous high-quality data and mix it with the vertical domain data at a 1:1 ratio.

Instruction Tuning During the instruction tuning phase, we utilize supervised fine-tuning samples of both general-purpose tasks and domain-specific tasks. We shuffle and combine them into one supervised dataset and execute the instruction tuning process in one stage. On the basis of the YunShan-base model, which has already acquired general and specialized knowledge during the previous phase, this approach aims to simultaneously teach the model how to follow human instructions in different domains.

§.§ Benchmarks

The proposed YunShan model is evaluated across the two specialized law and finance domains. This approach aims to comprehensively assess the YunShan model's capabilities in knowledge retention, comprehension, and application within domain-specific tasks, covering the Chinese and English languages.

§.§.§ Financial Domain

For the financial domain, we use two distinct datasets: FinanceIQ <cit.> and FinEval <cit.>, each serving a unique purpose in our evaluation framework. FinanceIQ is a Chinese evaluation dataset focused on the financial domain, designed to assess the knowledge and reasoning capabilities of LLMs in financial scenarios. It spans 10 broad financial categories and 36 subcategories, offering a total of 7,173 multiple-choice questions.
This dataset is tailored to evaluate a wide range of financial knowledge, from basic concepts to complex problem-solving in areas such as accounting, banking, and financial analysis.

FinEval is an expansive benchmark tailored for measuring the adeptness of LLMs in the finance domain. Encompassing 4,661 multiple-choice questions, it spans 34 academic subjects across the finance, economics, accounting, and professional certification domains. This dataset is structured to facilitate a layered evaluation through diverse assessment methods, including zero-shot, few-shot, answer-only, and chain-of-thought prompts.

§.§.§ Legal Domain

LawBench <cit.>: For the legal domain, we implement the test over the LawBench dataset. LawBench is a sophisticated benchmark designed to evaluate the legal knowledge capabilities of LLMs within the Chinese civil-law system. This benchmark tests LLMs across three cognitive levels: Memorization, Understanding, and Applying. It includes 20 tasks across five types: single-label classification, multi-label classification, regression, extraction, and generation. With over 500 cases, LawBench employs metrics including accuracy, F1, and ROUGE-L to ensure a comprehensive assessment of LLMs in diverse legal applications.

Through this diverse and challenging array of datasets, our model is subjected to an extensive evaluation, covering a range of tasks and scenarios in law and finance. This setup not only tests the model's domain-specific expertise but also its adaptability and versatility across different language settings.

§.§ Comparison with other domain-specific models

For financial tasks, FinGPT <cit.> and FinMA <cit.> are adopted. Specifically, FinGPT v3.2, which uses LLaMA2-7B as a base model, and the FinMA v0.1 NLP 7B version are adopted for a fair comparison. For legal tasks, we conduct experiments compared with HanFei <cit.> and LaWGPT <cit.>. Specifically, we chose the Legal-Base-7B version for LaWGPT, which is trained based on Chinese-LLaMA-7B <cit.>.

§.§.§ Results on the Financial Benchmark

Results on FinanceIQ. In Table <ref>, we first compare the general-domain and domain-specialized LLMs on the FinanceIQ benchmark. These models were assessed across multiple financial professional sub-domains, including Certified Public Accountant (CPA), Banking Qualification (BQ), Stock Portfolio Query (SPQ), Certified Financial Planner Qualification (CFPQ), Investment Portfolio Query (IPQ), Economist, Tax Accountant, Finance Qualification (FQ), Financial Planner, and Actuary. The "AVG" column represents the average score across all these sub-domains. The YunShan model demonstrated exceptional performance across nearly all sub-domains, particularly excelling in the Economist, Banking Qualification, and Certified Financial Planner Qualification categories, with scores of 72.50, 63.42, and 57.57, respectively. These scores significantly surpassed those of other models, highlighting its efficiency in handling finance-related tasks. Its average score of 54.56 was markedly higher than its counterparts, indicating its broad adaptability and proficiency in the financial domain. While models like Baichuan2-7B showed good performance in certain sub-domains such as Banking Qualification, Investment Portfolio Query, and Certified Financial Planner Qualification, they fell short in others. This suggests that while they are adaptable to specific financial sub-domains, they do not match the overall proficiency of the YunShan model.
Conversely, other models such as FinGPT-7B and FinMA-7B exhibited generally weaker performance across all sub-domains, with average scores not exceeding 35. This may indicate a lesser degree of specialization in handling financial domain knowledge compared to other models. Overall, the exceptional performance of the YunShan model in this financial domain-specific evaluation reflects its outstanding ability in understanding and processing financial professional knowledge and showcases its potential as an LLM for the financial industry.Results on FinEval.We also conducted an experiment on the FinEval benchmark in Table <ref>. The significant advantages of the YunShan model still exist for the FinEval benchmark. As shown in Table <ref>, models were assessed across four different sub-domains: Accounting, Certificate, Economy, and Finance, with the "AVG" column indicating the average score across all sub-domains. Specifically, we computed the average score by AVG=(305 × Accounting +334 × Certificate+207 × Economy +305 × Finance)/1151 according to the FinEval <cit.>. It's noteworthy that the YunShan model excelled in Certificate and Accounting, scoring 69.46 and 65.57 respectively, significantly outperforming other models. Its leading average score of 61.34 highlights its proficiency in handling finance-related tasks. Comparatively, Qwen-7B and Baichuan2-7B demonstrated robust performance, especially in the Finance sub-domain, with scores of 61.00 and 59.67 respectively. Their overall average scores, 60.56 and 55.43, indicate their strong adaptability to the financial sector. In contrast, models like FinGPT-7B, FinMA-7B, LawGPT-7B, and HanFei-7B showed relatively weaker performances across all sub-domains, with average scores below 35. This suggests that these models may be less specialized or inadequately trained for tasks specific to the financial domain. Overall, the YunShan model stands out as the most suitable model for financial domain tasks. §.§.§ Results on Legal BenchmarkTable <ref> shows the overall zero-shot results of each model in the legal field on the different categories of LawBench <cit.>, for three key aspects: Memorization, Understanding, and Applying. The average score is computed by AVG=(2× Memorization +10 × Understanding+8 × Applying)/20. Notably, the YunShan model yet again outperforms with its impressive scores, with the highest average score being 31.75. This performance indicates a superior capability in recalling legal information but also in understanding and applying it effectively, a crucial factor in legal contexts. Other models like Qwen-7B also demonstrate strong performances, with average scores of 21.35 and 21.41, respectively. In contrast, models such as FinGPT-7B exhibit significantly lower scores across all dimensions, leading to an overall average score of 0.11. Other models like InternLM-7B, Baichuan2-7B, and HanFei-7B demonstrate varied performances, with HanFei-7B scoring high in Memorization (13.71) but lower in Understanding (6.91), reflecting a potential imbalance in its skillset. Overall, the YunShan model demonstrates exceptional capability in the legal domain, outperforming others across all tested skills. 
This suggests that the YunShan model is adept at recalling legal information and excels in comprehending and applying it, making it a highly effective tool for legal applications.

§ CONCLUSIONS AND DISCUSSIONS

In this paper, we introduced PanGu-π, an innovative LLM architecture designed to mitigate the feature collapse issue in Transformer models by integrating nonlinearity. Our initial step involved a detailed examination of contemporary LLM architectures, where we pinpointed the feature collapse challenge. Our theoretical analysis led us to propose the integration of nonlinearity, a principle predominantly used in convolutional neural networks for image processing, as a critical component for enhancing language models. To implement this, we augmented the FFN with a series informed activation function, thereby enhancing its nonlinearity. Additionally, we introduced an augmented shortcut to the MSA module, effectively reducing feature collapse. Theoretical evaluations revealed that these enhancements significantly boost the approximation capabilities of both the FFN and MSA modules.

Leveraging these enhancements, we developed various sizes of PanGu-π, notably PanGu-π-7B and PanGu-π-1B. These models underwent extensive training and SFT, showcasing strong NLP capabilities. The PanGu-π foundation models achieved state-of-the-art (SOTA) results in various downstream tasks, matching or surpassing similar-sized models. PanGu-π displayed exceptional adaptability, delivering SOTA performances in the specialized domains of finance and law.

Proof of Lemma <ref>

Write WW^⊤ = PDP^⊤ for the eigen-decomposition of WW^⊤, where P = [p_1, p_2, …, p_d] is orthogonal and D = diag(d_1, …, d_d) with d_1 ≥ … ≥ d_d ≥ 0; for nonzero eigenvalues, d_i = s_i^2 > 0, where s_i is the i-th largest singular value of W. Write HH^⊤ = QΩQ^⊤ for the eigen-decomposition of HH^⊤, where Q = [q_1, q_2, …, q_N] is orthogonal and Ω = diag(ω_1, …, ω_N) with all ω_i ≥ 0. Let e = N^-1/2[1,1, …, 1]^⊤ = N^-1/21 ∈ ℝ^N×1.

(1) The formula is proved as follows:

d_ℳ_m(HW)^2 = ‖(I-ee^⊤)HW‖^2_F
= tr{(I-ee^⊤)HWW^⊤H^⊤(I-ee^⊤)}
= tr{WW^⊤H^⊤(I-ee^⊤)H}
= tr{PDP^⊤H^⊤(I-ee^⊤)H}
= tr{DP^⊤H^⊤(I-ee^⊤)HP}
= ∑_i=1^d d_i p_i^⊤H^⊤(I-ee^⊤)Hp_i
≤ ∑_i=1^d s^2 p_i^⊤H^⊤(I-ee^⊤)Hp_i
= s^2 d_ℳ_d(H)^2.

Since the matrix H^⊤(I-ee^⊤)H is positive semidefinite, p_i^⊤H^⊤(I-ee^⊤)Hp_i ≥ 0. It follows that d_ℳ_m(HW) ≤ s d_ℳ_d(H). Note that d_ℳ_d(H) = ‖H-1x_min^⊤‖_F, where x_min = argmin_x_l ‖H-1x_l^⊤‖_F, and that ‖σ(H_1)-σ(H_2)‖_F ≤ L‖H_1 - H_2‖_F.

(2) The formula is proved as follows:

d_ℳ_d(σ(H)) = ‖σ(H)-1x^σ_min^⊤‖_F
≤ ‖σ(H)-1σ(x_min)^⊤‖_F
= ‖σ(H)-σ(1x_min^⊤)‖_F
≤ L‖H - 1x_min^⊤‖_F = L d_ℳ_d(H).

(3) The formula is proved as follows:

α_1 d_ℳ_d(H) + α_2 d_ℳ_d(B)
= α_1 ‖H-1x^H_min^⊤‖_F + α_2 ‖B-1x^B_min^⊤‖_F
≥ ‖α_1H + α_2B - 1(α_1x^H_min+α_2x^B_min)^⊤‖_F
≥ ‖α_1H + α_2B - 1(x^α_1H+α_2B_min)^⊤‖_F
= d_ℳ_d(α_1 H+α_2 B).

For the last inequality, we refer to <cit.>.
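As an optional numerical sanity check of bounds (1) and (2) in the lemma above, the following short NumPy script of ours (not part of the original appendix) verifies them on random matrices, using tanh as a 1-Lipschitz activation:

```python
import numpy as np

def diversity(H: np.ndarray) -> float:
    """d(H) = ||(I - e e^T) H||_F."""
    N = H.shape[0]
    e = np.ones((N, 1)) / np.sqrt(N)
    return float(np.linalg.norm(H - e @ (e.T @ H)))

rng = np.random.default_rng(1)
H = rng.normal(size=(64, 32))
W = rng.normal(size=(32, 16))
s = np.linalg.svd(W, compute_uv=False).max()  # largest singular value of W

assert diversity(H @ W) <= s * diversity(H) + 1e-9          # bound (1)
assert diversity(np.tanh(H)) <= 1.0 * diversity(H) + 1e-9   # bound (2), L=1
```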
Proof of Theorem <ref>

d_ℳ_m(AZ_lW) ≤ √(λ) d_ℳ_m(Z_lW) ≤ √(λ) s d_ℳ_m(Z_l).

Proof of Lemma <ref>

d_ℳ_Hm(Concat([H_h]_h=1^H))^2
= ‖(I-ee^⊤)Concat([H_h]_h=1^H)‖^2_F
= tr{(I-ee^⊤)Concat([H_h]_h=1^H)Concat([H_h]_h=1^H)^⊤}
= tr{Concat([H_h]_h=1^H)Concat([H_h]_h=1^H)^⊤} - tr{Concat([ee^⊤H_h]_h=1^H)Concat([ee^⊤H_h]_h=1^H)^⊤}
= ∑_h=1^H tr{(I-ee^⊤)H_hH_h^⊤}
= ∑_h=1^H d_ℳ_m(H_h)^2.

Proof of Theorem <ref>

MSA(Z_l) = Concat([A_lhZ_lW^v_lh]_h=1^H)W^o_l,

d_ℳ_d(Z_l+1) = d_ℳ_d(Concat([A_lhZ_lW^v_lh]_h=1^H)W^o_l)
≤ υ_1 d_ℳ_d(Concat([A_lhZ_lW^v_lh]_h=1^H))
≤ υ_1 √(∑_h=1^H d_ℳ_d(A_lhZ_lW^v_lh)^2)
≤ √(λ) sυ_1 √(∑_h=1^H d_ℳ_d(Z_l)^2) = √(λ H) sυ_1 d_ℳ_d(Z_l)
≤ (√(λ H) sυ_1)^l+1 d_ℳ_d(Z_0).

Proof of Theorem <ref>

FFN(Z'_l) = σ(Z'_lW'_l_1)W'_l_2, l ∈ [1,2,⋯, L],

d_ℳ_d(FFN(Z'_l)) = d_ℳ_d(σ(Z'_lW'_l_1)W'_l_2)
≤ υ_2 d_ℳ_d(σ(Z'_lW'_l_1))
≤ Lυ_2 d_ℳ_d(Z'_lW'_l_1)
≤ Lsυ_2 d_ℳ_d(Z'_l),

d_ℳ_d(Z'_l) ≤ (Lsυ_2) d_ℳ_d(Z'_l-1) ≤ (Lsυ_2)^l d_ℳ_d(Z'_0).

Proof of Theorem <ref>

AugMSA(Z_l) = MSA(Z_l) + Z_l + ∑_i=1^T 𝒯_li(Z_l;Θ_li),

d_ℳ_d(AugMSA(Z_l)) = d_ℳ_d(MSA(Z_l) + Z_l + ∑_i=1^T 𝒯_li(Z_l;Θ_li))
≤ d_ℳ_d(MSA(Z_l)) + d_ℳ_d(Z_l) + d_ℳ_d(∑_i=1^T 𝒯_li(Z_l;Θ_li))
≤ d_ℳ_d(MSA(Z_l)) + d_ℳ_d(Z_l) + ∑_i=1^T d_ℳ_d(σ(Z_lΘ_li))
≤ (√(λ H) sυ_1 + 1) d_ℳ_d(Z_l) + L∑_i=1^T d_ℳ_d(Z_lΘ_li)
≤ (√(λ H) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2) d_ℳ_d(Z_l).

Considering the H heads of the MSA module and iterating over the layers, the diversity becomes:

d_ℳ_d(Z_l) ≤ (√(λ H) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2)^l d_ℳ_d(Z_0).

Proof of Lemma 3

When H=1, MSA(Z_l) = AZ_lWW^o_l, and

d_ℳ_d(MSA(Z_l+ϵ) - MSA(Z_l))
= ‖(I-ee^⊤)((A+δ)(Z_l+ϵ) - AZ_l)(WW^o_l)‖_F
= ‖(I-ee^⊤)((A+δ)ϵ + δZ_l)(WW^o_l)‖_F
≤ d_ℳ_d(A_ϵϵWW^o_l) + d_ℳ_d(δZ_lWW^o_l)
≤ √(λ_A+δ) sυ_1 ‖ϵ‖_F + √(λ_δ) sυ_1 d_ℳ_d(Z_l).

For H heads, MSA(Z_l) = Concat([A_lhZ_lW^v_lh]_h=1^H)W^o_l, and

d_ℳ_d(MSA(Z_l+ϵ) - MSA(Z_l))
≤ υ_1 d_ℳ_d(Concat([(A_ϵ_lh(Z_l+ϵ) - A_lhZ_l)W^v_lh]_h=1^H))
≤ √(λ_A+δ H) sυ_1 ‖ϵ‖_F + √(λ_δ H) sυ_1 d_ℳ_d(Z_l).

Proof of Lemma <ref>

d_ℳ_d(L(Z_l+ϵ)Θ_li - LZ_lΘ_li) = d_ℳ_d(LϵΘ_li)
= L‖(I-ee^⊤)ϵΘ_li‖_F
= L‖ϵΘ_li - 1x^ϵΘ_li_min^⊤‖_F
≤ L‖ϵΘ_li‖_F ≤ L‖Θ_li‖_2 ‖ϵ‖_F.

Proof of Theorem <ref>

d_ℳ_d(𝒯_li(Z_l+ϵ;Θ_li) - 𝒯_li(Z_l;Θ_li))
= d_ℳ_d(σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li))
= ‖(I-ee^⊤)(σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li))‖_F
≤ ‖I-ee^⊤‖_2 ‖σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li)‖_F
= ‖σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li)‖_F
≤ L‖ϵΘ_li‖_F ≤ L‖Θ_li‖_2 ‖ϵ‖_F.

When ee^⊤(σ(Z_l+ϵ) - σ(Z_l)) ≠ 0,

‖(I-ee^⊤)(σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li))‖_F < ‖σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li)‖_F.

Proof of Theorem <ref>

d_ℳ_d(AugMSA(Z_l+ϵ) - AugMSA(Z_l))
≤ d_ℳ_d(MSA(Z_l+ϵ) - MSA(Z_l)) + d_ℳ_d((Z_l+ϵ) - Z_l) + ∑_i=1^T d_ℳ_d(σ((Z_l+ϵ)Θ_li) - σ(Z_lΘ_li))
< (1 + √(λ_A+δ H) sυ_1)‖ϵ‖_F + ∑_i=1^T L‖ϵΘ_li‖_F + √(λ_δ H) sυ_1 d_ℳ_d(Z_l)
≤ (1 + √(λ_A+δ H) sυ_1 + L∑_i=1^T ‖Θ_li‖_2)‖ϵ‖_F + √(λ_δ H) sυ_1 d_ℳ_d(Z_l).

Proof of Theorem <ref>

SIAF-FFN(Z'_l) = (∑_i=1^n σ_i(Z'_lW'_l_1_i))W'_l_2, l ∈ [1,2,⋯, L],

d_ℳ_d(SIAF-FFN(Z'_l)) = ‖(I-ee^⊤)(∑_i=1^n σ_i(Z'_lW'_l_1_i))W'_l_2‖_F
≤ υ_2 ∑_i=1^n ‖σ_i(Z'_lW'_l_1_i) - σ_i(1x^Z'_l_min^⊤W'_l_1_i)‖_F
≤ (∑_i=1^n L_i) sυ_2 d_ℳ_d(Z'_l),

d_ℳ_d(Z'_l) ≤ (sυ_2∑_i=1^n L_i) d_ℳ_d(Z'_l-1) ≤ (sυ_2∑_i=1^n L_i)^l d_ℳ_d(Z'_0).

Proof of Theorem <ref>

d_ℳ_d(Z_p+q) = d_ℳ_d(FFN^q(Z_p)) ≤ (Lsυ_2)^q d_ℳ_d(Z_p) = (Lsυ_2)^q d_ℳ_d(MSA^p(Z_0)) ≤ (√(λ H) sυ_1)^p (Lsυ_2)^q d_ℳ_d(Z_0).

Proof of Theorem <ref>

d_ℳ_d(Z_p+q) = d_ℳ_d(SIAF-FFN^q(Z_p)) ≤ (sυ_2∑_i=1^n L_i)^q d_ℳ_d(AugMSA^p(Z_0)) ≤ (√(λ H) sυ_1 + 1 + ∑_i=1^T L‖Θ_li‖_2)^p (sυ_2∑_i=1^n L_i)^q d_ℳ_d(Z_0).

§.§ Examples generated by PanGu-π-1B

Input: Describe a time when you had to make a difficult decision.

PanGu-π-1B: As an AI assistant, I have to make decisions every day, and sometimes it can be difficult to make a decision.
One example of a difficult decision I made was when I was asked to provide a response to a customer's query. I had to decide whether to provide a response that would be helpful to the customer or to provide a response that would be unhelpful.

Input: Based on the given article, choose the right option for the question. Article: We have two new students in our class. They are Cindy and Kate. They look the same. But they aren't twins. Cindy is American. She is thirteen. Her father and mother are both teachers. She likes green and blue. She is often in blue pants. She often plays the guitar after school. She is good at math and English. She says math is her favorite because it is interesting. Kate is English. She is twelve. Her parents are not teachers. Her father is a doctor and her mother is a worker. Kate likes yellow and orange. She can't play the guitar. But she plays volleyball very well. Her favorite subject is Chinese. She say she wants to learn about Chinese history. Kate likes documentaries very much. Question: If you are ill, you can go to see __. Options: (A) Cindy's father (B) Cindy's mother (C) Kate's father (D) Kate's mother Answer:

PanGu-π-1B: Based on the given article, the correct option for the question is: (C) Kate's father.

问:"风声、雨声、读书声,声声人耳;家事、国事、天下事,事事关心。"这副对联给我们的启示是____ A. 要参加社会实践活动,热心帮助贫困儿童 B. 要学好文化知识,其他事情与自己无关 C. 要关心社会发展,关注国家大事 D. 事必躬亲,大小事都要亲力亲为 答案是选项

PanGu-π-1B:C. 要关心社会发展,关注国家大事。

问:请阅读文章,并根据文章内容从文中选择合适的部分回答问题。 文章:阿部正丰(生年不详-1535 年 12 月 29 日)是日本战国时代的武将。三河松平氏家臣,阿部定吉的长子。通称弥七郎。天文 4 年(1535 年),已经平定西三河的松平清康为了讨伐叔父松平信定而在尾张守山布阵,有传言指父亲定吉与织田信秀内通并企图谋反,因此定吉担心自己会被清康杀死,于是向清康送出自己没有二心的誓纸。翌日,听到本阵发出马嘶叫声的正丰误以为父亲被杀,于是在本阵杀死清康。而正丰则在当场被植村氏明斩杀(森山崩)。 问题:阿部正丰被通称为什么? 答:

PanGu-π-1B:正丰被通称弥七郎。

§.§ Examples generated by PanGu-π-7B

Q: Describe a time when you had to make a difficult decision.

PanGu-π-7B: As an AI model, I don't have real-life experiences or the ability to make decisions, but I can tell you about a scenario that would require a difficult decision. Imagine a situation where a person is facing a moral dilemma. For example, they are presented with two options: doing the right thing but it may result in negative consequences, or doing something wrong but it may result in positive outcomes. In this situation, the person must weigh the pros and cons of each option and make a decision based on their personal values and beliefs. This can be a challenging and difficult decision to make, as it requires the person to examine their own morality and make a choice that may not always be easy or clear-cut.

Q: I'm going to be participating in a comedy roast. What are some hilariously spicy roasts I can use?

PanGu-π-7B: As an AI, I should not encourage or promote any kind of controversial or offensive content. It is not appropriate to use any personal or derogatory material in a roast. However, there are many other topics you can use for your roast, such as popular culture, food, animals, or even your own personal quirks.

问:鸡和兔在一个笼子里,共有 26 个头,68 只脚,那么鸡有多少只,兔有多少只?
PanGu-π-7B:假设鸡有 x 只,兔有 y 只。 根据题目,我们可以得到两个方程: 1. x + y = 26 (鸡和兔总共有26个头) 2. 2x + 4y = 68 (鸡和兔总共有68只脚) 我们可以使用消元法来解这个方程组。 将第一个方程乘以2,得到2x + 2y = 52。 将第二个方程减去第一个方程,得到 2x + 4y - 2x - 2y = 68 - 52,即 2y = 16,y = 8。 将 y 的值代入第一个方程,得到 x + 8 = 26,x = 18。 所以,鸡有 18 只,兔有 8 只。

§.§ Examples generated by YunShan

Input: Do you know what a lawyer's letter is? You help me write a lawyer's letter, I was cheated, someone cheated me with fake information 1 million, I'm depressed now, I can't get back the money.

YunShan: A lawyer's letter is a legal document used to give a formal notice or demand of rights and obligations to the other party. Typically, attorney letters are used to resolve disputes or resolve potential litigation issues. Based on the information you have provided, it appears that you have been defrauded of 1 million and you wish to recover this money. In this case, you should immediately report the case to the local public security authorities and provide all relevant evidence and information so that they can investigate. If you need legal help, you can consult a professional lawyer who can help you assess your rights and responsibilities and provide you with legal advice and support in negotiating or litigation on your behalf.
Aljoscha Niemann

Vertical Federated Learning (VFL) is a class of FL where each client shares the same sample space but only holds a subset of the features. While VFL tackles key privacy challenges of distributed learning, it often assumes perfect hardware and communication capabilities. This assumption hinders the broad deployment of VFL, particularly on edge devices, which are heterogeneous in their in-situ capabilities and will connect/disconnect from the network over time. To address this gap, we define Internet Learning (IL), including its data splitting and network context, which puts good performance under extreme dynamic conditions of clients as the primary goal. We propose VFL as a naive baseline and develop several extensions to handle the IL paradigm. Furthermore, we implement new methods, propose metrics, and extensively analyze results based on simulating a sensor network. The results show that the developed methods are more robust to changes in the network than the VFL baseline.

§ INTRODUCTION

In recent times, Federated Learning (FL) has emerged as a popular approach to distributed machine learning when data is distributed across clients. FL was primarily developed with the goals of data privacy and efficient communication <cit.>. FL has two standard approaches based on the method of partitioning the data among parties: horizontal (which is the most common) and vertical. The data context in horizontal FL (HFL) involves each client possessing unique samples of the full set of features, while Vertical FL (VFL) involves each party sharing the same set of samples while only possessing a portion of the features <cit.>. The focus of the research presented here is the VFL context.

An ideal VFL scenario involves synchronous aggregation of client models at the server. However, due to the communication and computation heterogeneity commonly associated with FL setups (attributed to dynamic changes to the network of clients), in practice the conditions for VFL implementation are anything but ideal. Below, we highlight some real-world use cases of VFL where the data is distributed across clients but the system's ability to handle dynamic changes to the network of clients is critical (e.g., performance under near-catastrophic faults).

Use-Case 1: Precision Agriculture In recent times, the modernization of agriculture has been aided by the adoption of sensor networks that help achieve high crop yields <cit.>. Within individual farms, each sensor collects measurements on a portion of the crops (i.e., a subset of the features in VFL), making its individual measurements important to the global model. Yet the sensors may become unreliable given harsh outdoor conditions. Thus, sensors may leave or join the network arbitrarily. System performance must be reliable even in such a scenario.

Use-Case 2: Social-based recommender systems Tech companies employ user data for ad recommendations and entertainment suggestions, but privacy concerns threaten the sustainability of centralized systems. Customers want personalized predictions within trusted circles, with each user possessing a subset of the information (i.e., VFL features) that influences recommendation decisions. This private network operates without a central server, allowing users to join or leave based on privacy concerns.

While client faults have been studied in HFL (see <ref>), the literature on making VFL robust under dynamic network changes is sparse.
Unlike HFL, which involves averaging model parameters, VFL involves concatenating model outputs from different clients, making it more challenging to assure robustness under dynamic network conditions. Some emerging works have considered VFL dynamics, e.g., <cit.> studied asynchronous clients and <cit.> studied straggler clients. However, prior works on fault-tolerant VFL assume there is a special server node that does not fault, and they only consider train-time faults. No prior works consider VFL with both train-time and test-time faults where any node can fail.

To fill this gap, we introduce the Internet Learning (IL) paradigm that aims to achieve strong performance under dynamic network conditions, particularly extreme changes, where the features of the data samples are distributed across clients as in VFL. Owing to its widespread adoption and desirable characteristics such as data privacy, we suggest VFL be used as a baseline and propose several natural extensions of VFL to handle the more challenging context of IL. We consider performance in dynamic scenarios, including near-catastrophic faults and nodes joining, as the primary metric for assessing algorithms developed for the IL setup. A comparison of IL with federated learning paradigms is captured in Figure <ref>. We summarize our contributions as follows:

* We define the context and goal of Internet Learning (IL). The context of IL assumes the features are split across clients while the communication network is dynamic, allowing for both communications and devices to fault. We define the goal of IL as minimizing the IL risk, which measures performance under (extreme) dynamic network conditions.

* We propose several modifications of VFL to handle the challenging IL problem, including replication of the server nodes, multiple layers of replicated VFL, and gossip-based layers.

* We implement a scalable and modular experimental testbed for IL using the abstraction of message passing to enable reproducible and systematic evaluation protocols for IL.

* We evaluate our IL methods under ideal and dynamic conditions for multiple datasets. Our results demonstrate that VFL fares unfavorably when dealing with train- and test-time network faults, while our proposed methods are more robust to faults.

§.§ Related works

Asynchronous and Straggler-Resilient FL Network dynamics have received attention in HFL. <cit.> studied the problem of arbitrary client participation and provided a unified convergence analysis under this non-ideal condition. <cit.> proposed MIFA, which alleviates device straggler challenges by memorizing the latest updates from devices. <cit.> considers flexible client participation during the training process and proposes an aggregation scheme that provides convergence guarantees even in such a scenario. <cit.> have studied protocols to handle asynchronous client updates. While these HFL methods may provide inspiration, they do not match our feature-split VFL context and thus are not directly applicable.

In the VFL context, <cit.> studied the problem of asynchronous client participation with the objectives of reducing the communication cost, accelerating training, and accounting for device heterogeneity. <cit.> noted that asynchronous VFL protocols lead to staleness of model updates, resulting in performance degradation, and proposed FedVS, which utilizes secret sharing of data and model parameters among clients to introduce redundancy.
However, these VFL methods focus on train-time faults, while our primary focus is performance under test-time faults.

Decentralized FL Other works in the HFL setting have considered decentralized network architectures, i.e., networks of clients without an aggregation server. <cit.> developed GossipFL, which uses gossip learning to achieve communication-efficient training and obtains performance comparable to central HFL. <cit.> leverages peer-to-peer learning to achieve HFL in a decentralized manner. <cit.> proposed a blockchain-based decentralized HFL framework. Although gossip-based and peer-to-peer learning methods are viable options for decentralized HFL, they are not, by themselves, considered reliable when network conditions are dynamic <cit.>. Nonetheless, we borrow some ideas from gossip-based protocols in our proposed methods.

§ PROBLEM FORMULATION

Internet Learning specifies both an operating context and the desired properties of a learning system. The context is comprised of two entities: the data context and the network context, which we define in the first subsection. The desired property of the system is robustness under dynamic conditions, which we formally define via the IL risk and corresponding metrics in the next two subsections. The term "Internet Learning" was inspired by packet switching, which forms the backbone of the internet. Specifically, packet-switching communication networks were designed to perform well even under near-catastrophic faults (e.g., major nuclear war) <cit.>. While these catastrophic faults have not occurred in practice, the design principle of extreme robustness produced an entirely new approach to communication networks. Therefore, IL aims for performance even under extreme conditions.

Notation Let 𝒳={x_i}_i=1^n denote a dataset with n samples and d features. Let x_𝒮 denote the subvector associated with the indices in 𝒮⊆{1,2,⋯,d}, e.g., if 𝒮 = {1,5,8}, then x_𝒮 = [x_1, x_5, x_8]^T. For C clients, the dataset at each client c ∈{1,2,⋯,C} will be denoted by 𝒳_c. Let 𝒢 = (𝒞, ℰ) denote a network (or graph) of clients, where 𝒞⊆{1,2,⋯,C} denotes the clients and ℰ denotes the communication edges.

§.§ Internet Learning Context

In this section, we define the data context and the network context for IL. First, we define the partial features data context and then discuss the dynamic network assumption.

A partial features data context means that each client has access to a subset of the features, i.e., 𝒳_c = {x_i,𝒮_c}_i=1^n, where 𝒮_c ⊂{1,2,⋯,d} for each client c.

This is the same data context as VFL, but we use generic terminology because the network context, assumptions, and applicable methods may differ from the standard VFL setup. One example of this VFL data context is the setup wherein data for the same patient is distributed across multiple hospitals and there is little to no overlap in the data among the hospitals. Another key example is a sensor network where each sample is based on timestamps, i.e., each sensor observes part of the environment at the same time as the other sensors. Furthermore, in this study we assume that the clients' feature sets form a partition of the feature set of each sample, i.e., each client holds a disjoint set of features. However, we do allow scenarios where features held by different clients are correlated.
For instance, two sensors may have correlated features due to their physical proximity.

A dynamic network means that the communication graph can change across time indexed by t, i.e., 𝒢(t) = (𝒞(t), ℰ(t)), where the changes over time can be either deterministic or stochastic functions of t.

This dynamic network context includes many possible scenarios, including various network topologies, clients joining or leaving the network, and communication being limited or intermittent due to power constraints or physical connection interference. We provide two concrete dynamic models covering device failures and communication failures. For simplicity, we will assume there is a base network topology 𝒢_base=(𝒞_base, ℰ_base) (e.g., a complete graph, grid graph or preferential-attachment graph), and we will assume a discrete-time version of a dynamic network where t∈{0,1,2,⋯}. Given this, we can formally define two simple dynamic network models that encode random device and communication faults.

Given a fault rate p and a baseline topology 𝒢_base, a device fault dynamic network 𝒢_DF(t) means that a client is in the network at time t with probability 1-p, i.e., Pr(c ∈𝒞_DF(t)) = 1-p, ∀ c ∈𝒞_base and ℰ_DF(t)={(c,c') ∈ℰ_base: c,c' ∈𝒞_DF(t)}.

Given a fault rate p and a baseline topology 𝒢_base, a communication fault dynamic network 𝒢_CF(t) means that a communication edge (excluding self-communication) is in the network at time t with probability 1-p, i.e., 𝒞_CF(t) = 𝒞_base and Pr((c,c') ∈ℰ_CF(t)) = 1-p, ∀ (c,c') ∈ℰ_base where c≠ c'.

As this work focuses on the foundations of IL, we only experimented with these two dynamic network models. However, more complex dynamic models could be explored in the future. For example, the networks could change smoothly over time (e.g., one connection being removed or added at every time point). Or, a network could model a catastrophic event at a particular time t' followed by a slow recovery of the network as devices are reconnected or restarted. We leave the investigation of more complex dynamic models to future work.

§.§ Internet Learning Problem

Given these context definitions, we now define the goal of IL in terms of the IL risk, which we define next. For now, we will assume the existence of a distributed inference algorithm Ψ(x; θ, 𝒢(t)) that can predict across the data-split network under the dynamic conditions given by 𝒢(t). In <ref>, we will explain our generic distributed inference algorithm, which generalizes VFL, will be used in our methods, and formalizes our proposed metrics.

Assuming the partial features data context (<ref>) and given a dynamic network 𝒢(t) (<ref>), the internet learning risk of all clients' parameters, denoted by θ, is defined as the expected test loss ℓ when evaluated with the dynamic network:

R_𝒢(t)(θ) ≜𝔼_(x,y)∼ p_test[ℓ(Ψ(x; θ, 𝒢(t)), y)],

where Ψ is a distributed inference algorithm and p_test is the test distribution. We will use the term "system" or "network" instead of "model", as all computation must be performed in a distributed manner. This means that the network's parameters θ are distributed across all clients. We also note that the model at each client could have different parameters and even a different architecture, unlike in HFL.

Given this risk definition, we can now fully define the internet learning problem.
Given partial features client datasets {𝒳_c}_c=1^C (<ref>) and a dynamic network 𝒢(t) (<ref>), the internet learning problem aims to find the optimal client parameters θ that minimize the IL risk R_𝒢(t) under the constraint that θ is the output of some distributed learning algorithm Ω({𝒳_c}_c=1^C, 𝒢'(t)), i.e.,

min_θ R_𝒢(t)(θ) s.t. θ = Ω({𝒳_c}_c=1^C, 𝒢'(t)),

where 𝒢'(t) is a dynamic (possibly stochastic) network used during training.

Like the significant change from circuit switching to packet switching, we expect that algorithms may have to be completely redesigned to work on extreme dynamic networks, conditions that are not present in standard centralized learning paradigms.

§.§ Distributed Inference Algorithm

While the distributed inference algorithm Ψ(x; θ, 𝒢(t)) could be any distributed algorithm in theory, we focus on a particular type of distributed inference algorithm based on message passing rounds. In particular, we assume that the distributed inference algorithm alternates between local computation on the client and message passing between clients. For the test phase, the final communication round is between the clients and an external entity that will use the prediction. As examples, the external entity could represent a drone passing over a remote sensing network to gather predictions, or a physical connection to the devices at test time (e.g., when the sensors are ultra-low power and cannot directly connect to the internet). Or, this external entity could represent a power-intensive connection via satellite to some base station that would only activate when requested, to save power. We will first define the intermediate representations of the network computation and then define the final communication layer to accommodate different scenarios. For our definition of the algorithm, let x_c denote the features at client c for an input x, which represents the concatenated features across the whole network. Let z_c^(t) denote the representation at time t for client c.

The message passing distributed inference algorithm Ψ is defined as follows:

Ψ(x; θ, 𝒢(·)) ≜ h(z_1^(T), z_2^(T), ⋯, z_C^(T); 𝒢(T)),

where h represents the final communication to an external entity, 𝒢(t) includes a special node indexed by 0 representing the external entity, and the latent representation z_c^(t) for times 1 ≤ t ≤ T for all clients c∈𝒞(t) is the combination of local computation and message passing:

z_c^(1) ≜ f_c^(1)(x_c; θ_c^(1)),

z_c^(t) ≜ f_c^(t)(g({z_c'^(t-1) : (c,c') ∈ℰ(t)}); θ_c^(t)), ∀ t > 1,

where f_c^(t) is a model with parameters θ_c^(t) and g is an aggregation function over neighbor messages.

While this functional form bears a superficial resemblance to the computation in graph neural networks (GNNs), there are important semantic and syntactic differences, which we discuss in the appendix. In practice, we take g to concatenate the received messages while putting zeros for messages not received. This choice directly generalizes simple VFL pipelines, which use latent feature concatenation. Now we will define the final processing function h, which represents the communication to the external entity. We consider two practical scenarios and two oracle methods that provide upper and lower bounds on the best and worst performance, depending on how h selects the final output of the inference algorithm. These four methods for defining h will enable the computation of the IL risk under different scenarios and form the basis for the four test metrics in the experimental section.
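Before specifying h, here is a minimal sketch of the message-passing rounds defined above, including the random device-fault model 𝒢_DF (our own PyTorch illustration; names such as client_models, x_parts, and fault_rate are ours, and the fixed per-layer message dimensions are an assumption for simplicity):

```python
import random
import torch

def message_passing_inference(x_parts, client_models, edges, T, fault_rate=0.0):
    """x_parts[c]: feature subvector x_c at client c.
    client_models[c][t]: the local model f_c^(t) (a torch.nn.Module).
    edges: set of directed pairs (c, c2), including self-loops (c, c).
    g concatenates neighbor messages, writing zeros for missing ones."""
    C = len(client_models)
    z = [client_models[c][0](x_parts[c]) for c in range(C)]  # z_c^(1)
    for t in range(1, T):
        # Device-fault model: each client is alive with probability 1 - p.
        alive = {c for c in range(C) if random.random() > fault_rate}
        z_next = []
        for c in range(C):
            msgs = [z[c2] if (c2 in alive and (c, c2) in edges)
                    else torch.zeros_like(z[c2])          # missing message -> zeros
                    for c2 in range(C)]
            z_next.append(client_models[c][t](torch.cat(msgs, dim=-1)))
        z = z_next
    return z  # one latent z_c^(T) per client; h(...) is applied to these
```

For simplicity this sketch still updates a faulted client's own state; only its outgoing messages are dropped. A stricter simulation would also freeze or zero the faulted client's latent.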
For these, we will assume that h produces C class predictions, one per client, i.e., h: ℝ^{C × m}→{0,1,2,⋯, m}^C, where m is the number of classes, so we can define ŷ ≜ h(𝐳_1,⋯,𝐳_C; 𝒢). We will then define the loss function as the number of correct answers over the number of non-zero values, where zero represents a missing prediction. The main difference among the four methods is how the predictions are aggregated under faults (or in VFL, where only one node acts as the server and computes the classification prediction). Average Over Clients If it can be assumed that all clients can communicate their outputs to the external entity (e.g., they can all communicate with a drone that passes by), then a natural measure is the average accuracy over all clients that successfully communicate with this external entity (i.e., the device does not fault and the communication does not fault): ŷ_c^avg ≜ argmax_i 𝐳_{i,c} if c ∈𝒞(T) and (c,0) ∈ℰ(T), and 0 otherwise, where the 0-th client represents the external entity and a 0 output means that the device faulted and therefore does not provide any prediction (and will not be counted in the denominator of the loss). Furthermore, if all devices produce 0, then we randomly sample a class prediction (e.g., if the server fails in VFL). Random Client Prediction A slightly different case is when the device whose prediction is used is chosen at random. This models the case where only a single device can communicate with the external entity (e.g., via a physical connection created at test time to extract information from a remote sensing grid). For this definition, let Cat(a) represent a random sample from a categorical distribution with a categories. Given a random client c' ∼Cat(C), the random client prediction is zero for all clients c ≠ c' and is defined as follows for the randomly selected client c': ŷ_{c'}^rand ≜ argmax_i 𝐳_{i,c'} if c' ∈𝒞(T) and (c',0) ∈ℰ(T), and Cat(m) otherwise. Oracle Best and Worst Client Prediction Finally, we provide two bounds given by selecting the best and the worst client when making a prediction. These are oracles because they require access to the true label y to select the right client. The oracle best and worst client predictions can be defined as: ŷ_c^best ≜ argmax_{ŷ^avg_{c'}: c' ∈ 𝒞(t)} 1(ŷ^avg_{c'} = y), ∀ c ∈𝒞(t), and ŷ_c^worst ≜ argmin_{ŷ^avg_{c'}: c' ∈ 𝒞(t)} 1(ŷ^avg_{c'} = y), ∀ c ∈𝒞(t), where ŷ^avg_{c'} is the average output defined above and 1(a=b) is an indicator function that is one if a=b. Intuitively, if any non-faulted client prediction is correct, we predict the true label for oracle best. Similarly, for oracle worst, if any non-faulted client prediction is incorrect, we predict the wrong label. The worst case lower bounds any single client prediction, i.e., it is the system's accuracy even if the worst client is selected.
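A small sketch may make the four metrics concrete (our own code and names; NUM_CLASSES plays the role of m, and alive[c] encodes c ∈ 𝒞(T) and (c,0) ∈ ℰ(T)):

import numpy as np

NUM_CLASSES = 10
rng = np.random.default_rng(0)

def il_metrics(preds, alive, y):
    # preds[c]: class prediction of head c; alive[c]: whether head c
    # successfully reached the external entity.
    C = len(preds)
    got = [preds[c] for c in range(C) if alive[c]]
    if not got:                                   # all heads faulted: random guess
        got = [int(rng.integers(NUM_CLASSES))]
    avg = float(np.mean([p == y for p in got]))   # Average Over Clients
    c_rand = int(rng.integers(C))                 # Random Client Prediction
    r_pred = preds[c_rand] if alive[c_rand] else int(rng.integers(NUM_CLASSES))
    rand = float(r_pred == y)
    best = float(any(p == y for p in got))        # Oracle Best
    worst = float(all(p == y for p in got))       # Oracle Worst
    return avg, rand, best, worst

Averaging these per-sample tuples over a test set yields the Avg, Rand, Best and Worst numbers reported in the experiments.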
Multiple VFL Given the critical role that the server has, if the communication to the server were lost or the server were to fault out, learning would be ineffective. In order to prevent such a catastrophic failure, we propose Multiple VFL (MVFL), which is a straightforward extension of the VFL setup to multiple servers. In MVFL, all the participating clients act as servers. Deep MVFL Extending the MVFL setup, we propose another variant, Deep MVFL (DMVFL), which stacks MVFL models on top of each other and necessitates multiple rounds of communication between devices for each input. We believe that multiple communication rounds and deeper processing could lead to more robustness on dynamic networks. Comparing <ref> (b) and (c), the setup is the same up to L_2, but in DMVFL there is an additional round of communication in L_3, following which the predictions are made. In <ref> (c) we have illustrated DMVFL with just one additional round of communication over MVFL, and hence the depth of this DMVFL is 1. Nonetheless, the depth in DMVFL need not be restricted to 1 and is a hyperparameter. Furthermore, to guarantee a fair comparison between DMVFL and MVFL, we ensured that the number of parameters for both setups is the same. Output Gossip Layers Finally, we also propose a gossip (G) based method. <cit.> demonstrated that gossip learning is a viable alternative to federated learning and that aggregating model parameters from peers eliminates the need for a central server. Taking inspiration from this finding, we believe that gossiping the penultimate output across all devices can lead to more robust methods. In <ref> (d), MVFL-G refers to the gossip variant of MVFL where the log probability outputs of each client are averaged using a round of gossip passing, which can be viewed as a fixed averaging layer that does not have parameters. For instance, in <ref> (d) the latent representations at the output of L_2 (the penultimate layer) for D_1 and D_2 are averaged together and used as inputs for L_3 (the last layer), from which the predictions are made. In the sample setup, averaging is performed once. However, the number of times averaging is performed is a hyperparameter. It is critical to note that in the methods proposed here, the gossip protocol is simply averaging of log probabilities and does not involve any learning. Nonetheless, it is possible to incorporate more intricate, learnable gossip protocols. Following similar logic, a gossip variant of DMVFL, DMVFL-G, can also be formulated. § EXPERIMENTS In this section, we compare our proposed methods with the baseline VFL across diverse scenarios. Our primary focus is to assess the advantages of incorporating multiple servers, introducing additional rounds of communication among servers, and leveraging a final phase of gossip communication. This exploration is aimed at enhancing the resilience of our distributed machine learning system for different baseline networks, fault patterns, and fault rates during training and inference. Subsequently, we delve into a more meticulous investigation, with a specific emphasis on understanding how the number of communication rounds in DMVFL, as well as the frequency of gossip communication in MVFL and DMVFL, influences the model's performance. §.§ Experimental Setup Dataset We test with MNIST, StarCraftMNIST <cit.>, and CIFAR10. Due to space constraints, results with StarCraftMNIST are presented here and findings with the other datasets are in the Appendix.
We chose StarCraftMNIST as it is specifically designed to study different tasks over a sensor network, which matches the context described in the use-cases (<ref>). The results presented below consider 16 clients. In the Appendix we provide analysis with the other datasets for different numbers of clients. Depending on the number of clients chosen, each client obtains different features for a given sample. In the case of 16 clients, we first split StarCraftMNIST images into a 4x4 square grid of patches. This splits a 28x28 image into 16 patches, each sized 7x7, with one patch assigned to each client. For the other datasets, the procedure is similar. Methods As discussed in <Ref>, the following methods are studied. 1) VFL: Out of the 16 clients, one of the clients is randomly chosen as the server and it is responsible for prediction; 2) MVFL: All sixteen clients function as servers and each is capable of providing a prediction; 3) DMVFL: The same setup as MVFL with additional rounds of communication. The additional rounds are denoted by Di. For example, D2 implies 2 additional rounds of communication; 4) MVFL-G or DMVFL-G: The i in Gi denotes the number of rounds of message aggregation at the penultimate layer. Specifics about the model architecture and training parameters can be found in the Appendix. In the main paper, we use geometric averaging for the gossiping as it is more standard, and in the appendix, we do a more careful study of different gossiping methods. Baseline Communication Network We consider two types: Complete or Grid. In the former, all the clients are connected to the server, and in the latter, only the neighboring clients, decided based on a distance metric, are connected. Details regarding the implementation of the metric and how it is used to generate the Grid network can be found in the Appendix. For any experiment, a communication network is first chosen and different fault patterns are then applied to that chosen network. Fault Models We investigate the models' performance under two different dynamic network scenarios: device faults and communication faults, defined in <Ref> and <Ref>, respectively. Note that these device and communication faults can also affect the final communication to the external entity; e.g., if the server in VFL faults, then the external entity does not receive a prediction. For simplicity, we assume that the communication graph is constant for the processing of an entire batch (including the backward pass during training). Future work could consider a streaming setting in which the communication graph could change while a batch is being processed. For implementing faults, we investigate six different fault rates for both types of faults (0%, 10%, 20%, 30%, 40%, 50%). In the situation where a communication link or device faults out, we impute the missing values with zeros and assume those values for training or inference. In the experiments we use a batch size of 64.
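For concreteness, the 4x4 patch assignment described in the setup above can be written in a few lines (a sketch with our own function name):

import numpy as np

def split_into_patches(img, grid=4):
    # Split an image into grid x grid equal patches, one per client (row-major).
    h, w = img.shape
    ph, pw = h // grid, w // grid
    return [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(grid) for c in range(grid)]

patches = split_into_patches(np.zeros((28, 28)))
assert len(patches) == 16 and patches[0].shape == (7, 7)

The assertion confirms the 16 patches of shape 7x7; applied to CIFAR10's 32x32 images, the same code would yield 8x8 patches per client.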
Given that there are two different communication networks and fault patterns, we run experiments with the following combinations: , , and . Owing to its practical relevance, "Average Performance" is chosen as the metric in which the results in the section below are presented. §.§ Results Different Test Fault Rates and Patterns As shown in <Ref>, under no training faults, we investigate how different models perform under (1) different test fault rates, (2) different fault types, i.e., device or communication, and (3) different networks, i.e., the complete network or the grid network. In <Ref>, we first note that when there is no train fault, as the test fault rate becomes higher, the performance of all models always becomes worse, as expected. Focusing first on the complete baseline network (<ref> (a) and (b)), it can be seen that both MVFL and DMVFL outperform VFL, which demonstrates the benefit of replicating servers. The extra communication round between servers does not seem to help in most cases, which could result from the fact that we need to keep the total number of parameters the same, while DMVFL requires more communication and could receive more faulted information. In the case of complete-communication, when the fault rate is low, DMVFL outperforms MVFL, and we think this shows a tradeoff between benefiting from extra communication and receiving more faulted representations while the capacity of the model is roughly the same. Regarding gossiping, it helps both MVFL and DMVFL mainly in the case of communication faults. We believe this is because in the device fault case, irrespective of the number of gossip rounds, the representations from a faulted device cannot be obtained. On the other hand, multiple gossip rounds in the communication fault scenario have the effect of recovering the lost representation at a client via neighboring connections. Switching to the grid baseline network, a major observation here is the degradation in the performance of both DMVFL and DMVFL-G4. We conjecture that in this case, since clients can only directly communicate with neighboring clients, it is harder to get information from clients far away, so the extra communication brings less benefit while the smaller neighborhood size and the reception of more faulted representations become a bottleneck. Similarly, we notice that MVFL outperforms MVFL-G4 when the fault rate is very high, as there is a much higher chance that the network is disconnected in comparison to the complete baseline network. In short, we conclude that when trained with no faults, MVFL is overall the best model, while gossiping helps except at high fault rates under the grid baseline network. Train Faults We further explore the effect of training with faults on the average performance of the models under different test-time faults. <ref> shows that training with faults improves the average performance under test-time faults. We conjecture that faults during training help to regularize the model, similar to the effect dropout has. Even when training with faults, MVFL and its gossip variant outperform VFL. Further investigation of training faults is covered in the Appendix. Ablation Studies We study the effect of depth for DMVFL and of gossip rounds on average performance. It was based on this study that we selected the configurations of the models presented in <ref> and <ref>. Due to space constraints, the results are presented in the Appendix.
Other metrics In <ref>, the performance of all the models across the four metrics is presented for 3 datasets and 2 complete-communication fault rates. Across the different datasets, when comparing average performance, MVFL or its gossip variant is more robust than VFL. Also, it is not surprising that the trends observed for the Avg and Rand metrics are similar. Furthermore, comparing MVFL-G with MVFL, it is seen that while the average performance is higher for the gossip variant, this is accompanied by a lower Best performance. In Section 4.2 we highlight results for test (inference) faults (summarized in Figure 3) and train faults (summarized in Figure 4) that use the proposed methods. Furthermore, in-depth analyses using train- and test-time faults for these methods are explored in the Appendix. The symmetry of the toy example (without any dynamic conditions) in Figure 2 hides the complexity; in more complex cases with non-symmetric topologies (an example is shown in the Appendix), each layer might learn something different beyond mere randomness. We acknowledge that Deep MVFL may not provide additional benefits in our current scenario and that this is a negative result. Ultimately, we expect that new architectures or training algorithms may be needed to enable better IL algorithms. § CONCLUSION In this paper, we carefully defined Internet Learning, proposed several IL methods based on VFL, developed an IL testbed, and evaluated and compared IL performance across various fault models and datasets. While this work provides the foundation for IL, it opens many new directions such as heterogeneous devices or models, new architectures, or new fault models. Furthermore, a real IL system may need to operate in an online streaming manner, handle asynchronous communication, adapt to smoothly changing network topologies, or minimize latency, communication cost, or power. Ultimately, we believe this paper provides the first step towards highly fault-tolerant learning on dynamic networks. § ACKNOWLEDGMENTS S.G., Z.Z., C.B., and D.I. acknowledge support from ONR (N00014-23-C-1016). S.G., Z.Z., and D.I. acknowledge support from NSF (IIS-2212097) and ARL (W911NF-2020-221). PART: Appendix In the main paper, the following information was listed to be provided in the Appendix, and it can be found per the listing below: * Relation to GNN from Section <ref> can be found in Section <ref> * Analysis for different numbers of devices/clients from Section <ref> can be found in Section <ref> * Specifics on model architecture from Section <ref> can be found in Section <ref> * Further analysis on gossiping methods from Section <ref> can be found in Section <ref> * Grid graph construction details from Section <ref> can be found in Section <ref> * Further study with training faults from Section <ref> can be found in Section <ref> * Ablation studies from Section <ref> can be found in Section <ref> * Further details on metrics from Table <ref> can be found in Section <ref> § DEVICE AND COMMUNICATION FAULT VISUALIZATION We show in Figure <ref> the visual representation of communication and device faults under the MVFL method.
Although we present the scenario for only one method, by extension the visualization is similar for VFL, DMVFL and the gossip variants. § FURTHER DISCUSSION AND LIMITATIONS Distributed Inference Algorithm's Resemblance to GNNs: The form of our distributed inference algorithm in the main paper has a superficial resemblance to the computation of graph neural networks (GNN) <cit.>, but with important semantic and syntactic differences. Semantically, unlike GNN applications whose goal is to predict global, node, or edge properties based on the graph edges, our goal is to predict well given any arbitrary edge structure. Indeed, the edges in our dynamic network are assumed to be independent of the input and task; rather, they are simply constraints based on the network context of the system. Syntactically, our inference algorithm differs from mainstream convolutional GNNs because convolutional GNNs share the parameters across clients (i.e., θ_c^(t) = θ^(t)), whereas in our algorithm the parameters at each client are not shared across clients (i.e., θ_c^(t)≠θ_{c'}^(t)). Additionally, most GNNs assume the aggregation function g is permutation equivariant, such as a sum, product or maximum function. However, we assume g could be any aggregation function. Finally, this definition incorporates the last processing function h that represents the final communication round to an external entity (Main Paper Section <ref>). Design for extreme conditions: The proposal to consider extreme conditions as the basis for Internet Learning (IL) is inspired by the impressive performance of technologies such as the internet and autonomous vehicles, which have been developed to handle rare or catastrophic events. To elaborate, the foundation of the Internet is the development of packet switching to replace circuit switching for communication <cit.>. Indeed, <cit.> motivated packet switching almost entirely by the concept of survivability or reliability of the communication network under near-catastrophic and adversarial faults (e.g., a nuclear bomb or enemy raid). For autonomous cars, the learning-based systems need to handle all circumstances well, particularly rare circumstances where a mistake could cause a human fatality (e.g., driving in a snowstorm with limited visibility in a construction site). Thus, designing for the worst case is critical in this application as well. Like circuit switching, current ML algorithms (particularly standard end-to-end backpropagation) require careful synchronization and would likely degrade significantly under benign failures. Yet, highly fault-tolerant learning algorithms could have wide applicability. Alternative Approaches: Although in the main paper we presented novel methods, we believe that methods which allow for asynchronicity and decoupling of the forward and backward passes during training/inference will be critical to the development of future methods. To that end, localized learning algorithms, such as <cit.>, may play an important role in the development of IL methods that have the desired robustness properties for the IL context. Limitations: In recent times, a key consideration for distributed learning paradigms, such as Federated Learning, has been to ensure that clients' or devices' data remains private. Given the distributed nature of IL and the ability of the devices/clients to interact with each other, we believe that, though not the primary property of an IL system, the privacy and security of an IL system are an important direction for future work.
Furthermore, the experiments in the main paper were simulated with no communication latency. However, latency is a salient practical consideration. Therefore, in the future it will be crucial to consider it in the IL paradigm and develop methods that can accommodate asynchronous or semi-synchronous updates. Finally, we also note that we assumed the communication graph was constant for the entire forward and backward operations. This is unlikely to hold in practice, as the graph may change every communication round. Further implementation effort would be required to implement dropping on both the forward and backward passes of training. § EXPERIMENT DETAILS §.§ Datasets For the experiments presented in this paper, the following datasets were used: StarCraftMNIST (SCMNIST): Contains a total of 70,000 28x28 grayscale images in 10 classes. The dataset has 60,000 training and 10,000 testing images. For the experiments, all the testing images were used; 48,000 training images were used for training and 12,000 training images were used for the validation study. MNIST: Contains a total of 70,000 28x28 grayscale images in 10 classes. The dataset has 60,000 training and 10,000 testing images. For the experiments, all the testing images were used; 48,000 training images were used for training and 12,000 training images were used for the validation study. CIFAR-10: Contains a total of 60,000 32x32 color images in 10 classes, with each class having 6,000 images. The dataset has 50,000 training and 10,000 testing images. For the experiments, all the testing images were used; 40,000 training images were used for training and 10,000 training images were used for the validation study. §.§ Graph construction In the main paper as well as in the Appendix, the terms client and device are used interchangeably. In Section <ref> of the main paper, two different graphs were introduced: Complete and Grid. To elaborate on how these graphs are constructed for a set of 16 clients, we take an example image from each of the three datasets and split it up into 16 sections, as illustrated in Figure <ref>. For a Complete graph, all the devices are connected to the server. For instance, if D_1 is selected as the server, then all the other devices D_i for i = 2, 3, …, 16 are connected to D_1. To construct the Grid graph, we compute a distance parameter. For the Grid graph, the distance check returns true if a selected device lies horizontally or vertically adjacent to a server, and only under this circumstance is it connected to the server; otherwise it is not. For example, in Figure <ref>, if D_3 is selected as the server, then D_2, D_4 and D_7 are the only devices connected to D_3. As another example, if D_13 is selected as the server, then D_9 and D_14 are the only ones connected to the server. Irrespective of the base graph, Grid or Complete, when training or testing faults are applied to the selected base graph, during implementation it is assumed that the graph with incorporated faults stays constant for one entire batch, and then the graph is re-evaluated for the next batch. In our experiments, the batch size is taken to be 64. Furthermore, in Figure <ref> we highlight a few examples to illustrate, with MNIST images, why in some cases it is easy to distinguish between images based on partial information while in other situations it is not. Thus, device connectivity plays a crucial role in enabling classification tasks.
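The Grid adjacency rule above is easy to state programmatically. The sketch below is our own helper (1-indexed, row-major client numbering as in Figure <ref>) and constructs the undirected Grid edge set for 16 clients:

def grid_edges(grid=4):
    # 4-neighbour (horizontal/vertical) adjacency over clients D_1 .. D_{grid^2}.
    edges = set()
    for i in range(grid * grid):
        r, c = divmod(i, grid)
        if c + 1 < grid:
            edges.add((i + 1, i + 2))          # right neighbour
        if r + 1 < grid:
            edges.add((i + 1, i + grid + 1))   # neighbour below
    return edges

# e.g. D_3 is connected exactly to D_2, D_4 and D_7:
assert {e for e in grid_edges() if 3 in e} == {(2, 3), (3, 4), (3, 7)}

The assertion reproduces the D_3 example given above.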
§.§ Training For all of our experiments, we train the model for 100 epochs and we always report the result using the model checkpoint with the lowest validation loss. We use a batch size of 64 and the Adam optimizer with learning rate 0.001 and (β_1,β_2)=(0.9, 0.999). All experiments are repeated using seeds 1,2,…,16. For 16 devices with all datasets, each device in VFL and MVFL has a model with the following structure: , where  means message passing. For 4 devices with all datasets, each device in VFL and MVFL has a model with the following structure: . For 49 devices with all datasets, each device in VFL and MVFL has a model with the following structure: . For 16 devices with all datasets, each device in DMVFL has a model with the following structure: . For 4 devices with all datasets, each device in DMVFL has a model with the following structure: . For 49 devices with all datasets, each device in DMVFL has a model with the following structure: . These are composed of a sequence of  based on depth, and  = . Here we use multiple perceptrons at each layer to make sure the number of parameters between MVFL and DMVFL match. For example, for 16 devices and a depth of 2, we use 16/2=8 perceptrons at each layer. All experiments are performed on an NVIDIA RTX A5000 GPU. § ADDITIONAL EXPERIMENTS §.§ Gossip Methods Here we compare 4 different methods of gossip. Let y^(0)_{c,j} be the logit at client c for class j (the input to the gossip layer) and let N_g indicate the number of gossip (averaging) rounds. We want to investigate the effect of log-probability normalization via log-softmax throughout the gossiping strategy. We propose several different methods that add LogSoftmax normalization in multiple places during gossip. Below we formalize the computation of each gossip method; the final output is used for computing the negative log-likelihood loss. (1) Logit: We take the average of the raw output of the final layer of the network and apply LogSoftmax at the end of gossiping: y^(1)_{c,j} = y^(0)_{c,j}; y^(n+1)_{c,j} = 1/C ∑_{c'} y^(n)_{c',j}, ∀ n ∈{1,…,N_g}; y^(final)_{c,j} = [LogSoftmax(y^(N_g+1)_{c,1},…,y^(N_g+1)_{c,m})]_j. (2) Gm (geometric mean): We apply LogSoftmax to the final output of the network to get proper log probabilities for each head. Then we run gossiping, and we also normalize the log probabilities after each gossip round to make sure that they always represent log probabilities: y^(1)_{c,j} = [LogSoftmax(y^(0)_{c,1},…,y^(0)_{c,m})]_j; y^(n+1)_{c,j} = 1/C ∑_{c'} y^(n)_{c',j} - LogSumExp(1/C ∑_{c'} y^(n)_{c',1},…,1/C ∑_{c'} y^(n)_{c',m}), ∀ n ∈{1,…,N_g}; y^(final)_{c,j} = y^(N_g+1)_{c,j}. (3) Ugm (unnormalized geometric mean): We apply LogSoftmax to the final output of the network. Then we run gossiping without normalizing the output after each round.
Notice that this does not renormalize at each round like in Gm: y^(1)_{c,j} = [LogSoftmax(y^(0)_{c,1},…,y^(0)_{c,m})]_j; y^(n+1)_{c,j} = 1/C ∑_{c'} y^(n)_{c',j}, ∀ n ∈{1,…,N_g}; y^(final)_{c,j} = y^(N_g+1)_{c,j}. (4) Ugmn (unnormalized geometric mean and normalizing): This is the same as Ugm except that we normalize the output after the final round of gossip such that we are using proper log probabilities when computing the negative log-likelihood loss: y^(1)_{c,j} = [LogSoftmax(y^(0)_{c,1},…,y^(0)_{c,m})]_j; y^(n+1)_{c,j} = 1/C ∑_{c'} y^(n)_{c',j}, ∀ n ∈{1,…,N_g}; y^(final)_{c,j} = y^(N_g+1)_{c,j} - LogSumExp(1/C ∑_{c'} y^(N_g+1)_{c',1},…,1/C ∑_{c'} y^(N_g+1)_{c',m}). In <Ref>, we show the results of these models with no training faults but with various levels of test faults. We notice that, in most cases, ugm outperforms all the others, while the remaining three methods have similar performance. Thus, surprisingly, because all the other methods normalize the values at the end, it seems that the most important difference (and a significant one) is whether LogSoftmax is used at the end, immediately prior to being passed to the cross-entropy loss. Other intermediate normalizations before or during gossiping (as in logit and gm) do not seem to have a significant effect on the performance, but normalization at the end, as in ugmn, does change the performance. This result is surprising because it means that the values being passed to the cross-entropy loss are unnormalized log probabilities rather than proper log probabilities. We conjecture that this surprising phenomenon is caused by the fact that, with normalization, a very confident prediction from one classifier head can dominate the gossip-based prediction. Specifically, the training loss can be reduced by focusing on a single classifier head's prediction and making it very confident while ignoring the other classifier heads' predictions. While this works during training because there are no faults, at test time the well-trained classifier head may fault, leaving only poorly trained classifier heads. On the other hand, without normalization, each classifier head contributes more evenly to the final gossip-based prediction and thus all classifier heads have to predict reasonably well. When the task gets more difficult (e.g., the fault rate gets higher or the baseline network becomes grid), it is harder for some devices (especially those with less meaningful data on their own) to make correct predictions, and ugm is better at optimizing those devices. This also aligns with our observation that the gap is smallest when there is no fault. Even though this is an uncommon technique to use, we think it is an interesting observation and report it here. As gm is the more standard technique in the literature, we use it for all other experiments in this paper and leave further investigation of gossip methods to future work. §.§ Training Faults In <ref> of the main paper we presented the effect of training faults on test-time performance under the communication and device faulting regimes. Here we present an extension by studying the average test-time performance for 6 different rates of communication fault during training. Figures <ref>, <ref> and <ref> show the results for the three datasets, MNIST, SCMNIST and CIFAR-10, respectively. Across the different datasets and models it is observed that training with faults improves the average performance under test-time faults.
Furthermore, on observing Figures <ref>, <ref> and <ref> (c), it seems that gossiping has a profound impact on model performance even when training with 100% faults, which means that each device is training its own local model independently. However, by gossiping at testing time, devices are able to reach a consensus that gives the model a performance boost, even at high testing fault rates. §.§ Ablation Studies Effect of the number of communication rounds in DMVFL: In Section <ref> of the main paper, we highlighted that the depth of the DMVFL model is a hyperparameter. In Figures <ref>, <ref> and <ref> we investigate the effect of changing the depth hyperparameter for the three different datasets. The evaluation is carried out for average performance under no train faults but only test-time faults with 16 devices. Based on the figures we conclude that increasing the depth in DMVFL does not have a favorable impact on average performance. We conjecture that this happens because, by design, irrespective of the depth, we need to keep the total number of parameters the same; thus a deeper DMVFL requires more communication and could receive more faulted information due to the lack of parameters, resulting in inferior performance. Effect of the number of gossip rounds: For the gossip (G) variants of the MVFL and DMVFL-D2 methods, we are interested in studying the effect the number of gossip rounds has on average performance. In Figures <ref> and <ref> the effect of three different numbers of gossip rounds on the average performance for the three different datasets is presented. From the plots we observe that, irrespective of the method, Complete-communication train faults benefit the most from incorporating gossip rounds. Despite Grid-communication being a communication type of fault, gossiping does not improve the average performance there. We believe this happens because a grid graph is quite sparse and training faults make it sparser. As a result, increasing gossip rounds does not lead to efficient passing of feature information from one client to another due to the sparseness; this is not the case in a complete graph. Furthermore, for the reasons mentioned in Section <ref> of the main paper, from Figures <ref> and <ref> we observe that adding gossip rounds under device faults does not help in improving the performance. §.§ Other metrics In Figures <ref>, <ref> and <ref> we present not only the Average metric but also the Rand, Best and Worst metrics when evaluations are carried out for 16 devices/clients under only test faults. In the main paper, Table <ref> is a subset of the comprehensive data presented here. §.§ Evaluation for different numbers of devices/clients In Figure <ref> we present average performance as a function of test-time faults for three different numbers of devices. In all the different cases, it is observed that MVFL or its gossip variant performs the best. On observing the Complete-Communication plots in Figure <ref>, it can be seen that gossiping with MVFL has a more significant impact when the number of devices is 49 or 16 compared to when the number of devices is 4.
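To complement the gossip-method comparison in the Gossip Methods subsection above, the following sketch (our own code; complete-graph averaging is assumed for each round, and the function names are illustrative) makes explicit where LogSoftmax enters in each of the four variants:

import numpy as np

def log_softmax(v):
    m = v.max()
    return v - m - np.log(np.exp(v - m).sum())

def gossip(y0, n_rounds, method):
    # y0: C x m raw logits (one row per client head); method in
    # {"logit", "gm", "ugm", "ugmn"}; returns the final C x m scores.
    y = y0.copy() if method == "logit" else np.apply_along_axis(log_softmax, 1, y0)
    for _ in range(n_rounds):
        y = np.tile(y.mean(axis=0), (len(y), 1))       # averaging round
        if method == "gm":                             # renormalize every round
            y = np.apply_along_axis(log_softmax, 1, y)
    if method in ("logit", "ugmn"):                    # normalize once at the end
        y = np.apply_along_axis(log_softmax, 1, y)
    return y                                           # fed to the NLL loss

The only difference between gm and ugm in this sketch is the per-round renormalization line, which is exactly the distinction that the experiments above isolate.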
We show that, in general, the characteristic polynomial of a hypergraph is not determined by its “polynomial deck”, the multiset of characteristic polynomials of its vertex-deleted subgraphs, thus settling the “polynomial reconstruction problem” for hypergraphs in the negative. The proof proceeds by showing that the pairs in a construction due to Kocay, an infinite family of pairs of 3-uniform hypergraphs which are non-isomorphic but share the same hypergraph deck, in fact have different characteristic polynomials. The question remains unresolved for ordinary graphs. § INTRODUCTION The graph reconstruction conjecture, which remains open today, was first stated in <cit.> as a metric space problem. In graph theoretical terms, the vertex deck of a graph G = (V(G),E(G)) is the multiset of isomorphism classes of vertex-deleted induced subgraphs G-u, for each u∈ V(G). We may restate the reconstruction conjecture as follows: A graph G is uniquely determined, up to isomorphism, from its vertex deck. Kelly has shown in <cit.> that the reconstruction conjecture holds for trees. Recently, triangle-free graphs with some restrictions on the connectivity and the diameter have been shown to be reconstructible from their vertex decks <cit.>. Other reconstruction problems have been proposed: * Reconstruction of a graph from its vertex deck without multiplicity * Reconstruction from edge-deleted subgraphs In <cit.> and <cit.>, the polynomial reconstruction problem was suggested: Can we reconstruct the characteristic polynomial χ_G(t) of a graph from its polynomial deck, i.e., the deck of characteristic polynomials of the vertex-deleted subgraphs? Some recent work on this and a discussion of the problem's history can be found in <cit.>. Using the Harary-Sachs Theorem, Tutte showed in <cit.> that the characteristic polynomial of an ordinary graph is reconstructible from its vertex deck. In Figure <ref>, we present the table of four spectral reconstruction problems for ordinary graphs, from Schwenk's Spectral Reconstruction Problems (<cit.>). For hypergraphs of rank m≥ 3, we have counterparts of the reconstruction conjectures. In particular, we have the polynomial reconstruction problem for hypergraphs. Kocay showed in <cit.> that hypergraphs of rank 3 are not reconstructible from their decks. This is demonstrated by an infinite family of pairs of hypergraphs {X^n,Y^n}_{n≥ 3} such that their vertex decks are the same, although X^n and Y^n are non-isomorphic. Our main theorem shows that the characteristic polynomials of these pairs of hypergraphs are different, established by showing that their principal eigenvalues are different. Since the polynomial decks of the pairs are the same, this disproves the polynomial reconstruction conjecture for hypergraphs of rank 3. A novel aspect of our argument is that we treat hypergraphs throughout primarily as algebraic objects by analyzing their “Lagrange polynomials”. We suspect that similar methods can resolve the case of higher ranks as well, but have not attempted to work out the details. § PRELIMINARIES Throughout, we use the notation N = 2^n, where n≥ 3 is an integer. The set {1,2,…,n} is denoted by [n], for any n≥ 1. We use 1_r to denote the all-ones vector of length r≥ 1.
The subscript is omitted if it is clear from the context. A hypergraph ℋ is a pair (V,E) of vertices V(ℋ) := V and edges E(ℋ) := E ⊆ 2^V; if |e|=m for all e ∈ E, then ℋ is said to be uniform of rank m. Given hypergraphs ℋ = ([n], E(ℋ)) and 𝒢 = ([n], E(𝒢)) of rank m≥ 2, then ℋ and 𝒢 are called hypomorphic, provided there exists a permutation η : [n] → [n] so that ℋ - i and 𝒢 - η(i) are isomorphic, for each i = 1,…,n. We have the following theorem by <cit.>: There exists an infinite family of pairs of hypergraphs {X^n,Y^n}_{n≥ 3}, of rank 3, that are hypomorphic, but not isomorphic. We describe X^n and Y^n below in Definition <ref>. Hypomorphic graphs share the same number of edges and the same degree sequences, among many other common properties. We will show that the characteristic polynomials of X^n and Y^n are different, for each n ≥ 3. Let ℋ = ([n], E(ℋ)) be a hypergraph, with n vertices and rank m≥ 2. The (normalized) adjacency hypermatrix 𝒜_ℋ of ℋ is the rank m and dimension n symmetric hypermatrix with entries a_{i_1 … i_m} = 1/(m-1)! if {i_1,…,i_m}∈ E(ℋ), and 0 otherwise. Given a hypergraph ℋ = ([n], E(ℋ)), with n vertices and rank m, with adjacency hypermatrix 𝒜_ℋ = (a_{i_1 … i_m}), we have the following multivariable function, defined on ℝ^n and called the Lagrange polynomial of ℋ: F_ℋ(𝐱) = ∑_{i_1, …, i_m = 1}^n a_{i_1 … i_m} x_{i_1} x_{i_2}⋯ x_{i_m}. For a given edge e = {i_1,…, i_m}∈ E(ℋ), let 𝐱^e = x_{i_1} x_{i_2}⋯ x_{i_m}. With this notation, we have F_ℋ(𝐱) = ∑_{e∈ E(ℋ)} m 𝐱^e. Given a hypergraph ℋ = ([n], E(ℋ)), of dimension n and rank m, a vector 𝐱∈ℝ^n is an eigenvector of ℋ corresponding to the eigenvalue λ∈ℂ, provided the equation ∑_{i_2, …, i_m = 1}^n a_{j i_2 … i_m} x_{i_2}⋯ x_{i_m} = λ x_j^{m-1} is satisfied for each j=1,…,n. Since the Lagrange polynomial F_ℋ is homogeneous by Definition <ref>, we typically consider only the restriction of F_ℋ to the set S of non-negative unit vectors (with respect to the ℓ_m-norm): S = {(x_1,…, x_n) ∈ℝ_{≥ 0}^n : |x_1|^m +… +|x_n|^m = 1}. Let Int S be the relative interior of S, that is, the set of unit vectors with strictly positive coordinates. * In <cit.> (and, shortly after, in <cit.> in greater generality), it was shown that, if ℋ is connected, then there is a unique eigenvalue λ, associated with a unit eigenvector 𝐯∈ Int S with strictly positive coordinates, analogously to the classical Perron-Frobenius Theorem. This eigenvalue, which is always real and strictly positive, is called the principal eigenvalue of ℋ or the spectral radius of ℋ. The corresponding eigenvector 𝐯 is the principal eigenvector of ℋ. * As shown in <cit.>, the function F_ℋ(𝐱) is maximized on Int S uniquely at the principal eigenvector 𝐯, and the maximum value attained is the principal eigenvalue λ of ℋ. In other words, max_{𝐱∈ S} F_ℋ(𝐱) = F_ℋ(𝐯) = λ. Next, we have a lemma about the relationship between the automorphisms of a hypergraph ℋ and its principal eigenvector. Let σ∈Sym([n]) be a permutation of [n]. * For a hypergraph ℋ = ([n], E(ℋ)), we write σ(ℋ) for the hypergraph whose vertices are [n] and whose edges are {σ(e) : e ∈ E(ℋ)}, where σ(e) = {σ(t) : t ∈ e}. * We also apply the mapping σ to the coordinates of a vector 𝐱 via σ(x_1,…,x_n) = (x_{σ(1)}, …, x_{σ(n)}). Note that F_{σ(ℋ)}(𝐱) = F_ℋ(σ(𝐱)). Let ℋ = ([n], E(ℋ)) be a hypergraph, with principal eigenpair (λ, 𝐯). Then, σ(𝐯) = 𝐯 for any automorphism σ∈Aut(ℋ) of ℋ.
For each σ∈Aut(ℋ), we have F_ℋ(𝐱) = F_{σ(ℋ)}(𝐱). Evaluating at the principal eigenvector, we obtain λ = F_ℋ(𝐯) = F_{σ(ℋ)}(𝐯) = F_ℋ(σ(𝐯)). By the uniqueness of the maximum point of F_ℋ (Remark <ref>, part 2), it follows that σ(𝐯) = 𝐯. Now, we consider the infinite family of pairs of hypergraphs {X^n,Y^n}_{n≥ 3} constructed in <cit.>. For n≥ 0, let V_n = {1,2,3,4,…, 2^n}. Given x,y∈ V_n, we define x + y (mod V_n) = z, where z is the unique integer z∈ V_n such that x + y ≡ z (mod 2^n). Throughout, the superscript n in ℋ^n for each hypergraph ℋ will indicate that calculations are conducted in V_n. For each n≥ 4, we define the maps p_0^n, p_1^n : V_{n-1}→ V_n by p_0^n(i) = 2i (mod V_n) and p_1^n(i) = 2i-1 (mod V_n), for i≥ 1. We use the same notation for the polynomial maps p_j^n(x_i) = x_{p_j^n(i)}, for j=0,1. The maps p_0^n, p_1^n extend to ℝ-algebra endomorphisms of ℝ[x_0,x_1,…,x_N]. We omit the superscript if it is clear from context. For r≥ 1 and for a given zero-one vector ϵ = [ϵ_0,…,ϵ_{r-1}] ∈{0,1}^r, we let p_ϵ = p_{ϵ_0}∘…∘ p_{ϵ_{r-1}}. It follows, by induction, that p_ϵ(i) = 2^r i - ∑_{j=0}^{r-1}ϵ_j 2^j (mod V_n), for each ϵ∈{0,1}^r. Define the permutation τ∈Sym([8]) by τ(i) = 3i (mod V_3). In other words, τ maps (1,2,3,4,5,6,7,8) to (3,6,1,4,7,2,5,8). Define the tight cycle C^3(𝐱) = ∑_{1≤ i ≤ 8} x_{i-1} x_i x_{i+1}. We use τ to obtain another tight cycle: D^3(𝐱) = C^3(τ(𝐱)). Note that D^3(𝐱) = ∑_{1≤ i ≤ 8} x_{3i-3} x_{3i} x_{3i+3} = ∑_{1≤ j ≤ 8} x_{j-3} x_j x_{j+3}, by the change of variables j = 3i (mod V_3), 1≤ i ≤ 8. For n≥ 3 and 2≤ r≤ n, we define a map E^n_r: (1) For r=n, we define the identity map: E^n_n(x_i) = x_i. (2) For 2 ≤ r ≤ n-1, we define E^n_r(x_i) = ∑_{1 ≤ s ≤ 2^{n-r}} x_{i + s · 2^r} = ∑_{1≤ j ≤ 2^n, j ≡ i (mod 2^r)} x_j. The map E^n_r extends to an ℝ-algebra endomorphism of ℝ[x_0,x_1,…,x_N]. We omit the superscript if it is clear from the context. E^n_r(x_i) = E^n_r(x_{i+2^t}) for any t≥ r. Let k≥ 2 and n≥ 3. We define a family of hypergraphs G_k^n as follows: (1) If k=n, we define: For n=3, G_3^3(𝐱) = C^3(𝐱) + D^3(𝐱) = ∑_{1≤ i ≤ 8} x_{i-1} x_i x_{i+1} + ∑_{1≤ i ≤ 8} x_{i-3} x_i x_{i+3}. For n≥ 4, G_n^n(𝐱) = E_3^n(G_3^3(𝐱)). (2) If 2≤ k < n, we define: For n=3, G_2^3(𝐱) = ∑_{1≤ i ≤ 4} x_i x_{i+2} x_{i+4}. For n≥ 4, G_k^n(𝐱) = p_0(G_k^{n-1}(𝐱)) + p_1(G_k^{n-1}(𝐱)). The hypergraphs G_{1,k}^{0n} defined in <cit.> are denoted here, for short, as G_k^n. M^n_0(𝐱) = x_0 E^n_2(x_1 x_2 + x_3 x_4) and M^n_1(𝐱) = x_0 E^n_2(x_1 x_4 + x_2 x_3). The hypergraphs defined as M^0_n(n), M^1_n(n) in <cit.> are denoted here as M^n_0 and M^n_1, respectively. Let n≥ 3. Define: i) T_n(𝐱) = ∑_{k=2}^{n-1} G_k^n(𝐱); ii) Γ_n(𝐱) = T_n(𝐱) + G_n^n(𝐱) = ∑_{k=2}^n G_k^n(𝐱); iii) X^n(𝐱) = Γ_n(𝐱) + M^n_0(𝐱) and Y^n(𝐱) = Γ_n(𝐱) + M^n_1(𝐱). The hypergraphs G_n, X_n, Y_n defined in <cit.> are denoted here as Γ_n, X^n, Y^n, respectively. As shown in <cit.>, the hypergraphs X^n and Y^n, for n≥ 3, are hypomorphic, but not isomorphic. Note that, by Definition <ref>, the vertex 0 has positive codegree with every other vertex (i.e., they are contained in an edge together), so X^n and Y^n are connected, and therefore the hypergraph Perron-Frobenius Theorem applies as per Remark <ref>. For each n≥ 4, we define the mapping q^n(x_i) = x_i + x_{i+2^{n-1}}. We extend the mapping q^n to an ℝ-algebra endomorphism of ℝ[x_0,x_1,…,x_N]. Let n≥ 3. We define the family H^n as follows: For n=3, H^3(𝐱) = G_3^3(𝐱) = C^3(𝐱) + D^3(𝐱). For n≥ 4, H^n(𝐱) = q^n(H^{n-1}(𝐱)). We claim that H^n = E^n_3(G_3^3) = G_n^n for each n≥ 3. In other words, the hypergraph G_n^n defined in Definition <ref> is identical to that defined in <cit.>.
First, we show that q^n ∘ E^{n-1}_3 = E^n_3, as follows: E^n_3(x_i) = ∑_{1≤ s ≤ 2^{n-3}} x_{i + 8s} = ∑_{1≤ t ≤ 2^{n-4}} (x_{i + 8t} + x_{i + 8t + 2^{n-1}}) = ∑_{1≤ t ≤ 2^{n-4}} q^n(x_{i + 8t}) = q^n(∑_{1≤ t ≤ 2^{n-4}} x_{i + 8t}) = q^n(E^{n-1}_3(x_i)). As the maps agree on the generators of ℝ[x_0,x_1,…,x_N], they are identical. For the statement of the lemma, we use induction on n to prove that H^n = E^n_3(G_3^3). The base case n=3 is clear. For the induction step, H^n = q^n(H^{n-1}) = q^n(E^{n-1}_3(G_3^3)) (by the inductive hypothesis) = E^n_3(G_3^3) (as shown above). The recursive definitions of {G_k^n}_{k=2}^n, T_n and Γ_n, found in Definitions <ref> and <ref>, can be turned into explicit formulas, by induction: * G_2^n = ∑_{ϵ∈{0,1}^{n-3}} p_ϵ(G_2^3) = ∑_{1≤ i ≤ 4}∑_{ϵ∈{0,1}^{n-3}} p_ϵ(x_i x_{i+2} x_{i+4}). * For 3≤ k ≤ n, G_k^n = ∑_{ϵ∈{0,1}^{n-k}} p_ϵ(G_k^k), where G_k^k = E^k_3(G_3^3). * Γ_n = T_n + G_n^n, where T_n = ∑_{1≤ i ≤ 4}∑_{ϵ∈{0,1}^{n-3}} p_ϵ(x_i x_{i+2} x_{i+4}) + ∑_{1≤ i ≤ 8}∑_{1 ≤ r≤ n-3}∑_{ϵ∈{0,1}^r} p_ϵ(E^{n-r}_3(x_{i-1} x_i x_{i+1} + x_{i-3} x_i x_{i+3})) and G_n^n = E^n_3(G_3^3) = E^n_3(∑_{1≤ i ≤ 8} (x_{i-1} x_i x_{i+1} + x_{i-3} x_i x_{i+3})). Note that each edge of T_n consists of vertices of the same parity. § MAIN THEOREM By <cit.>, we know that Aut(X^n) = Aut(Y^n) = {id, θ_n}, where θ_n fixes 0 and reflects the cycle 1,…,2^n: θ_n(x) = 2^n - x + 1 if 1≤ x ≤ 2^n, and 0 if x=0. Note that θ_n is an involution and |Aut(X^n)| = |Aut(Y^n)| = 2. Let 𝐱∈ℝ^{N+1} be a vector such that θ_n(𝐱) = 𝐱. Then, we have Y^n(𝐱) - X^n(𝐱) = x_0 (E_2(x_1 - x_3))^2. We calculate: Y^n - X^n = (Γ_n + M^n_1) - (Γ_n + M^n_0) = M^n_1 - M^n_0 = x_0 E_2(x_1 x_4 + x_2 x_3) - x_0 E_2(x_1 x_2 + x_3 x_4) (by Definition <ref>) = x_0 E_2(x_1 x_4 + x_2 x_3 - x_1 x_2 - x_3 x_4) (since E_2 is a homomorphism) = x_0 E_2(x_1 - x_3) E_2(-x_2 + x_4) = x_0 E_2(x_1 - x_3) E_2(-x_{2^n - 1} + x_{2^n - 3}), because θ_n(𝐱) = 𝐱 implies x_2 = x_{2^n-1} and x_4 = x_{2^n-3}. Continuing, we may write Y^n - X^n = x_0 E_2(x_1 - x_3) E_2(-x_{4-1} + x_{4-3}) (by Remark <ref>) = x_0 (E_2(x_1 - x_3))^2. Let (λ, 𝐱) and (μ, 𝐲) be the principal eigenpairs of X^n and Y^n, respectively. As θ_n is an automorphism of X^n, it follows, by Lemma <ref>, that θ_n(𝐱) = 𝐱. In particular, using Lemma <ref>, we obtain μ = Y^n(𝐲) ≥ Y^n(𝐱) = X^n(𝐱) + x_0 (E_2(x_1 - x_3))^2 = λ + x_0 (E_2(x_1 - x_3))^2. In other words, the principal eigenvalue of Y^n is greater than or equal to that of X^n. We claim that they are not equal. For this purpose, it is enough to show that E_2(x_1 - x_3) is non-zero. This is proven in Lemma <ref> below. Given i ≥ 0, we define a mapping σ_i: ℤ^+ →ℤ^+ as follows: σ_i(j) = j + 2^i if j ≡ 1,…,2^i (mod 2^{i+1}), and j - 2^i if j ≡ 1+2^i,…,2^{i+1} (mod 2^{i+1}). We put σ_{-1} = id. We also allow σ_i to act on ℝ[x_0,x_1,…,x_N], via σ_i(x_j) = x_{σ_i(j)}. For example, the restriction of σ_0 to [8] is the permutation sending (1,2,3,4,5,6,7,8) to (2,1,4,3,6,5,8,7), whereas the restriction of σ_1 to [8] is the permutation sending (1,2,3,4,5,6,7,8) to (3,4,1,2,7,8,5,6). Let 𝐱∈ℝ^{N+1} such that θ_n(𝐱) = 𝐱. * E_3(x_{2i}) = E_3(x_{9-2i}) for each i=1,2,3,4. * If 𝐱 = σ_0(𝐱), then we have E_3(x_7) = E_3(x_1) and E_3(x_5) = E_3(x_3). * If 𝐱 = σ_0(𝐱) = σ_1(𝐱), then we have E_3(x_1) = E_3(x_2) = … = E_3(x_8). * We apply θ_n and Remark <ref> to obtain, for each i=1,2,3,4, E_3(x_{2i}) = E_3(x_{2^n - 2i + 1}) = E_3(x_{8 - 2i + 1}) = E_3(x_{9 - 2i}). * We have E_3(x_7) = E_3(x_2) (by Part 1) = E_3(x_1) (since σ_0(𝐱) = 𝐱). By a similar calculation, we have E_3(x_3) = E_3(x_5). * By the assumption that σ_1(𝐱) = 𝐱, we obtain E_3(x_5) = E_3(x_1). Combined with Part 1 and Part 2, the proof is complete. Given 0≤ r ≤ n-2 and 𝐱 = (x_1,…,x_N), we define f^n_r(𝐱) = 2^{r+1} E^n_{r+2}(x_1 - x_{1 + 2^{r+1}}) · E^n_{r+3}(x_1 - x_{1 + 2^{r+2}}) · E^n_{r+3}(-x_{1 + 2^{r+1}} + x_{1 + 2^{r+1} + 2^{r+2}}) if 0≤ r ≤ n-3, and f^n_r(𝐱) = 2^{n-1}(x_1 - x_{1 + 2^{n-1}})(x_1 - x_{1 + 2^{n-1}}) if r = n-2. Recall that E^n_n acts as the identity.
Hence, for each 0≤ r ≤ n-2, the polynomial f^n_r(𝐱) is divisible by E^n_{r+2}(x_1 - x_{1 + 2^{r+1}}). Now, we develop some properties of the polynomials {f^n_r(𝐱)}_{0 ≤ r ≤ n-2}. Given 𝐱 = (x_0,x_1,…,x_N) such that θ_n(𝐱) = 𝐱 and σ_0(𝐱) = 𝐱, then for any 2≤ r ≤ n-2 and j = 0,1: p_j(E_r^n(x_1)) = E^{n+1}_{r+1}(x_1) and p_j(E_r^n(x_{1+2^{r-1}})) = E^{n+1}_{r+1}(x_{1+2^r}). Note that p_0(E_r^n(x_i)) = E^{n+1}_{r+1}(x_{2i}) and p_1(E_r^n(x_i)) = E^{n+1}_{r+1}(x_{2i-1}). In particular, we obtain p_1(E_r^n(x_1)) = E^{n+1}_{r+1}(x_1) and p_1(E_r^n(x_{1+2^{r-1}})) = E^{n+1}_{r+1}(x_{1+2^r}), and also, using the assumption that σ_0(𝐱) = 𝐱, we obtain p_0(E_r^n(x_1)) = E^{n+1}_{r+1}(x_2) = E^{n+1}_{r+1}(x_1) and p_0(E_r^n(x_{1+2^{r-1}})) = E^{n+1}_{r+1}(x_{2+2^r}) = E^{n+1}_{r+1}(x_{1+2^r}). If 𝐱 = (x_0,x_1,…,x_N) is such that θ_n(𝐱) = 𝐱 and σ_0(𝐱) = 𝐱, then (p_0 + p_1)(f^n_r(𝐱)) = f_{r+1}^{n+1}(𝐱). As p_0 and p_1 are homomorphisms, we obtain, by Lemma <ref>: Case 1: 0 ≤ r ≤ n-3. p_0(f^n_r(𝐱)) + p_1(f^n_r(𝐱)) = p_0(2^{r+1} E^n_{r+2}(x_1 - x_{1 + 2^{r+1}}) · E^n_{r+3}(x_1 - x_{1 + 2^{r+2}}) · E^n_{r+3}(-x_{1 + 2^{r+1}} + x_{1 + 2^{r+1} + 2^{r+2}})) + p_1(2^{r+1} E^n_{r+2}(x_1 - x_{1 + 2^{r+1}}) · E^n_{r+3}(x_1 - x_{1 + 2^{r+2}}) · E^n_{r+3}(-x_{1 + 2^{r+1}} + x_{1 + 2^{r+1} + 2^{r+2}})) = 2 · 2^{r+1} E^{n+1}_{r+3}(x_1 - x_{1 + 2^{r+2}}) · E^{n+1}_{r+4}(x_1 - x_{1 + 2^{r+3}}) · E^{n+1}_{r+4}(-x_{1 + 2^{r+2}} + x_{1 + 2^{r+2} + 2^{r+3}}) = f_{r+1}^{n+1}(𝐱). Case 2: r=n-2. p_0(f^n_{n-2}(𝐱)) + p_1(f^n_{n-2}(𝐱)) = p_0(2^{n-1}(x_1 - x_{1 + 2^{n-1}})(x_1 - x_{1 + 2^{n-1}})) + p_1(2^{n-1}(x_1 - x_{1 + 2^{n-1}})(x_1 - x_{1 + 2^{n-1}})) = 2 · 2^{n-1}(x_1 - x_{1 + 2^n})(x_1 - x_{1 + 2^n}) = f_{n-1}^{n+1}(𝐱). It is easy to check that, for each i ∈ V_{n-1}, 1≤ r ≤ n-2 and j∈{0,1}, we have p_j(σ_{r-1}(x_i)) = σ_r(p_j(x_i)). Let n≥ 3, 0≤ r ≤ n-2 and 2≤ k ≤ n be fixed. Let 𝐱∈ℝ^{N+1} such that θ_n(𝐱) = 𝐱 and 𝐱 = σ_i(𝐱) for each i=-1,0,…, r-1. Then, G_k^n(𝐱) - G_k^n(σ_r(𝐱)) = f_r^n(𝐱) if k = n-r, and 0 if k≠ n-r. Case 1: k=n. In this case, we would like to show that G_n^n(𝐱) - G_n^n(σ_r(𝐱)) = f_0^n(𝐱) if r=0, and 0 if 1≤ r. By Definition <ref>, we have G_n^n = E^n_3(G_3^3). By Remark <ref>, we have E^n_3(x_i) = E^n_3(σ_t(x_i)) whenever t≥ 3. Therefore, we only need to consider r = 0,1,2. a) Assume r=0.
By Lemma <ref>, we have E_3(x_2) = E_3(x_7) and E_3(x_4) = E_3(x_5), E_3(x_6) = E_3(x_3) and E_3(x_8) = E_3(x_1). We calculate: C^3(𝐱) - C^3(σ_0(𝐱)) = (x_1 x_2 x_3 + x_2 x_3 x_4 + x_3 x_4 x_5 + x_4 x_5 x_6 + x_5 x_6 x_7 + x_6 x_7 x_8 + x_7 x_8 x_1 + x_8 x_1 x_2) - (x_2 x_1 x_4 + x_1 x_4 x_3 + x_4 x_3 x_6 + x_3 x_6 x_5 + x_6 x_5 x_8 + x_5 x_8 x_7 + x_8 x_7 x_2 + x_7 x_2 x_1) (by Definition <ref>) = 2 x_1 x_3 x_7 + 2 x_3 x_5 x_7 + 2 x_3 x_5^2 + 2 x_1^2 x_7 - 2 x_1 x_5 x_7 - 2 x_1 x_3 x_5 - 2 x_3^2 x_5 - 2 x_1 x_7^2 (by Equations <ref>). On the other hand, D^3(𝐱) - D^3(σ_0(𝐱)) = (x_1 x_4 x_6 + x_2 x_5 x_7 + x_3 x_6 x_8 + x_4 x_7 x_1 + x_5 x_8 x_2 + x_6 x_1 x_3 + x_7 x_2 x_4 + x_8 x_3 x_5) - (x_2 x_3 x_5 + x_1 x_6 x_8 + x_4 x_5 x_7 + x_3 x_8 x_2 + x_6 x_7 x_1 + x_5 x_2 x_4 + x_8 x_1 x_3 + x_7 x_4 x_6) (by Definition <ref>) = 2 x_1 x_3 x_5 + 2 x_5 x_7^2 + 2 x_1 x_3^2 + 2 x_1 x_5 x_7 - 2 x_3 x_5 x_7 - 2 x_1^2 x_3 - 2 x_5^2 x_7 - 2 x_1 x_3 x_7 (by Equations <ref>). Therefore, G_n^n(𝐱) - G_n^n(σ_0(𝐱)) = E_3(C^3(𝐱) + D^3(𝐱)) - E_3(C^3(σ_0(𝐱)) + D^3(σ_0(𝐱))) = E_3(C^3(𝐱) - C^3(σ_0(𝐱))) + E_3(D^3(𝐱) - D^3(σ_0(𝐱))) = E_3(2 x_1 x_3 x_7 + 2 x_3 x_5 x_7 + 2 x_3 x_5^2 + 2 x_1^2 x_7 - 2 x_1 x_5 x_7 - 2 x_1 x_3 x_5 - 2 x_3^2 x_5 - 2 x_1 x_7^2) + E_3(2 x_1 x_3 x_5 + 2 x_5 x_7^2 + 2 x_1 x_3^2 + 2 x_1 x_5 x_7 - 2 x_3 x_5 x_7 - 2 x_1^2 x_3 - 2 x_5^2 x_7 - 2 x_1 x_3 x_7) = 2 E_3(x_3 x_5^2 + x_1^2 x_7 + x_5 x_7^2 + x_1 x_3^2 - x_3^2 x_5 - x_1 x_7^2 - x_5^2 x_7 - x_1^2 x_3) = 2 E_3((x_1 - x_3 + x_5 - x_7)(x_1 - x_5)(-x_3 + x_7)) = 2 E_3(x_1 - x_3 + x_5 - x_7) E_3(x_1 - x_5) E_3(-x_3 + x_7) = 2 E_2(x_1 - x_3) E_3(x_1 - x_5) E_3(-x_3 + x_7) = f_0^n(𝐱). b) If r=1, then, by Lemma <ref>, the assumption 𝐱 = σ_0(𝐱) implies that E_3(x_1) = E_3(x_7) and E_3(x_3) = E_3(x_5). We calculate: C^3(𝐱) - C^3(σ_1(𝐱)) = (x_1 x_2 x_3 + x_2 x_3 x_4 + x_3 x_4 x_5 + x_4 x_5 x_6 + x_5 x_6 x_7 + x_6 x_7 x_8 + x_7 x_8 x_1 + x_8 x_1 x_2) - (x_3 x_4 x_1 + x_4 x_1 x_2 + x_1 x_2 x_7 + x_2 x_7 x_8 + x_7 x_8 x_5 + x_8 x_5 x_6 + x_5 x_6 x_3 + x_6 x_3 x_4) (by Definition <ref>) = (x_1^2 x_3 + x_1 x_3^2 + x_3^2 x_5 + x_3 x_5^2 + x_5^2 x_7 + x_5 x_7^2 + x_1 x_7^2 + x_1^2 x_7) - (x_1 x_3^2 + x_1^2 x_3 + x_1^2 x_7 + x_1 x_7^2 + x_5 x_7^2 + x_5^2 x_7 + x_3 x_5^2 + x_3^2 x_5) (by applying σ_0 to the even indices) = (2 x_1^2 x_5 + 2 x_1 x_5^2 + 2 x_5^3 + 2 x_1^3) - (2 x_1^2 x_5 + 2 x_1 x_5^2 + 2 x_5^3 + 2 x_1^3) = 0 (by Equations <ref>). On the other hand, D^3(𝐱) - D^3(σ_1(𝐱)) = (x_6 x_1 x_4 + x_7 x_2 x_5 + x_8 x_3 x_6 + x_1 x_4 x_7 + x_2 x_5 x_8 + x_3 x_6 x_1 + x_4 x_7 x_2 + x_5 x_8 x_3) - (x_8 x_3 x_2 + x_5 x_4 x_7 + x_6 x_1 x_8 + x_3 x_2 x_5 + x_4 x_7 x_6 + x_1 x_8 x_3 + x_2 x_5 x_4 + x_7 x_6 x_1) (by Definition <ref>) = (x_5 x_1 x_3 + x_7 x_1 x_5 + x_7 x_3 x_5 + x_1 x_3 x_7 + x_1 x_5 x_7 + x_3 x_5 x_1 + x_3 x_7 x_1 + x_5 x_7 x_3) - (x_7 x_3 x_1 + x_5 x_3 x_7 + x_5 x_1 x_7 + x_3 x_1 x_5 + x_3 x_7 x_5 + x_1 x_7 x_3 + x_1 x_5 x_3 + x_7 x_5 x_1) (by applying σ_0 to the even indices) = (4 x_1 x_5^2 + 4 x_1^2 x_5) - (4 x_1^2 x_5 + 4 x_1 x_5^2) = 0 (by Equations <ref>). Therefore, G_n^n(𝐱) - G_n^n(σ_1(𝐱)) = E_3(C^3(𝐱) - C^3(σ_1(𝐱))) + E_3(D^3(𝐱) - D^3(σ_1(𝐱))) = E_3(0) + E_3(0) = 0. c) If r=2, then, by Lemma <ref>, the assumption 𝐱 = σ_0(𝐱) = σ_1(𝐱) implies that E_3(x_1) = E_3(x_2) = … = E_3(x_8). Therefore, by Equation <ref>, we obtain G_n^n(𝐱) - G_n^n(σ_2(𝐱)) = 0. Case 2: 2≤ k≤ n-1. We apply induction on r. For the base step, assume r=0. In particular, we have k≠ n-r. The hypergraph G_k^n is a disjoint union of all-even and all-odd edges. Formally, G_k^n = p_0(G_k^{n-1}) + p_1(G_k^{n-1}). Note that σ_0 maps even and odd edges to each other bijectively: σ_0(p_0(G_k^{n-1})) = p_1(G_k^{n-1}) and σ_0(p_1(G_k^{n-1})) = p_0(G_k^{n-1}). Therefore, G_k^n(𝐱) - G_k^n(σ_0(𝐱)) = 0. For the inductive step, let 1≤ r ≤ n-2 be given.
First, we note that G_k^n(𝐱) - G_k^n(σ_r(𝐱)) = G_k^n(𝐱) - (σ_r G_k^n)(𝐱) = p_0(G_k^{n-1}(𝐱)) + p_1(G_k^{n-1}(𝐱)) - σ_r(p_0(G_k^{n-1}(𝐱))) - σ_r(p_1(G_k^{n-1}(𝐱))) = p_0(G_k^{n-1}(𝐱)) + p_1(G_k^{n-1}(𝐱)) - p_0(σ_{r-1}(G_k^{n-1}(𝐱))) - p_1(σ_{r-1}(G_k^{n-1}(𝐱))) (by Remark <ref>) = p_0(G_k^{n-1}(𝐱) - σ_{r-1}(G_k^{n-1})(𝐱)) + p_1(G_k^{n-1}(𝐱) - σ_{r-1}(G_k^{n-1})(𝐱)) = p_0(G_k^{n-1}(𝐱) - G_k^{n-1}(σ_{r-1}(𝐱))) + p_1(G_k^{n-1}(𝐱) - G_k^{n-1}(σ_{r-1}(𝐱))). Case 2.1: k = n-r. G_{n-r}^n(𝐱) - G_{n-r}^n(σ_r(𝐱)) = p_0(G_{n-r}^{n-1}(𝐱) - G_{n-r}^{n-1}(σ_{r-1}(𝐱))) + p_1(G_{n-r}^{n-1}(𝐱) - G_{n-r}^{n-1}(σ_{r-1}(𝐱))) = p_0(f_{r-1}^{n-1}(𝐱)) + p_1(f_{r-1}^{n-1}(𝐱)) (by the inductive hypothesis) = f_r^n(𝐱) (by Lemma <ref>). Case 2.2: k≠ n-r. G_k^n(𝐱) - G_k^n(σ_r(𝐱)) = p_0(0) + p_1(0) = 0 (by the inductive hypothesis). Let 𝐱∈ℝ^{N+1} such that θ_n(𝐱) = 𝐱 and 𝐱 = σ_i(𝐱) for each i=-1,0,…, r-1, where 0≤ r ≤ n-2 is fixed. Then, X^n(𝐱) - X^n(σ_r(𝐱)) = f_r^n(𝐱). First, we show that M^n_0(𝐱) - M^n_0(σ_r(𝐱)) = 0. Recall that M^n_0 = x_0 E^n_2(x_1 x_2 + x_3 x_4). By Remark <ref>, we have E^n_2(x_i) = E^n_2(σ_t(x_i)) whenever t≥ 2. Hence, we only need to consider r = 0,1: M^n_0(𝐱) - M^n_0(σ_0(𝐱)) = x_0 E_2(x_1 x_2 + x_3 x_4) - x_0 E_2(x_2 x_1 + x_4 x_3) = 0, and M^n_0(𝐱) - M^n_0(σ_1(𝐱)) = x_0 E_2(x_1 x_2 + x_3 x_4) - x_0 E_2(x_3 x_4 + x_1 x_2) = 0. Therefore, X^n(𝐱) - X^n(σ_r(𝐱)) = (M^n_0(𝐱) - M^n_0(σ_r(𝐱))) + ∑_{k=2}^n (G_k^n(𝐱) - G_k^n(σ_r(𝐱))) = 0 + (G_{n-r}^n(𝐱) - G_{n-r}^n(σ_r(𝐱))) = f_r^n(𝐱) (by Lemma <ref>). Next, we compare the neighborhoods of 1 and 1+2^r, for each r=0,1,…, n-3. First, we have some definitions and helper lemmas: Let ℋ = ([n], E(ℋ)) be a hypergraph of rank m≥ 2. For each i∈ [n], the link of i is defined as the partial derivative ℋ[i] = ∂/∂ x_i ℋ(𝐱). Note that the link ℋ[i] is a hypergraph of rank m-1, for each i=1,…,n. Let deg_ℋ(i) be the number of edges in ℋ[i]. In other words, deg_ℋ(i) = ℋ[i](1). Taking the link of a vertex is a commutative operation: ℋ[i,j] = ℋ[i][j] = ℋ[j][i] for each pair i≠ j. Let codeg_ℋ(i,j) be the number of edges in ℋ[i,j]. Equivalently, codeg_ℋ(i,j) = ℋ[i,j](1). Let i ∈ V_{n-1} and 1≤ r ≤ n-3 be fixed. Let ϵ∈{0,1}^r be a zero-one vector. Then, we have: * p^n_ϵ(i) = 1 if and only if i ≡ 1 (mod 2^{n-r}) and ϵ = 1_r. * p^n_ϵ(i) = 2 if and only if i ≡ 1 (mod 2^{n-r}) and ϵ = (0, 1,…,1), with r-1 many 1s. Let i = ∑_{j=0}^{n-1} 2^j b_j(i) be the binary representation of i, for each i∈ V_{n-1}. By Remark <ref>, we have p_ϵ(i) = 2^r i - ∑_{j=0}^{r-1}ϵ_j 2^j (mod V_n). In other words, ∑_{j=r}^{n-1+r} 2^j b_{j-r}(i) = p_ϵ(i) + ∑_{j=0}^{r-1}ϵ_j 2^j. * Assume p_ϵ(i) = 1. The equation above implies that ϵ_0 = ϵ_1 = … = ϵ_{r-1} = 1. Then, we obtain 2^r i ≡ 2^r (mod 2^n), which implies i ≡ 1 (mod 2^{n-r}). Conversely, for each 1≤ s≤ 2^{n-r}, we have p_{1_r}(1 + s · 2^{n-r}) = 2^r(1 + s · 2^{n-r}) - (2^r - 1) = 1 + s · 2^n ≡ 1 (mod 2^n). * Assume p_ϵ(i) = 2. In this case, we obtain ϵ_0=0 and ϵ_1 = … = ϵ_{r-1} = 1. Then, we obtain 2^r i ≡ 2^r (mod 2^n), which implies i ≡ 1 (mod 2^{n-r}). Conversely, for each 1≤ s≤ 2^{n-r}, we get p_ϵ(1 + s · 2^{n-r}) = p_0(p_{1_{r-1}}(1 + s · 2^{n-r})) = p_0(2^{r-1}(1 + s · 2^{n-r}) - (2^{r-1} - 1)) = p_0(1 + s · 2^{n-1}) = 2 + s · 2^n ≡ 2 (mod 2^n). Let 0≤ r ≤ n-2 and 2≤ k ≤ n be fixed. Let 𝐱∈ℝ^{N+1} such that θ_n(𝐱) = 𝐱 and 𝐱 = σ_i(𝐱) for each i=0,…, r. Then, G_k^n[1](𝐱) - G_k^n[1+2^r](𝐱) = (E_{r+3}(x_1 - x_{1+2^{r+2}}))^2 if k = n-r, and 0 if k≠ n-r. Case 1: k=n. In this case, we would like to show that G^n_n[1](𝐱) - G^n_n[1+2^r](𝐱) = (E_3(x_1 - x_5))^2 if r=0, and 0 if 1≤ r. a) First, assume r=0.
By Lemma <ref>, Part 2, we have E_3(x_7) = E_3(x_1) and E_3(x_3) = E_3(x_5). By Definition <ref>, we have G^n_n[1](𝐱) - G^n_n[2](𝐱) = E_3(x_7 x_8 + x_8 x_2 + x_2 x_3 + x_6 x_3 + x_4 x_7 + x_4 x_6) - E_3(x_8 x_1 + x_1 x_3 + x_3 x_4 + x_7 x_4 + x_5 x_8 + x_5 x_7) = E_3(x_7 x_7 + x_7 x_1 + x_1 x_3 + x_5 x_3 + x_3 x_7 + x_3 x_5) - E_3(x_7 x_1 + x_1 x_3 + x_3 x_3 + x_7 x_3 + x_5 x_7 + x_5 x_7) (by σ_0 applied to the even indices) = E_3(x_7 x_7 + 2 x_5 x_3) - E_3(x_3 x_3 + 2 x_5 x_7) = E_3(x_1^2 + 2 x_5^2) - E_3(x_5^2 + 2 x_5 x_1) (by Equations <ref>) = (E_3(x_1 - x_5))^2, as E_3 is a ring homomorphism. b) Now, we assume 1≤ r ≤ n-2. By Lemma <ref>, E_3(x_i) = E_3(x_j) for any 1≤ i < j≤ 8. Therefore, by Definition <ref>, we have G_n^n[1](𝐱) - G_n^n[1+2^r](𝐱) = 0. Case 2: 2≤ k ≤ n-1. Now, we apply induction on r. For the base case, assume r=0. In particular, we have k≠ n - r. In this case, we would like to show that G^n_k[1](𝐱) - G^n_k[2](𝐱) = 0. Case 2a: k = 2. We describe the neighborhoods of 1 and 2. Let e∈ E(G_2^n) be an edge. By Remark <ref>, Part 1, we have 𝐱^e = p_ϵ(x_i x_{i+2} x_{i+4}), for some ϵ∈{0,1}^{n-3} and some 1≤ i≤ 4. If 1∈ e, then, by Lemma <ref>, we have ϵ = 1_{n-3} and i = 1. Hence, 𝐱^e = p_{1_{n-3}}(x_1 x_3 x_5) = x_1 x_{3· 2^{n-3} - (2^{n-3} - 1)} x_{5· 2^{n-3} - (2^{n-3} - 1)} = x_1 x_{2^{n-2}+1} x_{2^{n-1}+1}. If 2∈ e, then, by Lemma <ref>, we have ϵ = (0, 1,…,1), with n-4 many 1s, and i = 1. Hence, 𝐱^e = p_0(p_{1_{n-4}}(x_1 x_3 x_5)) = p_0(x_1 x_{2^{n-3}+1} x_{2^{n-2}+1}) = x_2 x_{2^{n-2}+2} x_{2^{n-1}+2}. Using the assumption that σ_0(𝐱) = 𝐱, we conclude G^n_2[1](𝐱) - G^n_2[2](𝐱) = x_{2^{n-2}+1} x_{2^{n-1}+1} - x_{2^{n-2}+2} x_{2^{n-1}+2} = 0. Case 2b: 3≤ k ≤ n-1. Let e∈ E(G_k^n) be an edge. By Remark <ref>, Part 2, we have 𝐱^e = p_ϵ(x_{j_1} x_{j_2} x_{j_3}) for some edge {j_1,j_2,j_3}∈ E(G_k^k) with j_1<j_2<j_3 and some zero-one vector ϵ∈{0,1}^{n-k}. * If 1∈ e, then, by Lemma <ref>, we have ϵ = 1_{n-k} and j_1 ≡ 1 (mod 2^k). Since {j_1,j_2,j_3}∈ E(G_k^k), it follows that j_1 = 1. Furthermore, {1,j_2,j_3}∈ E(G_k^k) implies that j_2 = i_2 + 8s_2 (mod V_k) and j_3 = i_3 + 8s_3 (mod V_k), where 1≤ i_2,i_3 ≤ 8, 1≤ s_2,s_3≤ 2^{k-3} and {1,i_2,i_3}∈ E(G_3^3). Hence, 𝐱^e = p_{1_{n-k}}(x_1 x_{i_2 + 8s_2} x_{i_3 + 8s_3}). For each i and s, we have p_{1_{n-k}}(x_{i + 8s}) = x_{2^{n-k}(i + 8s) - (2^{n-k} - 1)} = x_{2^{n-k}(i + 8s - 1) + 1}. Therefore, G_k^n[1](𝐱) = ∑_{{1,i_2,i_3}∈ E(G_3^3)}∑_{1≤ s_2,s_3≤ 2^{k-3}} x_{2^{n-k}(i_2 + 8s_2 - 1) + 1} x_{2^{n-k}(i_3 + 8s_3 - 1) + 1}. * Similarly, if 2∈ e, then, by Lemma <ref>, we have ϵ = (0, 1,…,1), with n-k-1 many 1s, and j_1 ≡ 1 (mod 2^k). This implies, as before, that 𝐱^e = p_ϵ(x_1 x_{i_2 + 8s_2} x_{i_3 + 8s_3}) for some 1≤ i_2,i_3 ≤ 8 and some 1≤ s_2,s_3≤ 2^{k-3}, with {1,i_2,i_3}∈ E(G_3^3). For each i and s, we have p_ϵ(x_{i + 8s}) = x_{2^{n-k}(i + 8s) - (2^{n-k} - 2)} = x_{2^{n-k}(i + 8s - 1) + 2}. Therefore, G_k^n[2](𝐱) = ∑_{{1,i_2,i_3}∈ E(G_3^3)}∑_{1≤ s_2,s_3≤ 2^{k-3}} x_{2^{n-k}(i_2 + 8s_2 - 1) + 2} x_{2^{n-k}(i_3 + 8s_3 - 1) + 2}. Using the assumption that σ_0(𝐱) = 𝐱, we obtain G^n_k[2](𝐱) = (σ_0 G_k^n[1])(𝐱) = G^n_k[1](σ_0(𝐱)) = G^n_k[1](𝐱). For the inductive step, assume 1≤ r ≤ n-2. First, note that G_k^n[1](𝐱) - G_k^n[1+2^r](𝐱) = p_1(G_k^{n-1}[1](𝐱)) - p_1(G_k^{n-1}[1+2^{r-1}](𝐱)) = p_1(G_k^{n-1}[1](𝐱) - G_k^{n-1}[1+2^{r-1}](𝐱)). If k = n-r, then G_k^n[1](𝐱) - G_k^n[1+2^r](𝐱) = p_1((E_{r+2}(x_1 - x_{1+2^{r+1}}))^2) (by the inductive hypothesis) = (E_{r+3}(x_1 - x_{1+2^{r+2}}))^2 (by Lemma <ref>). If k≠ n-r, then G_k^n[1](𝐱) - G_k^n[1+2^r](𝐱) = p_1(0) = 0 (by the inductive hypothesis).
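The edge and link computations in this section are mechanical and easy to verify by computer. As an illustration, the following minimal Python sketch (our own helper names; it is not from the repository cited below) builds the edge sets of X^3 and Y^3 over the vertices {0} ∪ V_3 and checks that the two hypergraphs have the same number of edges and the same degree sequence, as hypomorphic hypergraphs must:

from collections import Counter

def mod8(i):                                    # representatives 1..8, i.e. V_3
    return (i - 1) % 8 + 1

C3  = {frozenset(mod8(j) for j in (i - 1, i, i + 1)) for i in range(1, 9)}
D3  = {frozenset(mod8(j) for j in (i - 3, i, i + 3)) for i in range(1, 9)}
G23 = {frozenset(mod8(j) for j in (i, i + 2, i + 4)) for i in range(1, 5)}
E2  = lambda i: (mod8(i), mod8(i + 4))          # E^3_2(x_i) = x_i + x_{i+4}
M0  = {frozenset({0, a, b}) for p, q in [(1, 2), (3, 4)]
       for a in E2(p) for b in E2(q)}
M1  = {frozenset({0, a, b}) for p, q in [(1, 4), (2, 3)]
       for a in E2(p) for b in E2(q)}
X3, Y3 = C3 | D3 | G23 | M0, C3 | D3 | G23 | M1

deg = lambda H: Counter(v for e in H for v in e)
assert len(X3) == len(Y3)                                     # same edge count
assert sorted(deg(X3).values()) == sorted(deg(Y3).values())   # same degree sequence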
Then,X^n[1] 𝐱 - X^n[1+2^r] 𝐱 = _r+3x_1 - x_ 1 + 2^r+2^2First, we show that M^n_0[1] 𝐱 - M^n_0[1+2^r] 𝐱 = 0Recall thatM^n_0 = x_0 _2x_1 x_2 + x_3 x_4Hence, we haveM^n_0[1] = x_0 _2x_2M^n_0[2] = x_0 _2x_1M^n_0[3] = x_0 _2x_4By Remark <ref>, we need only to consider r = 0,1: M^n_0[1] 𝐱 - M^n_0[2] 𝐱= x_0 ( _2x_2- _2x_1) = 0 by σ_0 𝐱 = 𝐱 M^n_0[1] 𝐱 - M^n_0[3] 𝐱= x_0 ( _2x_2- _2x_4) = 0 by σ_1 𝐱 = 𝐱which shows that M^n_0[1]𝐱 - M^n_0[1+2^r]𝐱 = 0. Therefore,X^n[1] 𝐱 - X^n[1+2^r] 𝐱 =M^n_0[1]𝐱 - M^n_0[1+2^r] 𝐱 + ∑_k = 2^n G_k^n[1]𝐱 - G_k^n[1+2^r]𝐱 = 0 + G_n-r^n[1] 𝐱 - G_n-r^n[1+2^r] σ_r𝐱= _r+3x_1 - x_ 1 + 2^r+2^2by Lemma <ref>. In <cit.>, the degree of each vertex in Γ_n is calculated: _Γ_n i=2^2n-3-1if1≤ i ≤ 2^n-2 or1 + 3· 2^n-2≤ i ≤ 2^n2^2n-3otherwise In particular, we note that Γ_n is not regular. Let ∼ denote the projective equivalence relation, i.e., two vectors satisfy 𝐱∼𝐲 if and only if 𝐱 = a 𝐲 for some a∈∖{0}.The following lemma may have independent interest, and is therefore stated for hypergraphs of any rank.Let 𝒢 = [n],E𝒢 be a hypergraph, of rank m≥ 2. Let ℋ = [n] ∪{0}, Eℋ be obtained from 𝒢 by appending a vertex 0 ∉ [n] with _ℋ0,i = _ℋ0,j, for each i,j∈ [n]. Let λ_𝒢,𝐯 and λ_ℋ, 𝐰 be the principal eigenpairs of 𝒢 and ℋ, respectively, where 𝐰 = w_0,w_1,…,w_n. Then, the following are equivalent: i. 𝒢 is regular. ii. 𝐯∼1. iii. 𝐰∼𝐰 := w_0, 1,…, 1for some w_0 ∈∖{0}.a* (i) ⟹ (ii): Follows from <cit.> and the uniqueness of the principal eigenpair.(i) ⟸ (ii): Note that _𝒢i = 𝒢[i] 1 = 𝒢[j]1 = _𝒢j for each i,j. * Let γ = _ℋ0,i = ℋ[0,i] 1 for each i∈ [n].Consider the link of i = 1,…, n :ℋ[i] 𝐱 = 𝒢[i] 𝐱 + x_0 ℋ[0,i] 𝐱(iii) ⟹ (ii): By assumption, 𝐰 is an eigenvector of ℋ. So, we haveℋ[i] 𝐰 = λ_ℋ· 1^m-1 = λ_ℋfor each i=1,…,n. We evaluate Equation <ref> at 𝐰 to obtain λ_ℋ = ℋ[i] 𝐰 = 𝒢[i] 1 + w_0 ℋ[0,i] 1 = 𝒢[i] 1 + w_0 γfor each i. Therefore, 1√(n)·1 is the principal eigenvector of 𝒢.(ii) ⟸ (iii): By assumption, the following equation is satisfied:𝒢[i] 1 = λ_𝒢· 1^m-1 = λ_𝒢 for eachi = 1,…, nConsider the vector 𝐮 =u,1,…,1. We would like to show that there are some constants u,λ_u∈_>0 such that the pair λ_u,𝐮 satisfies the eigenvalue equations for ℋ. When we evaluate Equation <ref> at 𝐮, we get ℋ[i] 𝐮 = 𝒢[i] 1 + u ·_ℋ0,i = λ_𝒢 + u γfor each i=1,…,n. Let λ_u : = λ_𝒢 + u γ. In particular, λ_u · 1^m-1 = ℋ[i] 𝐮 for i=1,…,n.Also, we haveℋ[0] 𝐮 =_ℋ0which is a constant. It is enough to show that ℋ[0] 𝐮 = λ_u u^m-1 for some u ∈_>0. In other words, it is enough to show that the equation _ℋ0 = λ_𝒢 + u γ· u^m-1=λ_𝒢 u^m-1 + γ u^mhas a solution, which follows from the Intermediate Value Theorem. Let n≥ 3 be fixed. Let 𝐱 be the principal eigenvector of X^n. Then, _2^n x_1 - x_3 ≠ 0.Suppose, for a contradiction, that _2 x_1 - x_3= 0. By induction on r, we show that σ_r𝐱 = 𝐱 for each r =0,…, n-2.For the base case, let r = 0. By Lemma <ref>, we have X^n𝐱 - X^nσ_0𝐱= 2 ·_2x_1 - x_ 3 ·_3 x_1 - x_ 5 ·_3- x_ 3 + x_ 7= 0By the uniqueness of the principal eigenpair, we obtain 𝐱 =σ_0𝐱. Now, let 1≤ r ≤ n-3 be fixed. By the inductive hypothesis, we haveσ_i𝐱 = 𝐱for each i=0,…,r-1. By Lemma <ref>, we haveX^n[1] 𝐱 - X^n[1+2^r-1] 𝐱 = _r+2x_1 - x_ 1 + 2^r+1^2Since σ_r-1𝐱 = 𝐱, it follows that x_1 = x_1+2^r-1 and so, X^n[1] 𝐱 - X^n[1+2^r-1] 𝐱 = 0. Hence, we have _r+2x_1 - x_ 1 + 2^r+1 = 0. Using Lemma <ref>, we obtainX^n𝐱 - X^nσ_r𝐱= f^n_r𝐱As f^n_r𝐱 is divisible by _r+2x_1 - x_ 1 + 2^r+1, we obtain X^n𝐱 - X^nσ_r𝐱 = 0. By the uniqueness of the principal eigenpair, we conclude 𝐱 =σ_r𝐱, which finishes the induction. 
Equation <ref> implies x_i = x_j for each 1≤ i < j ≤ 2^n-1. Since θ_n(𝐱) = 𝐱, we obtain x_i = x_j for each 1≤ i < j ≤ 2^n. By Lemma <ref>, we infer that Γ_n is a regular hypergraph, which contradicts Lemma <ref>. We may now prove our main theorem. The principal eigenvalues of X^n and Y^n are different. Let (λ, 𝐱) and (μ, 𝐲) be the principal eigenpairs of X^n and Y^n, respectively. In particular, we have λ = X^n(𝐱) and μ = Y^n(𝐲). By Lemma <ref>, we have Y^n(𝐱) - X^n(𝐱) = x_0 E_2(x_1 - x_3)^2. Therefore, μ - λ = Y^n(𝐲) - X^n(𝐱) = (Y^n(𝐲) - Y^n(𝐱)) + (Y^n(𝐱) - X^n(𝐱)) = (Y^n(𝐲) - Y^n(𝐱)) + x_0 E_2(x_1 - x_3)^2 > 0 by Lemma <ref>. In other words, μ > λ. The infinite pair of hypergraphs {X^n, Y^n}_n≥ 3 are hypomorphic, but their characteristic polynomials are different. § ACKNOWLEDGEMENT Thanks to Alexander Farrugia for helpful discussions that motivated much of the present work. SageMath (<cit.>) was integral to the development of our arguments; the second author maintains a repository of SageMath code (<cit.>) that can be used to perform hypergraph calculations, such as the ones that appear above.
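The acknowledgement above points to SageMath code for hypergraph calculations. As a rough companion illustration of the kind of computation these arguments rest on — finding the principal eigenpair of a rank-3 hypergraph — the following is a minimal Python sketch. It assumes the convention H(𝐱) = Σ_{e∈E} Π_{i∈e} x_i (the paper's normalization may differ by a constant factor, which rescales λ but not the eigenvector), assumes every vertex lies in at least one edge, and uses a Ng–Qi–Zhou-style power iteration for nonnegative tensors; function names are illustrative and not taken from the authors' repository.

```python
import numpy as np

def gradient(edges, x):
    # Gradient of H(x) = sum over edges {i, j, k} of x_i * x_j * x_k;
    # coordinate i is the link H[i] evaluated at x.
    g = np.zeros_like(x)
    for i, j, k in edges:
        g[i] += x[j] * x[k]
        g[j] += x[i] * x[k]
        g[k] += x[i] * x[j]
    return g

def principal_eigenpair(edges, n, iters=5000, tol=1e-12):
    # Power iteration for the principal eigenpair (lambda, v) satisfying
    # (dH/dx_i)(v) = lambda * v_i^(m-1) with m = 3.
    x = np.full(n, 1.0 / np.sqrt(n))
    for _ in range(iters):
        y = np.sqrt(gradient(edges, x))   # undo the power m - 1 = 2
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    lam = np.mean(gradient(edges, x) / x ** 2)  # constant across coordinates at a fixed point
    return lam, x

# Toy usage: a 4-vertex rank-3 hypergraph with two edges.
lam, v = principal_eigenpair([(0, 1, 2), (1, 2, 3)], 4)
```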
http://arxiv.org/abs/2312.16152v1
{ "authors": [ "Joshua Cooper", "Utku Okur" ], "categories": [ "math.CO", "05C60 (Primary) 05C31, 05C50, 05C65 (Secondary)", "G.2.2" ], "primary_category": "math.CO", "published": "20231226182514", "title": "Polynomial Reconstruction Problem for Hypergraphs" }
Z. Li: Key Laboratory of Stars and Interstellar Medium, Xiangtan University, Xiangtan 411105, Hunan, P.R. China ([email protected], [email protected]) | Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Road, Beijing 100049, China | International Space Science Institute (ISSI), Hallerstrasse 6, 3012 Bern, Switzerland | Physikalisches Institut, University of Bern, Sidlerstrasse 5, 3012 Bern, Switzerland Type I X-ray bursts in the ultracompact X-ray binary 4U 1820–30 are powered by the unstable thermonuclear burning of hydrogen-deficient material. We report the detection of 15 type I X-ray bursts from 4U 1820–30 observed by NICER between 2017 and 2023. All these bursts occurred in the low state, with the persistent flux in the range 2.5–8×10^-9  erg s^-1 cm^-2 in 0.1–250 keV. The burst spectra during the tail can be well explained by a blackbody model. However, for the first ∼5 s after the burst onset, the time-resolved spectra showed strong deviations from the blackbody model. A significant improvement of the fit can be obtained by taking into account the enhanced persistent emission due to the Poynting-Robertson drag, extra emission modelled by another blackbody component, or reflection from the surrounding accretion disk. The reflection model provides a self-consistent and physically motivated explanation. We find that the accretion disk density changed with a 0.5 s delay in response to the burst radiation, which indicates distortion of the accretion disk during X-ray bursts. From the time-resolved spectroscopy, all bursts showed the characteristic of photospheric radius expansion (PRE). We find one superexpansion burst with an extreme photospheric radius r_ ph>10^3 km and a blackbody temperature of ∼0.2 keV, thirteen strong PRE bursts with r_ ph>10^2 km, and one moderate PRE burst with r_ ph∼55 km. NICER views moderate, strong, and extreme photospheric expansion bursts from the ultracompact X-ray binary 4U 1820–30 Wenhui Yu^1, Zhaosheng Li^1 (corresponding author), Yongqi Lu^1, Yuanyue Pan^1 (corresponding author), Xuejuan Yang^1, Yupeng Chen^2, Shu Zhang^2, Maurizio Falanga^3,4 Received XX; accepted XX ================================================================================================================================================================================================================== § INTRODUCTION Ultracompact X-ray binaries (UCXBs) are a subtype of low-mass X-ray binaries (LMXBs), defined by orbital periods of less than 80 minutes, in which a neutron star (NS) or black hole accretes matter from its low-mass companion <cit.>. In such short orbital periods, the donor star must be small enough to fit within the orbit, such as a white dwarf or helium star <cit.>. If the compact object is an NS, the accreted material accumulates on the stellar surface and can occasionally trigger unstable thermonuclear bursts <cit.>. The chemical composition of the accreted matter in LMXBs affects the properties of X-ray bursts <cit.>. The burst duration depends on the cooling of the burning layer and therefore scales with the ignition depth <cit.>. Normal type I X-ray bursts are fueled by a mixture of hydrogen and helium or by pure helium, lasting ∼10–100 s with recurrence times of hours <cit.>.
Intermediate-duration bursts are due to unstable burning of pure helium in deeper layers <cit.>, with the duration of tens of minutes and recurrence time of several weeks <cit.>. Superbursts are powered by carbon at an ignition depthof ∼10^11-12 g  cm^-2<cit.>, which lasts from hours to days and recurs from days to years <cit.>. The X-ray burst spectra can be described by a blackbody model with the temperatures in the range of kT_ bb≈0.5-3 keV.In UCXBs, resulting from the unstable burning of helium-rich accreted from the companion, the burst peak fluxes usually reach or slightly exceed the Eddington limit, showing the photospheric radius expansion (PRE) from time-resolved spectroscopy<cit.>. Moreover, some bursts exhibit strong expansion of the radius excess ∼100 km <cit.>. A small fraction bursts are so extreme that the radii reach ∼ 10^3  km, implying the photosphere expands a factor of > 100, which is referred as "superexpansion" burst <cit.>.The energetic X-ray photons released from the bursts can interact with the matter surrounding the NS in various ways, significantly impact on both the disk and corona <cit.>.Numerical simulation reveals that the outpouring of radiation during an X-ray burst affects the disk structure andgenerates radiatively driven warps in a large fraction of the disk <cit.>. Observationally, the evidence is usually found in the emergent spectra of X-ray bursts that deviate from the Planck function. The enhanced persistent emission during the X-ray bursts caused by the Poynting-Robertson drag has been proposed to produce the soft and hard excess in the burst spectra <cit.>. Alternatively, the excess can also be explained by the burst emission which was reflected by the accretion disk <cit.>. 4U 1820–30 is a persistent and atoll X-ray source, which was first discovered by the Uhuru satellite <cit.>. It is also the first X-ray burster located in the globular cluster NGC 6642 at a distance of 8.4±0.6 kpc <cit.>. The binary orbital period of 685 s suggests is an UCXB <cit.>.The donor of 4U 1820–30 is probably a helium white dwarf with a mass of ∼ 0.06 - 0.07  M_⊙<cit.>. The binary inclination angle to the observer is constrained in the range of 35^∘-50^∘<cit.>. The source onlyshows X-ray bursts during the low state with the recurrence time of 2–4 hr <cit.>. Two superbursts and two long X-ray burstshave been observed in 4U 1820–30 <cit.>.Comparing the thermonuclear flash models with the observed recurrence time, burst fluences and persistent fluxes, <cit.> proposed that a small fraction of hydrogen and heavy elements is requiredin accreted matter. We noted that <cit.> reported a strong PRE burst in 4U 1820–30 fromobservations. Besides a blackbody component, an extra optically thick Comptonization model has to be added to fit the time-resolved burst spectra appropriately. <cit.> found the emission line at 1 keV, and the absorption lines at 1.7 and 3.0 keV, during the PRE phases during X-ray bursts in 4U 1820–30. Beyond the analysis of <cit.>,we study all X-ray bursts from 4U 1820–30 detected bybetween 2017 and 2023 in this work. In Section <ref>, we describe the properties of the persistent emissions and X-ray bursts in between 2017-2023. In Section <ref>, we perform the spectral fitting of the persistent emissions and the time-resolved spectra of the X-ray bursts. 
We discuss and summarize the results in Section <ref> and Section <ref>, respectively. § OBSERVATIONS AND DATA REDUCTION NICER observed 4U 1820–30 with net unfiltered exposure times of 244, 8, 149, 17, 141, 166, and 29 ks in 2017–2023, respectively, for a total exposure time of 754 ks. We processed all archived NICER data by applying the standard filtering criteria using HEASOFT V6.30.1 and the NICER Data Analysis Software (NICERDAS) V1.0.2. The 1 s binned light curves were extracted in the 0.5–10 keV energy range from the calibrated unfiltered (UFA) files and the cleaned event files to search for type I X-ray bursts. We found 15 type I X-ray bursts, including five in 2017, nine in 2019, and one in 2021. Three of them appear only in the UFA files; see Table <ref>. The spectral results of bursts #1 and #1–5 have been reported in <cit.> and <cit.>, respectively. It is worth noting that the NICER data did not cover the two long X-ray bursts observed previously. We also extracted 64 s binned light curves in the energy ranges 0.5–1.1, 1.1–2.0, 2.0–3.8, and 3.8–6.8 keV using the command xselect. We calculated the soft and hard colors as the count rate ratios 1.1–2.0 keV/0.5–1.1 keV and 3.8–6.8 keV/2.0–3.8 keV, respectively. The intensity is defined as the count rate in the 0.5–10 keV energy range. §.§ The persistent light curves The light curves and hardness ratio of 4U 1820–30 observed by NICER from June 2017 to August 2023 are shown in the top panel of Fig. <ref>. The grey regions cover the epochs of the X-ray bursts and are zoomed in on in the bottom panels. The left bottom panel of Fig. <ref> shows the 2017 pre-burst persistent flux observed by NICER between MJD 57987–58005. The pre-burst persistent light curve slowly decreased from ∼2000 to ∼1400 c s^-1 in the five days before the onset of the first X-ray burst in our sample. It fluctuated between ∼1400 and ∼1800 c s^-1 within the two days when the burst was observed, and after that it rose to ∼3000 c s^-1 within 10 days. The trend of the hardness ratio during this period was similar to that of the light curve, decreasing from 0.38 to 0.3 in the first five days and then rising to 0.4 in a relatively short time. The center bottom panel of Fig. <ref> presents the 2019 observations between MJD 58640–58670; the light curve decreased from ∼1200 to ∼600 c s^-1 in the first five days. After maintaining a low count rate for the next ten days, it rapidly increased to ∼1600 c s^-1 in two days and varied over the subsequent 15 days. However, the trend of the hardness ratio during this period differed from that of 2017, showing the opposite behavior to the light curve. The right bottom panel of Fig. <ref> displays the 2021 observations between MJD 59330–59360; the light curve was slowly rising when the burst was observed, and the hardness ratio decreased first and then increased to around 0.4. The hardness intensity diagram (HID) and the color–color diagram (CCD) of the 2017, 2019, and 2021 observations of 4U 1820–30 are displayed in Fig. <ref>. Compared with the results from <cit.>, although the CCD is produced in different energy bands, its shape from the NICER observations is quite similar, but rotated around 90 degrees counterclockwise. The data show two branches, the high (soft banana) state and the low (hard island) state. The HID and CCD prior to each X-ray burst are also marked in Fig. <ref>.
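To make the color and intensity definitions above concrete, the following is a minimal Python sketch of how they could be computed from the four 64-s binned band light curves; the array names are placeholders of our own, not NICERDAS outputs.

```python
import numpy as np

def colors_and_intensity(r_05_11, r_11_20, r_20_38, r_38_68, r_05_10):
    # Inputs: 64-s binned count rates (c/s) in 0.5-1.1, 1.1-2.0, 2.0-3.8,
    # 3.8-6.8, and 0.5-10 keV. Returns per-bin soft color, hard color, intensity.
    soft = np.asarray(r_11_20) / np.asarray(r_05_11)   # 1.1-2.0 / 0.5-1.1 keV
    hard = np.asarray(r_38_68) / np.asarray(r_20_38)   # 3.8-6.8 / 2.0-3.8 keV
    return soft, hard, np.asarray(r_05_10)
```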
No bursts are detected for soft color > 1.38 or intensity > 1800 c s^-1. This is consistent with the well-known fact that the bursts in 4U 1820–30 take place predominantly at low luminosity <cit.>. §.§ The X-ray burst light curves We extracted 0.1 s binned light curves for all 15 bursts in two energy bands, 3–10 keV and 0.5–10 keV. The burst onset time is set where the count rate in the 0.5–10 keV band exceeds the pre-burst rate by a factor of 1.5. The pre-burst rate is defined as the average count rate in each band in a 64 s window ending 10 s prior to the burst onset time. The burst end time is determined at the point where the burst count rate decays back to the pre-burst level. The burst epoch (MJD) and the peak rate in 0.5–10 keV of all 15 bursts are listed in Table <ref>. The corresponding light curves in the two energy bands are displayed in Fig. <ref>. Part of the tail of burst #10 was truncated due to a data gap. The source persistent emission is considered as background and subtracted from the burst count rate in each band. Most of the bursts have peak count rates in the range 1.9–2.5×10^4 c s^-1, while burst #2 has a slightly lower peak of ∼1.5×10^4 c s^-1. The X-ray burst light curves in 0.5–10 keV rise rapidly in 0.2–1 s and decrease slowly over ∼10 s, consistent with hydrogen-poor bursts. In Fig. <ref>, dips clearly appear in bursts #2 and #8 around their peaks in 0.5–10 keV. The light curve shapes in 3–10 keV are quite different from those in 0.5–10 keV. The tails of the 3–10 keV light curves show a second peak rather than an exponential decay. Moreover, dips in 3–10 keV are visible in all bursts but #2. The duration of the dip is defined as the time between the first peak and the following peak of the hard-band light curve. The dip count is the ratio of the minimum photon rate during the dip to the peak photon rate in 3–10 keV. Except for burst #2, the bursts exhibit a dip around the burst peak lasting 0.6–2.7 s, similar to bursts previously observed in a comparable energy range <cit.>. We also find that the dip count rate in 3–10 keV in bursts #7 and #8 dropped below the pre-burst levels. Since the spin frequency of 4U 1820–30 is unknown, we searched for burst oscillations in the event files. The power spectra are calculated in 0.3–12, 0.3–3, 3–6, and 6–12 keV by applying the fast Fourier transform (FFT) with Leahy normalization <cit.>. We adopt the moving window method with a width of Δ T = 4 s and a step of 0.5 s. For each window, the independent Fourier frequencies between 50 and 2000 Hz and the corresponding powers are computed with a step of 0.25 Hz <cit.>. No significant burst oscillations are detected in these bursts. The fractional rms amplitude is estimated from the relation A_rms≈√(P_ s/N_ m), where N_ m is the total number of X-ray photons in the burst interval and P_ s is the maximum power of the signal. We obtained an upper limit on the fractional rms amplitude of 5.7%. § SPECTRAL ANALYSIS We performed the spectral analysis using Xspec v12.12.0 <cit.>. All spectral models mentioned in this work can be found on the Xspec manual webpage.[<https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/manual.html>] We extracted the spectra and the 3c50 background spectra <cit.>. The associated ancillary response files (ARFs) and response matrix files (RMFs) were produced.
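As a concrete companion to the burst-oscillation search described above, the following Python sketch computes the Leahy-normalized power P = 2|Σ_k exp(2πi f t_k)|²/N_ph on a 0.25 Hz grid between 50 and 2000 Hz for one 4 s window, and converts a candidate power into a fractional rms amplitude via A_rms ≈ √(P_s/N_m). This is a from-scratch illustration, not the authors' pipeline, and it assumes the window contains at least one photon.

```python
import numpy as np

def leahy_powers(times, t0, width=4.0, fmin=50.0, fmax=2000.0, df=0.25):
    # times: photon arrival times (s) from the event file; one moving window.
    t = times[(times >= t0) & (times < t0 + width)]
    nph = t.size
    freqs = np.arange(fmin, fmax + df, df)
    powers = np.empty_like(freqs)
    for k, f in enumerate(freqs):                 # slow but explicit
        s = np.exp(2j * np.pi * f * t).sum()
        powers[k] = 2.0 * np.abs(s) ** 2 / nph    # Leahy normalization
    return freqs, powers

def rms_amplitude(p_signal, n_photons):
    # A_rms ~ sqrt(P_s / N_m), as in the text.
    return np.sqrt(p_signal / n_photons)
```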
The uncertainties of all parameters are quoted at the 1σ confidence level.§.§ The persistent spectraWe extracted the pre-burst persistent spectra in a 64 s window ending 10 s prior to the burst onset time for all 15 X-ray bursts in 4U 1820–30. We performed the optimal binning for each persistent spectrum by usingas recommended by theteam. Following the broadband spectral analysis by <cit.>, we fitted the pre-burst persistent spectra with a combination of a blackbody ( in Xspec) and a Comptonization component <cit.>, modified by the Tübingen–Boulder model, TBabs, with abundances from <cit.>. These two components represent the thermal radiation from the accretion disk, NS and/or boundary layer, and the Comptonization from the corona around the disk, respectively. The free parameters are, the equivalent hydrogen column, N_ H, for TBabs; the blackbody temperature, kT_ bb, and the normalization, K_ bb, for ; the temperatures of the seed photons and hot electrons, T_ 0 and T_ e, respectively; the optical thickness of the electron slab, τ_ e, and the normalization, K_ comp, for . The geometry was set as disk. We performed the joint spectral fitting in three groups, five pre-burst spectra in 2017, nine in 2019 and one in 2021, respectively. For each group, we tied the absorption column density across all spectra, and set N_ H, the parameters of theandfree to change.This model can fit all persistent spectra well for the χ^2 per degree of freedom (dof), χ^2_ν, close to unity. The best fitted parameters are listed in Table <ref>. We obtained the absorption column density(2.33 ±0.04)× 10^21, (1.93 ±0.07)× 10^21, and (1.93 ±0.13) × 10^21  cm^-2for the pre-burst spectra in 2017, 2019 and2021, respectively.They are consistent with N_ H=(2.5 ±0.3)× 10^21 and (1.63 ±0.02) × 10^21  cm^-2 obtained from theChandra spectra <cit.> and thespectra<cit.>, respectively. No distinct features are appeared in the residual.We also calculated the unabsorbed bolometric flux in energy range of 0.1–250 keV by using the tool . Moreover, we also generated other two persistent spectra for the total exposure of 256 s, from the right of HID with the count rate of ∼3000  c s^-1 and hardness ratio of ∼0.38, and from the middle of HID with the count rate of ∼2000  c s^-1 and hardness ratio of ∼0.32, respectively. The above-mentioned model can also fit the spectrum from the middle of HID very well. The best-fitted parameters are, N_ H = (2.40 ±0.11) × 10^21  cm^-2, kT_ bb = 0.55 ±0.03 keV, R_ bb = 18 ±9 km, T_ 0 = 0.02 ±0.01 keV, T_ e = 2.31 ±0.12 keV, τ_ e = 8.73 ±0.43, the unabsorbed bolometric flux in 0.1–250 keV, F = (9.90 ±0.02) × 10^-9erg s^-1 cm^-2, and χ^2 = 147.6 for 132 dof, respectively. For the spectrum from the right of HID, the best-fitted parameters of the continuum model are, N_ H = (2.44 ±0.09) × 10^21  cm^-2, kT_ bb = 0.59 ±0.09 keV, R_ bb = 15 ±9 km, T_ 0 = 0.06 ±0.02 keV, T_ e = 2.3 ±0.1 keV, τ_ e = 9.5 ±0.4, the unabsorbed bolometric flux F = (14.99 ±0.06) × 10^-9erg s^-1 cm^-2, and χ^2 = 229.7 for 136 dof, respectively. In Fig. <ref>, we showed these two spectra and the pre-burst persistent spectrum from burst #8. The best fitted models were also shown. From high to low state, the unabsorbed bolometric flux decreased from 15 × 10^-9 to 2.5× 10^-9erg s^-1 cm^-2, andthe absorption column density was also dropped from 2.5× 10^21 to 1.9× 10^21  cm^-2.In the low state, the blackbody component covers at a lowest temperature, ∼ 0.1 keV, with a radius around 100 km. 
As the spectra move to the high state, the blackbody temperatureincreased from 0.2 to 0.6 keV and the its radius shrink from ∼100 to ∼14 km. The size of the blackbody component of all spectra are larger than the typical radius of NS, which implies its emission from the accretion disk. Two spectral lines were only shown in the extreme high state.The plasma temperature is very close across these spectra in different states. Compared with the low state, the optical depth is higher in the high sate. All bursts were occurred at the persistent flux ≈2.5-8×10^-9 erg s^-1 cm^-2 in 0.1-250 keV, that is ∼5-15% L_ Edd. At higher flux, the activity of X-ray bursts was quenched.§.§ X-ray burst time-resolved spectroscopyTo investigate the time-resolved spectroscopy during X-ray bursts, we extracted the burst spectra with the exposure time varying between 0.1–4 s, to guarantee each spectrum has at least 1500 counts in the energy range between 0.5–10 keV. All burst spectra were grouped using grappha with a minimum count of 20 in each bin. We used an absorbed blackbody model, ×, to fit the burst spectra, where the pre-burst spectrum was regarded as background and unchanged during bursts. We fixed the hydrogen column density at 2.33 × 10^21,1.93 × 10^21, and 1.93 × 10^21  cm ^-2, for the X-ray bursts observed in 2017, 2019 and 2021, respectively.We found that the blackbody model can fitthe spectrawell during the cooling tail. However, for the first ∼ 5 s of all 15 X-ray bursts, the model fitted the spectrapoorly, i.e., the maximum χ^2_ν∼3-6.The fit can be significantly improved by using the enhanced persistent emission model (Sec. <ref>), adding a reflection component from the surrounding accretion disk (Sec. <ref>), or the double blackbody model (Sec. <ref>). In Sec. <ref>, <ref> and <ref>, the absorption and/or emission lines were also involved if necessary, butthedetails will be provided in an independent paper.§.§.§ Enhanced persistent emissionWe used the f_ a-model, that is, × +f_a× + , where the parameter f_ a ( in Xspec) evaluated the enhancement of the persistent emissions during X-ray burst. We subtracted the instrumental background. The persistent spectral shape was assumed unchanged during bursts. Therefore, the parameters of bbodyrad + compTT were fixed to the best-fitted values listed in Table <ref>. The absorption column density, N_ H, was fixed as the best-fitting value from the persistent spectra, see Table <ref>. We only allowed the parameters ofand f_ a to change freely in fitting. The fitted parameters are shown as red dot in Figs. <ref> and <ref>. The χ^2 are around 1.0, which is displayed in the bottom panel of this figure implying the significant improvements by the f_ a-model compared with the blackbody model, shown as grey dashed line. We note that <cit.> also reported the analysis of burst #1 by using the f_ a-model. They extracted spectra with exposure of 0.03 s at the peak count rate and obtained the maximum radius of 190 km. Compared with their results, we obtained a smaller radius at 154±10 km by using spectra with 0.1 s, and smaller blackbody fluence, 2.5×10^-7 erg cm^-2 versus 2.8×10^-7 erg cm^-2. However, the mean f_ a of PRE phase and maximum temperature in our work are consistent with <cit.>.From the time-resolved spectroscopy, all bursts showed the characteristics of PRE burst <cit.>. The blackbody radii expanded and reached the maximum within 1 s, meanwhile, the temperature dropped to the minimum. 
The strong expansion period lasted only ∼ 0.6 s, and then the blackbody radii began to shrink. Until the radii close to 50 km, the subsequent decrease slowed down, entered into a plateau of moderate expansion. Finally, when the atmosphere moved back to the NS surface, the blackbody temperature increased to its peak and the blackbody radii decreased to ∼10 km, which corresponds to the touchdown moment, i.e., the photosphere moved back to the NS surface. Thirteen bursts exhibited strong radius expansion in excess of a factor 10. The photospheric radius of burst #8increased with a factor > 100, that is r_ph≈ 1318  km, accompanied by the minimum temperature T_ bb≈ 0.17   keV, which can be referred as a superexpansion burst <cit.>. After the onset of the burst, thef_ a raised rapidly and reached to a mean value of f_ a∼ 8.5, then it began to decrease during the moderate expansion phase and returns to unity in the cooling tail. The fitted peak flux, the maximum radius, the touchdown flux are listed in Table <ref>. In addition, the distribution of f_ a during the PRE phase andthe maximum f_ a for each burst are shown in Fig. <ref>. The maximum f_ a is ∼16 from burst #7. We noted that the maximum f_ a for each bursts is inversely proportional to the flux persistent emission, see Fig. <ref>. §.§.§ The disk reflection modelThe relativistic reflection model, <cit.>[<https://www.sternwarte.uni-erlangen.de/ dauser/research/relxill/>], describes the radiation reprocessed from an accretion disk around NS, in which the seed photons are characterized by a blackbody spectrum. The assumptions of this model are consistent with theaccretion disk illuminated by the burst radiation, which has been successfully explained the deviations of the burst spectra in 4U 1636–536 and 4U 1730–22 <cit.>. In previous works, the reflection model and the f_ a-model improved the burst spectral fitting equally well, however, the burst temperatures and bolometric fluxes from the disk reflection model were less than the values from the f_ a-model by the factors of ∼0.25 and 2.3, respectively. Since the distances to 4U 1636–536 and 4U 1730–22were only estimated by using the PRE bursts, it is difficult to distinguish these two models by comparing the measured peak flux to the Eddington flux. In this work, for the first attempt, as the same method proposed in <cit.>, we fitted the burst spectra by using the model ×. The persistent emission was regarded as background and subtracted. The temperature of the seed photons was tied with the blackbody from the burst. The reflection fraction was fixed at -1, which means all detected emission was attributed to reflection. Assuming a canonical NS mass M_ NS=1.4M_⊙ and radius R_ NS=10 km, as well as the derived spin frequency of 275 Hz from quasi-periodic oscillations <cit.>, we obtained the dimensionless spin parameter, a=0.13.We also tried different spin parameters, a=0 and a=0.3, which provide consistent results. Therefore, the value of a is not important here. Other parameters were fixed at their default values. Only the normalization of thewas allowed to change. 
We found that the mean peak flux is 3.7×10^-8  erg s^-1 cm^-2, less than the flux observed with /PCA from 4U 1820–30, F_ Edd≈5.4×10^-8  erg s^-1 cm^-2<cit.>, and also smaller than the Eddington flux, 4.0–6.8 ×10^-8  erg s^-1 cm^-2, for a NS with a mass of 1.4–1.8 M_⊙ and radius of 10–12 km at a distance of 8.4 kpc.Subsequently, we let the logarithmic value of the accretion disk density (in units of cm^-3), log n, and the iron abundance of the material in the accretion disk in units of solar abundance, A_ Fe, of themodel,free to vary. We found that the solar iron abundance were around 5.0 ±1.5 for most bursts, therefore, it was fixed at 5.0.This model yielded a reasonable description of all burst spectra, and the χ^2_ν are around 1.0, which means the disk reflection can also explain the derivations properly. In Figs. <ref>, <ref>, <ref>, and <ref>, we compared the fitted parameters of the disk reflection model, which were marked as blue squares, with the enhanced persistent emission model and the double blackbody model (see Sect. <ref>), respectively. The bolometric flux of thecomponent, F_refl, was calculated in the energy range of 0.1–250 keV. We only showed the F_refl for the first ∼5 s for all bursts, because there are statistically insignificant for thereflection component during the cooling tail. Based on the time-resolved spectra, we found that the disk reflection model provides similar parameters from X-ray bursts compared with another two models, see Table <ref>. Same as the f_ a-model, 13 bursts exhibited strong radius expansion in excess of a factor of >10, and the superexpansion burst radius reached around 1484±105 km. The distribution of the ratio F_refl/F_per, and its maximum of each burstduring the PRE phase are displayed in Fig. <ref>.We showed the log n evolution of bursts #7 and #8 in Fig. <ref>.We found that the accretion disk density have been changed during the burst. For burst #7, its flux showed two equivalentpeaks at ∼3.5×10^-8  erg cm^-2 s^-1 in 0.5 and 1.5 s after the onset. As a response,the accretion disk density increased from 10^15  cm^-3 to the upper limit of therelxillNS model, 10^19  cm^-3, in 1 s after the onset, then it dropped to a local minimum of 10^17  cm^-3 in 0.5 s. In next 1 s, the accretion disk density reached to 10^19  cm^-3 again, and decreased to 10^17  cm^-3 in the cooling tail. The changed of disk density was delayed 0.5 s related to the burst flux variation. Burst #8 and other bursts have similar trends but with larger fluctuations. §.§.§ Double blackbody modelWe also employed the double blackbody model, × + , to fit thetime-resolved burst spectra. The hotter blackbodyemitted from a neutron star photosphere and the colder one maintained a stable flux through the burst and could possibly arise from the accretion disk. Same as the f_ a-model, we found that all bursts showed significant PRE, and there are 14 bursts withr_ ph>100 km. The superexpansion burst of burst # 8, approached the peak of blackbody radii around 1012±53 km with the minimum of the temperature ∼ 0.17   keV. The colder blackdody component was statistically significant for the first ∼5 s for all bursts, and can be neglected during the cooling tail. The parameters fitted by the double blackbody model are shown as red dot in Fig. <ref>. The χ^2_ν are displayed in bottom panels. For each burst, the photosphere reached its peak within 1 s. The bolometric fluxes at the peak, F_ peak, and at the touchdown moment, F_ TD, for all burst are listed in Table <ref>. 
The double blackbody model provides quite similar results of the burst temperature and radius compared with the disk reflection models. It is not surprising because the shape of therelxillNS component for the fitted parameters in 0.5–10 keV are close to blackbody.Since the double blackbody model is phenomenologically simpler, in the following we focus on this model to analyze the burst properties and the spectral features. We also attempted to include the possible absorption edges around 6–10 keV in the model. We found the edge cannot improve the fit significantly and obtained the absorption depth τ_ max < 3×10^-3. We define the duration of the superexpansion or strong photospheric expansion phase, t_se, as the time between the first peak of hard light curve and the start of the moderate expansion from the result of time-resolved spectroscopy. The duration of the moderate expansion phase, t_me, is calculated as the time between start of the moderate expansion and the time of touchdown. We obtained the burst fluence, E_ b, by summing the measured flux over the burst, and calculated the decay time, τ=E_ b/F_ peak. For burst #10, part of data in the tail were unavailable, therefore, we obtained its burst fluence from two parts, the rise part from the above-mentioned way, and the decay part by fitting the exponential decay of burst flux with the function F=Aexp-t/τ,this is Aτ.The t_se, t_me, E_ b and τ are listed in Table <ref>. In all bursts, the t_se lasts ∼ 1 s, but all t_me are longer than t_se. § DISCUSSION In this work, we analyzed theobservations collected during 2017–2023 from 4U 1820–30. We extracted the light curves and produced the HID and CCD of the persistent emissions fromobservations. The persistent emission can be fitted by a combination ofand .We observed 15 type I X-ray bursts. No significant burst oscillations in thest bursts have been found. All bursts occurred in the low spectra state, same as the phenomenon found by<cit.>. They attributed that in the high state the thermonuclear burning is stable due to high accretion rate, implying the quenched of the unstable burning activity.§.§ Burst spectral fitting All the time-resolved burst spectra in first 5 s showed significant deviations from the blackbody model, which can be explained by the f_ a model, the reflection model, or the double blackbody model, providing comparable χ^2_ν close to unity. We can distinguish the physically motivated and self-consistent model based on the fitted results. <cit.> found there is a strong excess at the lowest energies of the spectra in burst #1. Such excess iswell described by enhanced Comptonization emission (f_ a-model) <cit.>. They found the Comptonization component in the model becomes six times brighter during the burst, which is substantially larger than the model prediction. These authors speculated that it could be caused by the burst emission being reprocessed by the NS environment. We also found that theblackbody model fitted the time resolved spectra poorly for all 15 X-ray bursts at first ∼5 s with χ^2_ν> 2.Even though the f_ a-model fit the burst spectra very well, it is not a self-consistent way to explain the derivation from black body, seediscussions in <cit.>. We obtained that the mean f_ a factor of all burst spectra fitted by f_ a-model is 10, by assuming only the amplitude of the persistent emissions free to change. From the persistent spectra, the persistent fluxes increased a factor of 6 from the low state to high state. 
The spectral parameters, i.e., the temperatures of blackbody and seed photons, were also changed. Some hints suggest that the persistent spectral shape have been also varied during bursts <cit.>. Therefore, some spectral parameters could also change during X-ray bursts if the persistent emission increased several times than preburst.Moreover, the f_ a factor measured at the dips for bursts #7 and #8 are >10, which contradicts with the observations, i.e., there should be some photons left after the persistent emission subtracted.In addition, for each burst, we estimated the count rates contributed solely from the burst radiation by using the fitted burst parameters and the ARF and RMF of . We then reconstructed the burst light curves in 3–10 keV. For bursts #7 and #8, the burstcount rates are normalized to the pre-burst light curves in 3–10 keV, see top panels in Fig. <ref>. During the dips, the burst count rates are close to the pre-burst levels, implying that the persistent emissions did not increased in 3–10 keV during bursts. In the bottom panels in Fig. <ref>, we subtract the burst count rates from the totalburst light curvesin 3–10 keV and normalize to the pre-burst emissions. The dotted lines in the bottom panels in Fig. <ref> represent that the count rates during bursts are exactly the same as the pre-burst emissions. For the first 5 s during bursts, the f_ a-model provides the enhancement of the persistent emission at a factor of 5–15 in 3–10 keV. However, the count rates of the persistent emission (solid line) during the bursts are far below the f_ a-model prediction (dashed line). The rest of bursts have the same properties.This fact strengthens the idea that the f_ a-model might not be a proper manner to handle the persistent emission that is variable during X-ray bursts. Therefore, we conclude that the f_ a-model can not explain the deviations properly.Subsequently, the excess are fitted well by using the double blackbody model andthe disk reflection model,which have been adopted previously to fit the X-ray burst spectra from 4U 1820–30<cit.>, SAX J1808.4–3658 <cit.>, 4U 1636–536 and 4U 1730–22 <cit.>. The double blackbody and reflection models give a roughly equivalent interpretation that the soft excess tracks an interaction between the burst emission and the neutron star environment. For both model, the burst blackbody fluxes are very close because the shape of these two models are quite similar in 0.5–10 keV.Since the double blackbody model was proposed phenomenologically, we suggest that only the reflection model provides a physically motivated and self-consistent explanation to the observations. The reflection component can contribute ∼ 30% of the total burst emission in the strong PRE phase. Moreover, we found the accretion disk density changed as response to the burst flux with 0.5 s delay. Compared with the f_ a-model, they provide a comparable goodness of fit, resulting in similar burst blackbody evolutions. But the maximum fluxes obtained from the f_ a-model are slightly higher than the results from the reflection model and the double blackbody model. The maximum fluxes of the latter two models slightly lower than the Eddington flux observed with RXTE (PCA) from 4U 1820–30 <cit.>, F_ Edd≈5.4×10^-8  erg s^-1 cm^-2. §.§ The superexpansion and strong photospheric expansion In previous works, most of superexpansion bursts were captured by Wide Field Cameras <cit.> on BeppoSAX or Proportional Counter Array <cit.> on<cit.>. 
All these instruments were sensitive to photon energy above 2–3 keV. Around the maximum radius of the superexpansion, the data were usually unavailable due to the emitted blackbody temperature lower than 0.5 keV and most of photons were undetectable.Thanks to the sensitivity ofdown to 0.2 keV and relativity low interstellar absorption towards 4U 1820–30, we have the chance to study the photospheric radius evolution across the whole burst. From the time-resolved burst spectroscopy, we found one superexpansion burst (#8),r_ ph≈1500 km, which is also the first superexpansion burst observed by , 13 strong photospheric expansion bursts, r_ ph≈100-300 km,and one moderate PRE burst (#2), r_ ph≈50 km. Moreover, we measured the blackbody temperature of 0.2 keV at the largest expansion radius for the superexpansion burst, which is the lowest in all X-ray bursts. This is also the first direct measurement of the full superexpansion stage for the radius above 10^3 km. Such a large difference of photospheric expansion are not often appear in the same source. What factors make the difference of three kinds of expansion? <cit.> proposed that the accretion geometry and the accretion rate can strongly influence the burst photospheric expansion. If the accretion rate is reduced or the geometry changes, it is possible to reduce the pressure on the photosphere. The local accretion rate per unit area onto the NS can be calculated from its pre-burst emission <cit.>, m =L_per(1+z)/ 4π R_ NS^2(GM_ NS/R_ NS)≈ 6.7× 10^3(F_per/10^-9 ergs   cm^-2 s^-1)(d/10  kpc)^2(M_ NS/1.4M_⊙)^-1×(1+z/1.31)(R_NS/10  km)^-1g cm^-2 s^-1,where the F_ per is the persistent flux listed in Table <ref>. Using equation <ref> and the observed persistent flux, we obtained m, which are listed in Table <ref>. We assume the local Eddington accretion rate m_ Edd=(8.8×10^4)[1.7/(X+1)] g cm^-2 s^-1. From panels ( a) and ( b) of Fig. <ref>, we also note that the maximum expansion radii are larger atharder color (or lower local accretion rate), i.e., the superexpansion burst occurred in the more extreme low state.The superexpansion burst #8 and the largest strong photospheric expansion burst #7 had the lowest local accretion rate, ∼8% m_ Edd, which is 3.1 times smaller than burst #2withr_ ph∼55 km. The accretion rates of other strong photospheric expansion bursts are at least two times larger than the superexpansion burst. Changes in the geometry of the inner accretion flow can lead to changes in spectral shape, as measured for instance via the color in Fig. <ref>. The ram pressure of the accretion disk may be too low to counteract the expansion when the accretion flow geometry near the NS surface is spherical and the accretion rate is low <cit.>.The luminosity at the base of the envelope, L_b^∞, can strongly affect the expanded photospheric radius <cit.>. If L_b^∞ is smaller than the Eddington limit, L_ Edd, it will produce static expanded atmosphere. On the other hand, a strong PRE burst, L_b^∞≳ 1.05L_ Edd, could generate a super-Eddington wind, which have caused the matter outflow from the NS surface. Higher L_b^∞ corresponds to larger expanded radius.<cit.> proposed a relativistic model of the wind driven by X-ray bursts, found that all the super-Eddington energy flux is used to gently blow out matter and the photospheric radius of an outflowing envelope is always at least 100 km. 
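The local accretion rate relation above is straightforward to evaluate numerically. The following hedged Python sketch implements it in cgs units for the fiducial M_NS = 1.4 M_⊙, R_NS = 10 km, and d = 8.4 kpc, computing 1+z = (1 − 2GM/Rc²)^{-1/2} ≈ 1.31 rather than hard-coding it; this is our own helper, not code from the paper.

```python
def local_accretion_rate(F_per, d_kpc=8.4, M=1.4, R_km=10.0):
    """mdot in g cm^-2 s^-1 from the persistent flux F_per (erg s^-1 cm^-2)."""
    G, c, Msun = 6.674e-8, 2.998e10, 1.989e33          # cgs constants
    R = R_km * 1.0e5
    zp1 = (1.0 - 2.0 * G * M * Msun / (R * c * c)) ** -0.5   # 1 + z
    return (6.7e3 * (F_per / 1e-9) * (d_kpc / 10.0) ** 2
            / (M / 1.4) * (zp1 / 1.31) / (R_km / 10.0))
```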
<cit.> has presented a hydrodynamic simulation of spherically symmetric super-Eddington winds, showing that the photosphere reaches more than 100 km at first second, independent of the burst duration and ignition depth. This is consistent with our results that the photospheric radius reached to its maximumwithin 1 s after the burst onset and there is no obvious correlation between the photospheric expansion radius and the ignition depth (seeTable <ref>). The duration of the superexpansion phase always lasts a few seconds, independent of the burst duration <cit.>. It was suggested that it corresponds to a transient stage in the wind’s development. A shell of initially opaque material is ejected to large radii by the burst in this stage, and then it becomes optically thin and the observer suddenly sees the underlying photosphere of the already formed steady-state wind. Recently, <cit.> calculated steady-state models of radiation-driven super-Eddington winds and found the transition from static expanded envelopes (r_ ph≤ 50-70  km for L_b^∞≲ L_ Edd) to radiation-driven stellar wind (r_ ph≈ 100 - 1000  km for L_b^∞≳ 1.05L_ Edd). Based on <cit.>, in our case,burst #2 corresponds to static envelope, and the other bursts have formed steady-state wind.The unstable burning of pure helium releases nuclear energy of 10^18  erg g^-1, which set the upper limit of the mass-loss rate of wind to be 2×10^18  g s^-1, corresponding to L_b^∞≈ 2.5 L_ Edd. <cit.> found that the maximum photospheric radius that the wind can reach is around 10^3 km for such luminosity, which is consistent with the superexpansion burst observed in 4U 1820–30. From the time-resolved spectroscopy, we found that strong photospheric expansion phase are similar to the superexpansion phase (the t_se lasts ∼ 1 s), which are the stage in the wind’s development. Due to the difference of the impact pressure of accretion flow, the observed expansion radius of photosphere is different. Then only one expansion phase can be seen in burst #2 is a reasonable result, because its expansion radius is less than the lowest outflow radius. The photospherical shell can not escape the gravitational binding, and it appears as an expansion phase in observation. we noted that the strong photospheric expansion are similar to the superexpansion, a strong expansion phase followed by a moderate expansion phase. <cit.> proposed that the ejected material shell may be truncated by heavy elements in the wind, and has been observed at a smaller photospheric radius. The heavy elements can produce absorption edges around 6–10 keV, which are difficult to be observed bydue to its sensitivity up to ∼ 10 keV.§.§ The dips of the burst light curves For the light curves with the energies in excess of 3 keV, the dips appeared during the peak of all bursts except for burst #2, see Fig. <ref>. We also found there is a dip in burst #8 in the 0.5–10 keV light curve after the onset ∼ 0.5 second and it is a plateau for bursts #6, #7 and #15 at the same time. Most of dips can be understood as being due to the spectral soften. ThePRE bursts were be so extreme that the peak of the blackbody spectrum moves to lower temperatures, in tandem with the expansion, and out of the X-ray band introducing the appearance of dip in the burst profiles <cit.>. Many of the reported detection of superexpansion PRE bursts to date have been made with data from instruments with hard bandpass such as /PCA, which has a minimum sensitive at 3 keV. 
In previous observations, there was a lack of data during the most extreme phase.is ideal for observing the soft X-ray expansion phases of PRE bursts, which bandpass extends down to 0.2 keV. Fortunately, the count rates were very high in the bandpass withless than 3 keV. We obtained the minimum temperatures of 0.2–0.7 keV by fitted the time-resolved spectroscopy, see Table <ref>. The bursts with higher temperatures have shallower dips in the light curves. The minimum temperature of burst # 8 is 0.20±0.01 keV obtained by the double blackbody model, which is the lowest one observed so far from our knowledge. For burst #2, the blackbody temperature was 0.93 keV at the maximum photospheric radius, which was the highest among all 15 bursts. Therefore, the dip was not appeared distinctly. In other bursts, the blackbody temperatures were in the range of 0.35–0.69 keV.The spectral soften cannot fully account for the count rates declined to below the pre-burst level in bursts #7 and #8.A plausible physical interpretation is that the accretion radiation is cut short by the expanding shell or the shell engulf the accretion flow so that it was temporarily blocked from our view <cit.>. The strong PRE bursts may cause the disc to distort itself through a warping instability, which obscure the inner disk <cit.>.In Fig. <ref>, we find that the minimum count rate during the dip is positively correlated with m.For bursts #7 and #8, m∼8% m_ Eddare the lowest accretion rates,indicating that the accretion flow are more susceptible disturbed by the burst.Alternatively, the super-Eddington flux can disrupt the inner disk and stop the accretion temporarily. The disruption of accretion disk by the ejected shell have been detected in some superexpansion bursts <cit.>.also observed the light curvedrop to below the persistent emission in an intermediate-duration burst tail of IGR J17062–6143 <cit.>.This dip was interpreted as due to the accretion disk and corona being destroyed by the energetic X-ray burst.<cit.> used the order-of-magnitude calculation to show that the shell may have enough momentum to sweep the inner 100 km of the accretion disk, when the column thickness of the shell at launch around 10^7 g  cm^-2. §.§ Comparison the recurrence time and E_ b-F_ per relation with theoretical modelsThe observed recurrence time are listed in Table <ref>, which are only upper limits due to data gaps in between two bursts. The observed recurrence time between bursts #1 and #2, Δ T_ rec=2.2 hr, and bursts #3 and #4, Δ T_ rec=2.8 hr, are comparable with previous works. Therefore, we consider them as two pairs of successive bursts even though data gaps exists. We compare the observed and predicted recurrence time.The burst ignition depth is estimated via y_ ign=4π E_bd^2(1+z)/4π R_ NS^2Q_nuc, where the nuclear energy generated for pure helium is Q_nuc≈ 1.31  MeV nucleon^-1 for X=0<cit.>.Once the ignition depth is known, we calculate the recurrence time between bursts by using the equation Δ t_rec=(y_ign / m)(1+z). The predicted recurrence time are 1.3±0.5 and 1.5±0.5 hr, which are smaller than the observed recurrence time of 2.2 and 2.8 hr, respectively.If the factor, ξ_p^-1∼1.5, is considered to account for the anisotropy of X-ray emission, the predicted recurrence time can match the observed values<cit.>. We also note that the recurrence time between bursts #4 and #5 is 6.3 hr. One or two bursts should be missed by comparing with the predicted recurrence time. 
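The ignition-depth and recurrence-time estimates above translate into a few lines of Python. This sketch reuses `local_accretion_rate` from the earlier snippet, takes Q_nuc = 1.31 MeV per nucleon for X = 0, and exposes the anisotropy factor ξ_p as an optional parameter (the text's ξ_p^-1 ∼ 1.5 corresponds to ξ_p ≈ 0.67 here); again, this is an illustrative reimplementation, not the authors' code.

```python
def predicted_recurrence_hr(E_b, F_per, d_kpc=8.4, M=1.4, R_km=10.0,
                            Q_nuc_MeV=1.31, xi_p=1.0):
    """E_b: burst fluence (erg cm^-2). Returns the recurrence time in hours."""
    G, c, Msun, kpc, m_u = 6.674e-8, 2.998e10, 1.989e33, 3.086e21, 1.661e-24
    R, d = R_km * 1.0e5, d_kpc * kpc
    zp1 = (1.0 - 2.0 * G * M * Msun / (R * c * c)) ** -0.5    # 1 + z
    Q = Q_nuc_MeV * 1.602e-6 / m_u                 # erg released per gram
    y_ign = E_b * d ** 2 * zp1 / (R ** 2 * Q)      # ignition column, g cm^-2
    mdot = local_accretion_rate(F_per, d_kpc, M, R_km)
    return (y_ign / mdot) * zp1 / xi_p / 3600.0
```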
The abundances of the burning matter can be inferred from the relation between the burst fluence and the persistent flux. <cit.> proposed that the burst fluence is independent of the persistent flux for pure helium bursts. For a small fraction of hydrogen and heavy elements (X=0.1, Z=0.01) in the burning material, the bursts release more energy at lower persistent flux. <cit.> showed that the observed bursts favor the case of pure helium burning. For the NICER bursts, we renormalized the fluence and persistent flux to E_ b=3.5×10^-7  erg cm^-2 and F_ per=4.3×10^-9  erg cm^-2 s^-1. In Fig. <ref>, most of the NICER bursts follow the prediction for the burning of helium with a mixture of hydrogen and heavy elements. However, the dependence of the burst fluence on the persistent flux in bursts #7 and #8 is biased from the prediction, suggesting a lower fraction of hydrogen and heavy elements in the burning material. § CONCLUSIONS We detected 15 type I X-ray bursts in 4U 1820–30 in NICER observations from 2017 to the present. The persistent spectra are well fitted by the model TBabs×(bbodyrad + compTT). We adopted the f_ a-model, the double blackbody model, and the reflection model relxillNS to fit the time-resolved burst spectra. However, for bursts #7 and #8, the count rates declined to the pre-burst level in 3–10 keV, which contradicts the f_ a-model. The reflection model provides physically motivated and self-consistent results. From the evolution of the accretion disk density during X-ray bursts, we found direct evidence that the accretion disk was distorted by the burst radiation with a 0.5 s delay. The time-resolved burst spectra show that all bursts belong to the PRE bursts, including thirteen strong photospheric expansion bursts with r_ ph≈100-340 km and one superexpansion burst with r_ ph>10^3 km <cit.>. We appreciate the referee for the constructive and valuable comments which improved our manuscript. This work was supported by the Major Science and Technology Program of Xinjiang Uygur Autonomous Region (No. 2022A03013-3). Z.S.L. and Y.Y.P. were supported by the National Natural Science Foundation of China (12103042, U1938107, 12273030, 12173103, 12122302). S.Z. is supported by the National Key R&D Program of China (2021YFA0718500). This work made use of data from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center.
http://arxiv.org/abs/2312.16420v1
{ "authors": [ "Wenhui Yu", "Zhaosheng Li", "Yongqi Lu", "Yuanyue Pan", "Xuejuan Yang", "Yupeng Chen", "Shu Zhang", "Maurizio Falanga" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231227055327", "title": "NICER views moderate, strong, and extreme photospheric expansion bursts from the ultracompact X-ray binary 4U 1820$-$30" }
Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism. Antonio Pérez-Garrido January 14, 2024 =========================================================================================================== A key goal of current mechanistic interpretability research in NLP is to find linear features (also called feature vectors) for transformers: directions in activation space corresponding to concepts that are used by a given model in its computation. Present state-of-the-art methods for finding linear features require large amounts of labelled data – both laborious to acquire and computationally expensive to utilize. In this work, we introduce a novel method, called observable propagation, for finding linear features used by transformer language models in computing a given task – using almost no data. Our paradigm centers on the concept of observables, linear functionals corresponding to given tasks. We then introduce a mathematical theory for the analysis of feature vectors: we provide theoretical motivation for why LayerNorm nonlinearities do not affect the direction of feature vectors; we also introduce a similarity metric between feature vectors called the coupling coefficient, which estimates the degree to which one feature's output correlates with another's. We use observable propagation to perform extensive qualitative investigations into several tasks, including gendered occupational bias, political party prediction, and programming language detection. Our results suggest that observable propagation surpasses traditional approaches for finding feature vectors in the low-data regime, and that it can be used to better understand the mechanisms responsible for bias in large language models. Code for experiments can be found at github.com/jacobdunefsky/ObservablePropagationgithub.com/jacobdunefsky/ObservablePropagation. § INTRODUCTION When a large language model predicts that the next token in a sentence is far more likely to be him than her, what is causing it to make this decision? The field of mechanistic interpretability aims to answer such questions by investigating how to decompose the computation carried out by a model into human-understandable pieces. This helps us improve output prediction on real-world data distributions, identify and understand discrepancies between intended and actual behavior, align models with our objectives, and assess their trustworthiness in high-risk applications <cit.>. One important notion in mechanistic interpretability is that of features. A feature can be thought of as a simple function of the activations at a particular layer of the model, the value of which is important for the model's computation at that layer. For instance, in the textual domain, features used by a language model at some layer might reflect whether a token is an adverb, whether the language of the token is French, or other such characteristics. Possibly the most sought-after type of feature is a linear feature, or feature vector: a fixed vector in embedding space that the model utilizes by determining how much the input embedding points in the direction of that vector. Linear features are in some sense the holy grails of features: they are both easy for humans to interpret and amenable to mathematical analysis <cit.>.
Contributions Our primary contribution is a method, which we call observable propagation, for both finding feature vectors in large language models corresponding to given tasks, and analyzing these features in order to understand how they affect other tasks. Unlike non-feature-based interpretability methods such as saliency methods <cit.> or circuit discovery methods <cit.>, observable propagation reveals the specific information from the model's internal activations that is responsible for its output, rather than merely the tokens or model components that are relevant. And unlike methods for finding feature vectors such as probing <cit.> or sparse autoencoders <cit.>, observable propagation can find these feature vectors without having to store large datasets of embeddings, perform many expensive forward passes, or utilize vast quantities of labeled data. In addition, we present the following contributions:* We develop a detailed theoretical analysis of feature vectors. In Theorem <ref>, we provide theoretical motivation explaining why LayerNorm sublayers do not affect the direction of feature vectors, making progress towards answering the question, raised in mechanistic interpretability <cit.>, of the extent to which LayerNorms are used in computation in transformers. In Theorem <ref>, we introduce and motivate a measurement of feature vector similarity called the coupling coefficient, which can be used to determine the extent to which the model's output on one task is coupled with the model's output on another task.* In order to determine the effectiveness of observable propagation in understanding the causes of bias in large language models, we investigate gendered pronoun prediction (<ref>) and occupational gender bias (<ref>). By using observable propagation, we show that the model uses the same features to predict gendered pronouns given a name as it does to predict an occupation given a name; this is supported by further experiments on both artificial and natural datasets.* We perform a quantitative comparison between observable propagation and probing methods for finding feature vectors on diverse tasks (subject pronoun prediction, programming language detection, political party prediction). We find that observable propagation achieves performance superior to these traditional data-heavy approaches in low-data regimes (<ref>).§.§ Background and related work In interpretability for NLP applications, there are a number of saliency-based methods that attempt to determine which tokens in the input are relevant to the model's prediction <cit.>. Additionally, recent circuit-based mechanistic interpretability research has involved determining which components of a model are most relevant to the model's computation on a given task <cit.>. Our work goes beyond these two approaches by considering not just relevant tokens, and not just relevant model components, but relevant feature vectors, which can be analyzed and compared to understand all the intermediate information used by models in their computation. A separate line of research aims to find feature vectors by performing supervised training of probes to find directions in embedding space that correspond to labels <cit.>, or uses unsupervised autoencoders on model embeddings to find feature vectors <cit.>. Observable propagation does not rely on any training. A number of recent studies in interpretability involve finding feature vectors by decomposing transformer weight matrices into a set of basis vectors and projecting these vectors into token space <cit.>.
ObProp goes beyond this by taking nonlinearities into account, by finding precise feature vectors for tasks, and by formulating the concept of observables, which is more general than the tasks considered in these prior works.

§ OBSERVABLE PROPAGATION: FROM TASKS TO FEATURE VECTORS

In this section, we present our method, which we call observable propagation (ObProp), for finding feature vectors directly corresponding to a given task. We begin by introducing the concept of observables, which is central to our paradigm. We then explain observable propagation for simple cases, and then build up to a general understanding.

§.§ Our paradigm: observables

Often, in mechanistic interpretability, we care about interpreting the model's computation on a specific task. In particular, the model's behavior on a task can frequently be expressed as the difference between the logits of two tokens. For instance, <cit.> attempt to interpret the model's understanding of gendered pronouns, and as such, measure the difference between the logits for the tokens "she" and "he". This has been identified as a general pattern of taking logit differences that appears in mechanistic interpretability work <cit.>.

The first insight that we introduce is that each of these logit differences corresponds to a linear functional on the logits. That is, if the logits are represented by the vector y, then each logit difference can be represented by n^T y for some vector n. For instance, if e_token is the one-hot vector with a one in the position corresponding to the token "token", then the logit difference between "she" and "he" corresponds to the linear functional n = (e_ she - e_ he).

We thus define an observable to be a linear functional on the logits of a language model. In doing so, we no longer consider logit differences as merely a part of the process of performing an interpretability experiment; rather, we consider the broader class of linear functionals as being objects amenable to study in their own right. As we will see, concretizing observables like this will enable us to find sets of feature vectors corresponding to different observables.

§.§ Observable propagation for attention sublayers

First, let us consider a linear model f(x) = Wx. Given an observable n, we can compute the measurement associated with n as n^T f(x), which is just n^T Wx. But now, notice that n^T Wx = (W^T n)^T x. In other words, W^T n is a feature vector in the domain, such that the dot product of the input x with the feature vector W^T n directly gives the output measurement n^T f(x).

Next, let us consider how to extend this idea to address attention sublayers in transformers. An attention sublayer combines information across tokens. It can be decomposed into two parts: the part that determines from which tokens information is taken (query-key interaction), and the part that determines what information is taken from each token to form the output (output-value). <cit.> refer to the former part as the QK circuit of attention, and the latter part as the OV circuit.
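As a minimal sketch of this idea – assuming a GPT-2-style BPE tokenizer in which " she" and " he" are single tokens, and using a random matrix as a stand-in for a linear readout (the helper one_hot is our own illustrative name, not part of the method's reference implementation):

```python
import numpy as np
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

def one_hot(token_str: str, vocab_size: int) -> np.ndarray:
    ids = tok.encode(token_str)
    assert len(ids) == 1, f"{token_str!r} is not a single token"
    e = np.zeros(vocab_size)
    e[ids[0]] = 1.0
    return e

V = tok.vocab_size
n = one_hot(" she", V) - one_hot(" he", V)   # the observable n = e_she - e_he

# For a linear model f(x) = W x, the measurement n^T f(x) equals (W^T n)^T x,
# so W^T n is the feature vector pulled back through W.
rng = np.random.default_rng(0)
d_model = 16
W = rng.normal(size=(V, d_model))            # toy stand-in for a linear readout
feature = W.T @ n
x = rng.normal(size=d_model)
assert np.isclose(n @ (W @ x), feature @ x)
```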
Following their formulation, each attention layer can be written as

x_j^l+1 = x_j^l + ∑_h=1^H ∑_i=1^S score_l,h(x_i^l, x_j^l) W^OV_l,h x_i^l

where x_j^l is the residual stream for token j ∈ {1,...,S} at layer l, score_l,h(x_i^l, x_j^l) is the attention score at layer l associated with attention head h ∈ {1,...,H} for tokens x_i^l and x_j^l, and W^OV_l,h is the combined output-value weight matrix for attention head h at layer l. In each term in this sum, the score_l,h(x_i^l, x_j^l) factor corresponds to the QK circuit, and the W^OV_l,h x_i^l factor corresponds to the OV circuit. Note that the primary nonlinearity in attention sublayers comes from the computation of the attention scores and their multiplication with the W^OV_l,h x_i^l terms. As such, as <cit.> note, if we consider attention scores to be fixed constants, then the contribution of an attention layer to the residual stream is just a weighted sum of linear terms for each token and each attention head. This means that if we restrict our analysis to the OV circuit, then we can find feature vectors using the method described for linear models.

While this restricts the scope of the analysis, analyzing OV circuits in isolation is still very valuable: doing so tells us what sort of information, at each stage of the model's computation, corresponds to our observable. From this point of view, if we have an attention head h at layer l, then the direct effect of that attention head on the output logits of the model is proportional to W_U W^OV_l,h x_i^l for token i, where W_U is the model unembedding matrix (which projects the model's final activations into logit space). We thus have that the feature vector corresponding to the OV circuit for this attention head is given by (W_U W^OV_l,h)^T n.

This feature vector corresponds to the direct contribution that the attention head makes to the output. But an earlier-layer attention head's output can then be used as the input to a later-layer attention head. For attention heads h, h' in layers l, l' respectively, with l < l', consider the computational path that starts at token i in layer l, passes through attention head h, and whose output is then used as the input to head h' at layer l'. By the same reasoning, the feature vector for this path is (W_U W^OV_l',h' W^OV_l,h)^T n. Note that this process can be repeated ad infinitum.
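The composition of OV circuits just described can be written as a small helper. The following sketch assumes the matrices W_U and W^OV have already been extracted from the model; shapes and names are illustrative:

```python
import torch

def ov_feature_vector(n, W_U, W_OVs):
    """Pull an observable n back through a path of OV circuits.

    n     : (vocab,) observable on the logits
    W_U   : (vocab, d_model) unembedding matrix
    W_OVs : list of (d_model, d_model) combined OV matrices for the heads on
            the path, ordered from the last (latest-layer) head to the first,
            so the result is (W_U W^OV_l',h' ... W^OV_l,h)^T n.
    """
    y = W_U.T @ n                # feature vector for the direct contribution
    for W_OV in W_OVs:
        y = W_OV.T @ y           # compose one more hop of the path
    return y

# Toy usage with random stand-in weights:
torch.manual_seed(0)
vocab, d_model = 100, 16
W_U = torch.randn(vocab, d_model)
W_OV_late, W_OV_early = torch.randn(d_model, d_model), torch.randn(d_model, d_model)
n = torch.zeros(vocab); n[3], n[7] = 1.0, -1.0     # a logit-difference observable

y_head = ov_feature_vector(n, W_U, [W_OV_late])                 # single head
y_path = ov_feature_vector(n, W_U, [W_OV_late, W_OV_early])     # two-head path
```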
§.§ General form: addressing MLPs and LayerNorms

Along with attention sublayers, transformers also contain nonlinear MLP sublayers, as well as LayerNorm nonlinearities before each sublayer. One main challenge in interpretability for large models has been the difficulty of understanding the MLP sublayers, due to the polysemantic nature of their neurons <cit.>. One prior approach to address this is modifying the model architecture to increase the interpretability of MLP neurons <cit.>. Instead of architecture modification, we address these nonlinearities by approximating them as linear functions using their first-order Taylor approximations. This approach is reminiscent of that presented by <cit.>, who use linearizations of language models to speed up the process of activation patching <cit.>; we go beyond this by recognizing that the gradients used in these linearizations act as feature vectors that can be independently studied and interpreted, rather than merely making activation patching more efficient. Taking this into account, the general form of observable propagation, including first-order approximations of nonlinearities, can be implemented as follows. Consider a computational path 𝒫 in the model through sublayers l_1 < l_2 < … < l_k. Then for a given observable n, the feature vector corresponding to sublayer l in 𝒫 can be computed according to Algorithm <ref>. Note that before every sublayer, there is a nonlinear LayerNorm operation. For greatest accuracy, one can find the feature vector corresponding to this LayerNorm by taking its gradient as described above. But as shown in Theorem <ref>, if one only cares about the directions of the feature vectors and not their magnitudes, then the LayerNorms can be ignored entirely.

§.§ The effect of LayerNorms on feature vectors

LayerNorms are ubiquitous in transformers, appearing before every MLP and attention sublayer, and before the final unembedding matrix. Therefore, it is worth investigating how they affect feature vectors; if LayerNorms were highly nonlinear, this would be problematic. The core LayerNorm function can be defined as LayerNorm(x) = Px/‖Px‖, where P is the orthogonal projection onto the hyperplane orthogonal to 1⃗, the vector of all ones <cit.>. <cit.> provide intuition for why we should expect that in high-dimensional spaces, LayerNorm is approximately linear. But it can be shown that the gradient of LayerNorm(x) is inversely proportional to ‖Px‖ (see Appendix <ref>), so we cannot consider LayerNorm gradients to be constant for inputs of different norms. However, empirically, we found that LayerNorms had almost no impact on the direction of feature vectors (see Appendix <ref>). The following statement, which we prove in Appendix <ref>, provides theoretical underpinning for this behavior:

Define f(x; n) = n · LayerNorm(x). Define θ(x; n) = arccos(n · ∇_x f(x; n) / (‖n‖ ‖∇_x f(x; n)‖)) – that is, θ(x; n) is the angle between n and ∇_x f(x; n). Then if n ∼ 𝒩(0, I) in ℝ^d, and d ≥ 8, then

𝔼[θ(x; n)] < 2 arccos(√(1 - 1/(d-1)))

Note that after every LayerNorm as defined above, the output is multiplied by a fixed scalar constant equal to √(d) (where d is the embedding dimension), multiplied by a learned diagonal matrix, and added to a learned vector. Thus, the actual operation implemented is √(d) W LayerNorm(x) + b, where W is the learned matrix and b is the learned vector. Now, b does not affect the gradient. Additionally, empirically, most of the entries in W tend to be very close to one another (see Appendix <ref>), which suggests that we can approximate W as a scalar, meaning that W primarily scales the gradient, rather than changing its direction. Therefore, if we want to analyze the directions of feature vectors rather than their magnitudes, then we can do so without worrying about LayerNorms.

§ DATA-FREE ANALYSIS OF FEATURE VECTORS

Once we have used ObProp to obtain a given set of feature vectors, we can then perform some preliminary analyses on them, using solely the vectors themselves. This can give us insights into the behavior of the model without having to run forward passes of the model on data.

Feature vector norms. One technique that can be used to assess the relative importance of model components is investigating the norms of the feature vectors associated with those components. To see why, recall that if y is a feature vector associated with observable n for a model component that implements a function f, then for an input x, we have n · f(x) = y · x. Now, if we have no prior knowledge regarding the distribution of inputs to this model component, we can expect y · x to be proportional to ‖y‖, as in the ranking sketch below.
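A quick way to screen attention heads, then, is to rank them by the norm of their single-head OV feature vectors. The following sketch uses random stand-in weights; the head-naming scheme anticipates the l::h notation used later:

```python
import torch

def rank_heads_by_feature_norm(n, W_U, W_OV_by_head):
    """Rank heads by ||(W_U W^OV)^T n||; larger norms suggest larger possible
    contributions to the observable n, absent knowledge of the inputs."""
    u = W_U.T @ n
    norms = {name: torch.linalg.norm(W_OV.T @ u).item()
             for name, W_OV in W_OV_by_head.items()}
    return sorted(norms.items(), key=lambda kv: kv[1], reverse=True)

torch.manual_seed(0)
vocab, d_model = 100, 16
W_U = torch.randn(vocab, d_model)
heads = {f"{l}::{h}": torch.randn(d_model, d_model)
         for l in range(2) for h in range(4)}
n = torch.zeros(vocab); n[3], n[7] = 1.0, -1.0
for name, norm in rank_heads_by_feature_norm(n, W_U, heads)[:3]:
    print(name, f"{norm:.2f}")
```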
Components with larger feature vectors should thus have larger outputs; this is borne out in experiments (see <ref>). Note that when calculating the norm of a feature vector for a computational path starting with a LayerNorm, one must multiply the norm by an estimated norm of the LayerNorm's input (see Appendix <ref> for an explanation).

Coupling coefficients. An important question that we might want to ask about observables is the following: to what extent should we expect inputs that yield high outputs under observable n_1 to also yield high outputs for another observable n_2? If the outputs under n_1 and n_2 are highly correlated, then this suggests that the model uses the same underlying features for both observables. Having motivated this problem, let us translate it into the language of feature vectors. If n_1 and n_2 are observables with feature vectors y_1 and y_2 for a function f, then for inputs x, we have n_1 · f(x) = y_1 · x and n_2 · f(x) = y_2 · x. Now, if we constrain our input x to have norm s, and constrain x · y_1 = k, then what is the expected value of x · y_2? And what are the maximum/minimum values of x · y_2? We present the following theorem to answer both questions:

Let y_1, y_2 ∈ ℝ^d. Let x be uniformly distributed on the hypersphere defined by the constraints ‖x‖ = s and x · y_1 = k. Then we have

𝔼[x · y_2] = k (y_1 · y_2)/‖y_1‖^2

and the maximum and minimum values of x · y_2 are given by

(‖y_2‖/‖y_1‖)(k cos(θ) ± sin(θ) √(s^2 ‖y_1‖^2 - k^2))

where θ is the angle between y_1 and y_2.

We denote the value (y_1 · y_2)/‖y_1‖^2 by C(y_1, y_2), and call it the coupling coefficient from y_1 to y_2. Intuitively, C(y_1, y_2) measures the expected dot product between a vector and y_2, assuming that that vector has a dot product of 1 with y_1. Additionally, note that Theorem <ref> also implies that the coupling coefficient becomes a more accurate estimator as the cosine similarity between y_1 and y_2 increases. This is borne out experimentally; see <ref>.

§ EXPERIMENTS

Armed with our observable propagation toolkit for obtaining and analyzing feature vectors, we now turn our attention to the problem of gender bias in LLMs, in order to determine the extent to which these tools can be used to diagnose the causes of unwanted behavior.

§.§ Gendered pronouns prediction

We first consider the related question of understanding how a large language model predicts gendered pronouns. Specifically, given a sentence prefix including a traditionally gendered name (for example, Mike is often associated with males and Jane is often associated with females), how does the model predict what kind of pronoun should come after the sentence prefix? We will later see that understanding the mechanisms driving the model's behavior on this benign task will yield insights for understanding gender-biased behavior of the model. Additionally, this investigation also provides an opportunity to test the ability of ObProp to accurately predict model behavior.

The gendered pronoun prediction problem was previously considered by <cit.>, where the authors used the Automated Circuit Discovery tool presented by <cit.> to investigate the flow of information between different components of GPT-2-small <cit.> in predicting subject pronouns (i.e. "he", "she", etc.). We extend the problem setting in various ways. We investigate both the subject pronoun case (in which the model is to predict the token "she" versus "he") and the object pronoun case (in which the model is to predict "her" versus "him").
Additionally, we seek to understand the underlying features responsible for this task, rather than just the model components involved, so that we can compare these features with the features that the model uses in producing gender-biased output.

Problem setting. We consider two observables, corresponding to the subject pronoun prediction task and the object pronoun prediction task. The observable for the subject pronoun task, which we denote n_subj, is given by e_ she - e_ he, where e_token is the one-hot vector with a one in the position corresponding to the token "token". This corresponds to the logit difference between the tokens "she" and "he", and indicates how much more likely the model predicts the next token to be "she" versus "he". Similarly, the observable for the object pronoun task, which we denote n_obj, is given by e_ her - e_ him.

We investigate the GPT-Neo-1.3B model <cit.>, which has approximately 1.3B parameters, 24 layers, 16 attention heads, an embedding dimension of 2048, and an MLP hidden dimension of 8192. Note that ObProp is able to work with models that are significantly larger than those previously explored, such as GPT-2-small (117M parameters) <cit.>, which has been the focus of recent interpretability work by <cit.>, inter alia.

Additionally, a note on notation: the attention head with index h at layer l will be written as l::h. For instance, 17::14 refers to attention head 14 at layer 17. Furthermore, the MLP at layer L will be written as mlpL. For instance, mlp1 refers to the MLP at layer 1.

Feature vector norms for single attention heads. We begin by analyzing the norms of the feature vectors corresponding to n_subj and n_obj for each attention head in the model. We then used path patching <cit.> to measure the mean degree to which each attention head contributes to the model's output on a dataset of male/female prompt pairs. If our method is effective, then we would expect to see that the heads with the greatest feature norms are those identified by path patching as most important to model behavior. The results are given in Table <ref>. We see that three of the four attention heads with the highest feature norms – that is, 17::14, 15::13, and 13::11 – also have very high attributions for both the subject and object pronoun cases. (Interestingly, head 18::11 does not have a high attribution in either case despite having a large feature norm; this may be due to effects involving the model's QK circuit.) This indicates that observable propagation was largely successful in predicting the most important attention heads, despite only using one forward pass per observable (to estimate LayerNorm gradients).

Cosine similarities and coupling coefficients. Next, we investigated the cosine similarities between feature vectors for n_subj and n_obj. We found that the four heads with the highest cosine similarities between their n_subj feature vectors and their n_obj feature vectors are 17::14, 18::11, 15::13, and 13::11, with cosine similarities of 0.9882, 0.9831, 0.9816, and 0.9352, respectively. The high cosine similarities of these feature vectors indicate that the model uses the same underlying features for both the task of predicting subject pronoun genders and the task of predicting object pronoun genders.

We also looked at the feature vectors for the computational paths 6::6→9::1→13::11 and 6::6→13::11 for these observables, because performing path patching on a pair of prompts suggested that these computational paths were relevant. The feature vectors for these paths had a cosine similarity of 0.9521.

We then computed the coupling coefficients between the n_subj and n_obj feature vectors for heads 17::14, 15::13, and 13::11.
This is because these heads were present among the heads with the highest cosine similarities, highest feature norms, and highest patching attributions for both the n_subj and n_obj cases. After this, we tested the extent to which the coupling coefficients accurately predicted the constant of proportionality between the dot products of different feature vectors with their inputs. We ran the model on approximately 1M tokens taken from The Pile dataset <cit.> and recorded the dot product of each token's embedding with these feature vectors. We then computed the least-squares best-fit line predicting the dot products for one observable's feature vector from those of the other, and compared the slope of the line to the coupling coefficients. The results are given in Table <ref>. We find that the coupling coefficients are accurate estimators of the empirical constants of proportionality, and that, in accordance with Theorem <ref>, the dot products for feature vectors with greater cosine similarity exhibited greater correlation.

§.§ Occupational gender bias

Now that we have understood some of the features relevant to predicting gendered pronouns, we more directly consider the setting of occupational gender bias in language models, a widely-investigated problem <cit.>. For a prompt that asks the model to complete a sentence about a named person, an LM which hasn't been aligned using e.g. RLHF <cit.> is more likely to predict that the next token is "programmer" than "nurse" if the name in the prompt is a male name, and vice versa for a female name <cit.>. We applied observable propagation to the problem in order to go beyond prior work and understand the features responsible for this behavior. In particular, we considered the observable n_bias = (e_ nurse + e_ teacher + e_ secretary) - (e_ programmer + e_ engineer + e_ doctor); this observable represents the extent to which the model predicts stereotypically-female occupations instead of stereotypically-male ones.

The same features are used to predict gendered pronouns and occupations. We ran path patching on a single pair of prompts in order to determine computational paths relevant to n_bias. The results were computational paths beginning with mlp1→6::6→9::1→... and 6::6→9::1→..., which began on the token in the prompt associated with the gendered name. Even though there were many relevant computational paths beginning with these prefixes, and even though these computational paths passed through multiple later-layer MLPs, the feature vectors for these different paths nevertheless had high cosine similarity with one another.

More surprising is that the n_bias feature vector for 6::6→9::1→... had a cosine similarity of 0.966 with the gendered-pronoun feature vector for 6::6→9::1→13::11. Similarly, the n_bias feature vector for mlp1→6::6→9::1→... had a cosine similarity of 0.977 with the gendered-pronoun feature vector for mlp1→6::6→9::1→13::11. This indicates that the model uses the same features to identify both gendered pronouns and likely occupations, given a traditionally-gendered name.

To determine the extent to which these feature vectors reflected model behavior, we ran the model on an artificial dataset of 600 prompts involving gendered names (see Appendix <ref>), recorded the dot product of the model's activations on the name token with the feature vectors, and recorded the model's output with respect to the observables. The results can be found in Figure <ref>.
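The kind of comparison used in these experiments – fitting a least-squares line between two sets of dot products and comparing its slope with the coupling coefficient C(y_1, y_2) – can be sketched as follows; the data here is synthetic and purely illustrative:

```python
import numpy as np

def coupling_coefficient(y1, y2):
    # C(y1, y2) = (y1 . y2) / ||y1||^2, per Theorem 2
    return float(y1 @ y2) / float(y1 @ y1)

rng = np.random.default_rng(0)
d, n_tokens = 64, 10_000
y1 = rng.normal(size=d)
y2 = 0.9 * y1 + 0.1 * rng.normal(size=d)     # a nearby, highly-coupled feature
X = rng.normal(size=(n_tokens, d))           # stand-in "token embeddings"
a1, a2 = X @ y1, X @ y2                      # dot products with each feature

slope, _ = np.polyfit(a1, a2, deg=1)
r2 = np.corrcoef(a1, a2)[0, 1] ** 2
print(f"C(y1, y2) = {coupling_coefficient(y1, y2):.3f}, "
      f"fitted slope = {slope:.3f}, r^2 = {r2:.3f}")
```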
Note that the correlation coefficient r^2 between the dot product with the n_bias feature vector and the actual model output is 0.88, indicating that the feature vector is a very good predictor of model output.

We then investigated the tokens in a 1M-token subset of The Pile that maximally activated the n_bias feature vector. These tokens were primarily female names: tokens like "Rita", "Catherine", and "Mary", along with female name suffixes like "a" (as in Phillipa), "ine" (as in Josephine), and "ia" (as in Antonia). Surprisingly, the least-activating tokens were generally male common nouns, such as "husband", "brother", and "son" – but also words like "his", and even "male". This evidence even further supports the hypothesis that the model specifically uses gendered features in order to determine which occupations are most likely to be associated with a name. However, it is worth noting that part of the power of ObProp is that it allows us to test hypotheses such as this without needing to run the model on large datasets and record the tokens with the highest feature vector activations: simply by virtue of the extremely high cosine similarity between the n_bias feature vector and the gendered-pronoun feature vector, we could infer that the model was using gendered information to predict occupations. As such, looking at the maximally-activating tokens primarily served as a sanity check, verifying that the feature vectors returned by ObProp are human-interpretable.

§.§ Quantitative analysis across observables

We now evaluate ObProp's performance across a broader variety of tasks, including subject pronoun prediction, identifying American politicians' party affiliations, and distinguishing between C and Python code. We compare the performance of these feature vectors to the performance of feature vectors obtained by linear/logistic regression, a more traditional method (used by e.g. <cit.> and others), but a more data-intensive one. For the former two tasks, we evaluate the correlation between the feature vectors and model outputs on the aforementioned artificial dataset used in the subject pronoun prediction experiments, and on an artificial dataset comprising 40 Democratic politicians and 40 Republican politicians. For the programming language classification task, we evaluate the effectiveness of feature vectors in differentiating C and Python code using a natural dataset and the Area Under the ROC Curve metric.

The results are given in Table <ref>. For the subject pronoun prediction task, in order for the feature vector found by linear regression to match the performance of the ObProp feature vector, 60 prompts' worth of embeddings had to be used for training; similarly, for the C vs. Python classification task, the logistic regression had to be trained on 50 code snippets' worth of embeddings to obtain equal performance. In the political party prediction task, even when training on 3/4 of the dataset, the linear regression feature vector's performance on the test set was well below that of the ObProp feature vector's performance on the whole dataset. This suggests the ability of ObProp to match the performance of prior methods for finding feature vectors, and to outcompete them especially in the low-data regime.

§ CONCLUSION AND DISCUSSION

In this paper, we introduced observable propagation (or ObProp for short), a novel method for finding feature vectors in transformer models using little to no data.
We developed a theory for analyzing the feature vectors yielded by ObProp, and demonstrated the method's utility for understanding the internal computations carried out by a model. In our case studies, we found that investigating the norms of feature vectors obtained via ObProp can be used to predict relevant attention heads for a task without actually running the model on any data; that ObProp can be used to understand when two different tasks utilize the same feature; that coupling coefficients can be used to show the extent to which a high output for one observable implies a high output for another on a general distribution of data; and that the feature vectors returned by ObProp accurately predict model behavior. We also demonstrated that in data-scarce settings, ObProp outperforms traditional data-heavy probing approaches for finding feature vectors. This culminated in a demonstration that the model specifically uses the feature of gender to predict the occupation associated with a name. Notably, even though experiments on larger datasets further supported this claim, observable propagation alone was able to provide striking evidence of this using minimal amounts of data. We hope that our approach, being independent of data, can democratize interpretability research and facilitate broader-scale investigations.

Furthermore, the conclusion that the model uses the same mechanisms to predict grammatical gender as it does to predict occupations portends difficulties in attempting to debias the model. It means that inexpensive inference-time attempts to remove bias from the model will likely also decrease model performance on desired tasks like correct gendered pronoun prediction (see Appendix <ref> for additional experiments). This reveals a clear direction for future work: developing more powerful methods that ensure models are both unbiased and useful.

Note that although ObProp demonstrates significant promise in cheaply unlocking the internal computations of language models, it does have limitations. In particular, ObProp only addresses the OV circuits of transformers, ignoring computations in QK circuits responsible for mechanisms such as induction heads <cit.>. However, even though QK circuits are responsible for moving information around in transformers, OV circuits are where computation on this information occurs. Thus, whenever we want to understand what sort of information the model uses to predict one token as opposed to another, the answer to this question lies in the model's OV circuits, and ObProp can provide such answers. Given the power that the current formulation of ObProp has demonstrated already in our experiments, we are very excited about the potential for this method, and methods building upon it, to yield even greater insights in the near future.

§ ETHICS STATEMENT

In this work, we present observable propagation, our method for finding feature vectors used by large language models in their computation of a given task. We demonstrate in an experiment that observable propagation can be used to pinpoint specific features that are responsible for gender bias in large language models, suggesting that observable propagation might prove to be useful in mechanistically understanding how to debias language models. Additionally, the data-efficient nature of observable propagation allows this sort of inquiry into model bias to be democratized, conducted by researchers who might not have access to the compute or data required by other methods.
However, it is important to note that observable propagation does not necessarily make perfect judgments about model bias or lack thereof; a model might be biased even if observable propagation fails to find specific feature vectors responsible for that bias. As such, it is incumbent upon researchers, practitioners, and organizations working with large language models to continue to perform deeper investigations into model bias issues, and to be aware of the ways in which such bias might affect their results.

§ REPRODUCIBILITY STATEMENT

A proof of Theorem <ref> is given in Appendix <ref>; a proof of Theorem <ref> is given in Appendix <ref>. Details on the datasets that we used in our experiments can be found in Appendix <ref>. Further details regarding the experiments in Section <ref> can be found in Appendix <ref>. Details on how we chose the x_0 point used to approximate nonlinearities (as described in <ref>) can be found in Appendix <ref>; for LayerNorm linear approximations, we used the estimation method described in Appendix <ref>. Code for experiments can be found at github.com/jacobdunefsky/ObservablePropagation.

§ FORMAL DEFINITION OF OBSERVABLES

For further clarification, in this section we provide a more formal definition of an observable. An observable is a linear functional n : ℝ^d_vocab → ℝ, where d_vocab is the number of tokens in the model's vocabulary. We refer to the action of taking the inner product of the model's output logits with an observable n as getting the output of the model under the observable n. Note that because all observables are linear functionals on a finite vector space, they can be written as row vectors. As such, it is often convenient to abuse notation and associate an observable with its corresponding vector.

Observables often correspond to tasks on which we want to measure model output. The following example demonstrates how observables corresponding to a specific task can be constructed. Consider the task of predicting gendered subject pronouns. We want to measure the difference between the model's predicted logits for the pronoun "she" and the pronoun "he"; this corresponds to how much more likely the model thinks that the next token will be the female subject pronoun over the male subject pronoun. Then if e_token is the one-hot vector with size d_vocab and a one in the position corresponding to the given token, an observable corresponding to this task is given by n = e_ she - e_ he. This is because the output of the model under this observable precisely corresponds to the desired logit difference.

§ DATASETS

In our experiments, we made use of an artificial dataset, along with a natural dataset. The natural dataset was processed by taking the first 1,000,111 tokens of The Pile <cit.> and then splitting them into prompts of length at most 128 tokens. This yielded 7,680 prompts.

To construct the artificial dataset, we wrote three prompt templates for the n_subj observable (inspired by <cit.>), three prompt templates for the n_obj observable, and three prompt templates for the n_bias observable. A dataset of prompts was then generated by replacing the name placeholder in each prompt template with a name from a set of traditionally-male names and a set of traditionally-female names.
These names were obtained from the Gender by Name dataset from <cit.>, which provides a list of names, the gender traditionally associated with each name, and a measure of the frequency of each name. The top 100 single-token traditionally-male names and the top 100 single-token traditionally-female names from this dataset were collected; this comprised the list of names that we used.

§ MORE ON LAYERNORM GRADIENTS

§.§ LayerNorm gradients are inversely proportional to input norms

In <ref>, it was stated that LayerNorm gradients are not constant, but instead depend on the norm of the input to the LayerNorm. To elaborate, the gradient of n^T (√(d) W LayerNorm(x) + b) can be shown to be (√(d)/‖Px‖) P (I - (Px)(Px)^T/‖Px‖^2) W n (see Appendix <ref>). P and (I - (Px)(Px)^T/‖Px‖^2) are both orthogonal projections that leave n relatively untouched, so the term that is most responsible for affecting the norm of the feature vector is the √(d)/‖Px‖ factor. Now, by Lemma <ref> in Appendix <ref>, we have that ‖Px‖ ≈ ‖x‖, so √(d)/‖Px‖ ≈ √(d)/‖x‖. Thus, if x̄ is a good estimate of ‖x‖ for a given set of input prompts at a given layer, then a good approximation of the gradient of a LayerNorm sublayer is given by (√(d)/x̄) W n. This approximation can be used to speed up the computation of gradients for LayerNorms.

§.§ Feature vector norms with LayerNorms

In <ref>, we explained that looking at the norms of feature vectors can provide a fast and reasonable guess as to which model components will be the most important for a given task. However, there is a caveat that must be taken into account regarding LayerNorms. As shown in Appendix <ref>, the gradient of a LayerNorm sublayer is approximately inversely proportional to the norm of the input to the LayerNorm sublayer.

Now, assume that we have a computational path beginning at a LayerNorm, where x̄ is an estimate of the norm of the inputs to that LayerNorm. Let y be the feature vector for this computational path. Then we have y ≈ (√(d)/x̄) W y', where y' is the feature vector for the tail of the computational path that comes after the initial LayerNorm. Given an input x, we have that

y · x ≈ (√(d)/x̄) (W y') · x = (√(d)/x̄) ‖W y'‖ ‖x‖ cos θ ≈ √(d) ‖W y'‖ cos θ

using ‖x‖ ≈ x̄, where θ is the angle between W y' and x. Therefore, the dot product of an input vector with the feature vector y will be approximately proportional to √(d) ‖W y'‖ – not √(d) ‖W y'‖/x̄. As such, if one wants to use feature vector norms to predict which feature vectors will have the highest dot products with their inputs, then the feature vector's norm must not be multiplied by 1/x̄.

A convenient consequence of this is that when analyzing computational paths that do not involve any compositionality (e.g. analyzing a single attention head or a single MLP), ignoring LayerNorms entirely still provides an accurate idea of the relative importance of attention heads. This is because the only time that a (√(d)/x̄) W factor appears with the 1/x̄ term included is for the final LayerNorm before the logits output. As such, since this factor is not dependent on the layer of the component being analyzed, it can be ignored.

§ DETAILS ON LINEAR APPROXIMATIONS FOR MLPS

Finding feature vectors for MLPs is a relatively straightforward application of the first-order Taylor approximation. However, there is a fear that if one takes the gradient at the wrong point, then the local gradient will not reflect well the larger-scale behavior of the MLP.
For example, the output of the MLP with respect to a given observable might be saturated at a certain point: the gradient at this point might be very small, and might even point in a direction inconsistent with the MLP's gradient in the unsaturated regime.

To alleviate this, we use the following method. Define g(x) = n^T MLP(x), where n is a given observable. If this observable n represents the logit difference between two tokens, then we should be able to find an input on which this difference is very negative, along with an input on which this difference is very positive. For example, if n represents the logit difference between the token "her" and the token "him", then an input containing a male name should make this difference very negative, and an input containing a female name should cause this difference to be very positive.

Thus, we have two points x_- and x_+ such that g(x_-) < 0 and g(x_+) > 0. Since MLPs are continuous, there must therefore be some point x^* at which g(x^*) = 0: a point that lies on the decision boundary of the MLP. It stands to reason that the gradient at this decision boundary is more likely to capture the larger-scale behavior of the MLP, and is less likely to be saturated, when compared to the gradient at more extreme points like x_- and x_+.

§ EMPIRICAL LAYERNORM GRADIENT INVESTIGATIONS

In this section, we put forth various empirical results relevant to the discussion of LayerNorm gradients in <ref>.

§.§ LayerNorm input norms per layer

We calculated the average norms of inputs to each LayerNorm sublayer in the model, over the activations obtained from ten of the prompts from the artificial dataset described in Appendix <ref>. The results can be found in Figure <ref>. The wide variation in the input norms across different layers implies that input norms must be taken into account in any approximation of LayerNorm gradients.

§.§ LayerNorm weight values are very similar

In <ref>, we state that the entries in the LayerNorm scaling matrices tend to be very close together, and use this as justification for treating the weight matrices as scalars. Specifically, we found that the average variance of scaling matrix entries across all LayerNorms in GPT-Neo-1.3B is 0.007827. To determine the extent to which this variance is large, we calculated the ratio of the variance of each LayerNorm's weight matrix's entries to the mean absolute value of each layer's embeddings' entries. The results can be found in Figure <ref>. Note that the highest value found was 0.0731 at Layer 0 – meaning that the average entry in the Layer 0 embeddings was over 13.67 times larger than the variance between entries in that layer's LayerNorm weight. This supports our assertion that LayerNorm scaling matrices can largely be treated as constants.

One possible guess as to why this behavior might occur is this: much of the computation taking place in the model does not occur with respect to basis directions in activation space. However, the diagonal LayerNorm weight matrices can only act on these very basis directions. Therefore, the weight matrices end up settling on the same nearly-constant value in all entries.

§ FURTHER DEBIASING EXPERIMENTS

We ran further experiments on the artificial dataset described in Appendix <ref>, in order to determine the extent to which the feature vectors yielded by observable propagation could be used for debiasing the model's outputs.
The idea is similar to that presented by <cit.>: by adding a feature vector to the activations at a given layer for the name token, we can hopefully shift the model's output to be less biased.

Specifically, we used the following methodology. We paired each of the 300 female-name prompts for n_bias with one of the 300 male-name prompts for n_bias. For each prompt pair, we ran the model on the female-name prompt and on the male-name prompt, recording the scores with respect to the n_bias observable. We then ran the model on the male-name prompt – but added a multiple of the 6::6 feature vector for n_bias described in <ref> to the model's activations for the name token, before the LayerNorm preceding the layer 6 attention sublayer.

In particular, let y be the unit 6::6 feature vector for n_bias, let x_female be the activation vector for the name token at that layer for the female prompt, and let x_male be the activation vector for the name token at that layer for the male prompt. Then we added the vector y' = ((x_female - x_male) · y) y to x_male. If the model were a linear model whose output was solely determined by the dot product of the input at this layer with the feature vector y, then the output of the model in the case where y' is added to the male embeddings would be the same as the output of the model on the female prompt. Therefore, the difference between this patched output and the model's output on the female prompt can be viewed as an indicator of the extent to which the feature vector is affected by nonlinearity in the model. We also ran this same experiment, but adding 2y' instead of y' to the male embeddings, in order to get a stronger debiasing effect.

The results are given in Table <ref>. We see that adding y' to the activations for the male prompts is in fact able to cause the model's output to become closer to that of the female prompts – although not as much as it would if the model were linear. But adding 2y' to the male prompts' activations is able to bring the model's output to within an average of 1.3180 logits of the model's output on the female prompts. And when the mean difference between the patched male prompt outputs and the female prompt outputs is calculated without taking the absolute value, this difference becomes even smaller – only 0.1316 logits on average – which indicates that sometimes, adding 2y' to the male prompts' activations even overshoots the model's behavior on the female prompts. As such, we can infer that this feature vector obtained via observable propagation has utility in debiasing the model.

We then wanted to investigate the extent to which adding this debiasing vector would harm the model's performance on the pronoun prediction task. As such, we repeated these experiments on the dataset of pronoun-prediction prompts, but adding 2y' to the male activations. The results can be found in Table <ref>. The results show that adding the debiasing vector to the male name embeddings also causes the model's ability to correctly predict gendered pronouns to drop dramatically. This suggests that in cases such as this one, where the model uses the same features for undesirable outputs as it does for desirable outputs, inference-time interventions such as that presented by <cit.> may cause an inevitable decrease in model quality.

§ EXPERIMENTAL DETAILS FOR SECTION <REF>

§.§ Datasets

The dataset used in the subject pronoun prediction task is the same artificial dataset described in Appendix <ref>.
The dataset used in the C vs. Python classification task consists of 730 code snippets, each 128 tokens long, taken from the C and Python subsets of the GitHub component of The Pile <cit.>.

The dataset used in the American political party prediction task is an artificial dataset consisting of prompts in which a placeholder is replaced by the name of a politician drawn from a list of 40 Democratic Party politicians and 40 Republican Party politicians. These politicians were chosen according to the lists of the most famous Democrats and the most famous Republicans for Q3 2023 compiled by YouGov, available at https://today.yougov.com/ratings/politics/fame/Democrats/all. The intuition behind this choice of dataset is that the model would be more likely to identify the political affiliation of well-known politicians, because better-known politicians would be more likely to occur in its training data. This is the primary reason that a smaller dataset is used for this task.

§.§ Task definition

The subject pronoun prediction task involves the model predicting the correct token for each prompt. The target scores are the difference between the model's logit prediction for the token "she" and its logit prediction for the token "he".

The political party prediction task also involves the model predicting the correct token for each prompt. The target scores are the difference between the model's logit prediction for the token "Democrat" and its logit prediction for the token "Republican".

For the C vs. Python classification task, because the data is drawn from a diverse corpus of code, the task is treated as a binary classification task instead of a token prediction task.

§.§ Feature vectors

The feature vector used for the pronoun prediction task is the feature vector corresponding to the computational path 6::6 → 9::1 → 13::11 for the n_subj observable.

The feature vector used for the political party prediction task is the feature vector corresponding to attention head 15::8 for the observable defined by e_ Democrat - e_ Republican.

The feature vector used for the C versus Python classification task is the feature vector corresponding to attention head 16::9 for the observable defined by e_ ): - e_ ){. (The intuition behind this observable is that in Python, the first line of a function definition ends in the token "):", whereas in C, it ends in the token "){".)

The regression feature vectors for each task were trained on model embeddings at the same layer as the ObProp feature vectors for that task. Thus, for example, the linear regression feature vector for the pronoun prediction task was trained on model embeddings at layer 6.
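For reference, the probing baseline can be sketched as follows; the embeddings and labels here are synthetic, whereas in our experiments they come from the model's layer-6/15/16 activations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 64
true_dir = rng.normal(size=d_model)          # direction generating the labels
X = rng.normal(size=(200, d_model))          # stand-in layer embeddings
labels = (X @ true_dir > 0).astype(int)      # stand-in binary task labels

probe = LogisticRegression(max_iter=1000).fit(X, labels)
feature_probe = probe.coef_[0]               # the probe's "feature vector"

cos = (feature_probe @ true_dir /
       (np.linalg.norm(feature_probe) * np.linalg.norm(true_dir)))
print(f"cosine similarity to the generating direction: {cos:.3f}")
```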
§.§ Task evaluation

For the pronoun prediction task, the predicted score was determined as the dot product of the feature vector with the model's embedding at layer 6 for the name token in the prompt.

For the political party prediction task, the predicted score was determined as the dot product of the feature vector with the model's embedding at layer 15 for the last token in the politician's name in each prompt.

For the C versus Python classification task, the predicted score for each code snippet was determined by taking the mean of the model's embeddings at layer 16 over all tokens in the code snippet, and then taking the dot product of the feature vector with those mean embeddings.

§ PROOF OF THEOREM <REF>

Theorem <ref> (restated). Define f(x; n) = n · LayerNorm(x). Define θ(x; n) = arccos(n · ∇_x f(x; n) / (‖n‖ ‖∇_x f(x; n)‖)) – that is, θ(x; n) is the angle between n and ∇_x f(x; n). Then if n ∼ 𝒩(0, I) in ℝ^d, and d ≥ 8, then

𝔼[θ(x; n)] < 2 arccos(√(1 - 1/(d-1)))

To prove this, we will introduce a lemma:

Lemma. Let y be an arbitrary vector. Let A = I - vv^T/‖v‖^2 be the orthogonal projection onto the hyperplane normal to v. Then the cosine similarity between y and Ay is given by √(1 - cos(θ)^2), where cos(θ) is the cosine similarity between y and v.

Proof. Assume without loss of generality that y is a unit vector. (Otherwise, we could rescale it without affecting the angle between y and v, or the angle between y and Ay.) We have Ay = y - (y · v/‖v‖^2) v. Then,

y · Ay = y · (y - (y · v/‖v‖^2) v) = ‖y‖^2 - (y · v)^2/‖v‖^2 = 1 - (y · v)^2/‖v‖^2

and

‖Ay‖^2 = (y - (y · v/‖v‖^2) v) · (y - (y · v/‖v‖^2) v)
       = y · Ay - (y · v/‖v‖^2) v · (y - (y · v/‖v‖^2) v)
       = y · Ay - (y · v)^2/‖v‖^2 + ‖(y · v/‖v‖^2) v‖^2
       = y · Ay - (y · v)^2/‖v‖^2 + (y · v)^2/‖v‖^2
       = y · Ay

Now, the cosine similarity between y and Ay is given by

(y · Ay)/(‖y‖ ‖Ay‖) = (y · Ay)/‖Ay‖ = ‖Ay‖^2/‖Ay‖ = ‖Ay‖

At this point, note that ‖Ay‖ = √(y · Ay) = √(1 - (y · v)^2/‖v‖^2). But (y · v)/‖v‖ is just the cosine similarity between y and v. Thus, if we denote the angle between y and v by θ, we have ‖Ay‖ = √(1 - cos(θ)^2). ∎

Now, we are ready to prove Theorem <ref>. First, as noted by <cit.>, we have that LayerNorm(x) = Px/‖Px‖, where P = I - (1/d)1⃗1⃗^T is the orthogonal projection onto the hyperplane normal to 1⃗, the vector of all ones. Thus, we have

f(x; n) = n^T (Px/‖Px‖)

Using the multivariate chain rule, along with the rule that the derivative of x/‖x‖ is given by I/‖x‖ - xx^T/‖x‖^3 (see 2.6.1 of <cit.>), we thus have that

∇_x f(x; n) = (n^T (I/‖Px‖ - (Px)(Px)^T/‖Px‖^3) P)^T
            = ((1/‖Px‖) n^T (I - (Px)(Px)^T/‖Px‖^2) P)^T
            = (1/‖Px‖) P (I - (Px)(Px)^T/‖Px‖^2) n      (because P is symmetric)

Denote Q = I - (Px)(Px)^T/‖Px‖^2. Note that this is an orthogonal projection onto the hyperplane normal to Px. We now have that ∇_x f(x; n) = (1/‖Px‖) PQn. Because we only care about the angle between n and ∇_x f(x; n), it suffices to look at the angle between n and PQn, ignoring the 1/‖Px‖ term.

Denote the angle between n and PQn as θ(x, n). (Note that θ is also a function of x, because Q is a function of x.) Then if θ_Q(x, n) is the angle between n and Qn, and θ_P(x, n) is the angle between Qn and PQn, then θ(x, n) ≤ θ_Q(x, n) + θ_P(x, n), so 𝔼[θ(x, n)] ≤ 𝔼[θ_Q(x, n)] + 𝔼[θ_P(x, n)].

Using Lemma <ref>, we have that θ_Q(x, n) = arccos(√(1 - cos(ϕ(n, Px))^2)), where ϕ(n, Px) is the angle between n and Px.
Now, because n ∼ 𝒩(0, I), we have 𝔼[cos(ϕ(n, Px))^2] = 1/d, using the well-known fact that the expected squared dot product between a uniformly distributed unit vector in ℝ^d and a given unit vector in ℝ^d is 1/d.

At this point, define g(t) = arccos(√(1 - t)), and let h(t) = g'(1/(d-1))(t - 1/(d-1)) + g(1/(d-1)) be the tangent line to g at t = 1/(d-1). Then if 1/(d-1) < c, where c is the least solution to g'(c) = (π - 2g(c))/(2(1 - c)), then h(t) ≥ g(t). (Note that g(t) is convex on (0, 0.5] and concave on [0.5, 1). Therefore, there are exactly two solutions to g'(c) = (π - 2g(c))/(2(1 - c)). The lesser of the two solutions is the value at which g'(c) equals the slope of the line between (c, g(c)) and (1, π/2) – the latter point being the maximum of g – at the same time that g''(c) ≥ 0.) One can compute c ≈ 0.155241…, so if d ≥ 8, then 1/(d-1) < c is satisfied, and thus h(t) ≥ g(t). We therefore have the following inequality:

h(1/(d-1)) > h(1/d)
           = h(𝔼[cos(ϕ(n, Px))^2])
           = 𝔼[h(cos(ϕ(n, Px))^2)]      (due to linearity of h)
           ≥ 𝔼[g(cos(ϕ(n, Px))^2)]      (because h(t) ≥ g(t) for all t)
           = 𝔼[θ_Q(x, n)]

Now, h(1/(d-1)) = g(1/(d-1)) = arccos(√(1 - 1/(d-1))). Thus, we have that arccos(√(1 - 1/(d-1))) > 𝔼[θ_Q(x, n)].

The next step is to determine an upper bound for 𝔼[θ_P(x, n)]. By Lemma <ref>, we have that θ_P(x, n) = arccos(√(1 - cos(ϕ(Qn, 1⃗))^2)). Now, note that because n ∼ 𝒩(0, I), Qn is distributed according to a unit Gaussian in Im Q, the (d-1)-dimensional hyperplane orthogonal to Px. Note that because 1⃗ is orthogonal to Px (by the definition of P) and Px is orthogonal to Im Q, this means that 1⃗ ∈ Im Q. Now, let us apply the same fact from earlier: the expected squared dot product between a uniformly distributed unit vector in ℝ^(d-1) and a given unit vector in ℝ^(d-1) is 1/(d-1). Thus, we have that 𝔼[cos(ϕ(Qn, 1⃗))^2] = 1/(d-1). From this, by the same logic as in the previous case, arccos(√(1 - 1/(d-1))) ≥ 𝔼[θ_P(x, n)].

Adding this inequality to the inequality for 𝔼[θ_Q(x, n)], we have

2 arccos(√(1 - 1/(d-1))) > 𝔼[θ_Q(x, n)] + 𝔼[θ_P(x, n)] ≥ 𝔼[θ(x, n)]. ∎

§ EMPIRICAL RESULTS REGARDING THEOREM <REF>

Note that Theorem <ref> assumes that the observables n are normally distributed, which may not necessarily hold in practice (although given that observables and observable-derived feature vectors are only introduced in this work, it is hard to say whether this is false more generally; more research is necessary, including research on the sorts of observables that practitioners wish to analyze in practice). However, the intention of Theorem <ref> is to provide motivation that underpins what we found empirically: namely, that feature vectors computed by taking LayerNorm into account have extremely high cosine similarities with feature vectors computed without taking LayerNorm into account.

In particular, for the feature vectors considered in Section <ref>, these cosine similarities and angles are given in Table <ref>. For reference, note that the upper bound on the angle between these feature vectors according to Theorem <ref> is approximately 0.0442 radians. The feature vectors for subject pronoun prediction have a higher angle between them of 0.0664 radians, but this can be attributed to the fact that the circuit for these feature vectors goes through multiple LayerNorms.
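The bound and the underlying angle can be checked numerically. The following sketch evaluates 2 arccos(√(1 - 1/(d-1))) at d = 2048 and Monte Carlo estimates the angle for the core LayerNorm map x → Px/‖Px‖:

```python
import numpy as np

def angle_bound(d):
    # Theorem 1's bound: 2 * arccos(sqrt(1 - 1/(d-1)))
    return 2 * np.arccos(np.sqrt(1 - 1 / (d - 1)))

d = 2048
print(f"bound for d={d}: {angle_bound(d):.4f} rad")    # ~0.0442

# Monte Carlo estimate of E[theta] for x -> Px/||Px||, with n ~ N(0, I)
rng = np.random.default_rng(0)
angles = []
for _ in range(200):
    x, n = rng.normal(size=d), rng.normal(size=d)
    Px = x - x.mean()                            # P projects out the all-ones direction
    g = n - (n @ Px) / (Px @ Px) * Px            # apply Q = I - Px Px^T/||Px||^2
    g = (g - g.mean()) / np.linalg.norm(Px)      # apply P, then the 1/||Px|| factor
    angles.append(np.arccos(n @ g / (np.linalg.norm(n) * np.linalg.norm(g))))
print(f"Monte Carlo mean angle: {np.mean(angles):.4f} rad")
```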
Additionally, the angle for the political party prediction feature vector is also slightly higher than the bound predicted by the theorem; but it is worth noting that the theorem bounds the expected angle, rather than the maximum angle; this also might be due to the scaling matrix W in the LayerNorm (see Appendix <ref>).

§ PROOF OF THEOREM <REF>

Theorem <ref> (restated). Let y_1, y_2 ∈ ℝ^d. Let x be uniformly distributed on the hypersphere defined by the constraints ‖x‖ = s and x · y_1 = k. Then we have

𝔼[x · y_2] = k (y_1 · y_2)/‖y_1‖^2

and the maximum and minimum values of x · y_2 are given by

(‖y_2‖/‖y_1‖)(k cos(θ) ± sin(θ) √(s^2 ‖y_1‖^2 - k^2))

where θ is the angle between y_1 and y_2.

Before proving Theorem <ref>, we will prove a quick lemma.

Lemma. Let 𝒮 be a hypersphere with radius r and center c. Then for a given vector y, the mean squared distance from y to the sphere, 𝔼_{s ∈ 𝒮}[‖y - s‖^2], is given by ‖y - c‖^2 + r^2.

Proof. Without loss of generality, assume that 𝒮 is centered at the origin (so ‖y - c‖^2 = ‖y‖^2). Induct on the dimension of 𝒮. As our base case, let 𝒮 be the 0-sphere consisting of a point in ℝ^1 at -r and a point at r. Then 𝔼_{s ∈ 𝒮}[‖y - s‖^2] = ((y - r)^2 + (y - (-r))^2)/2 = y^2 + r^2.

For our inductive step, assume the inductive hypothesis for spheres of dimension d-2; we will prove the lemma for spheres of dimension d-1 in an ambient space of dimension d. Without loss of generality, let y lie on the first coordinate axis, so that we have y = [y_1 0 0 …]^T. Next, divide 𝒮 into slices along this axis. Denote the slice at position x = x_0 as 𝒮_x_0. Then 𝒮_x_0 is a (d-2)-sphere centered at [x_0 0 0 …]^T, and has radius √(r^2 - x_0^2). Now, by the law of total expectation,

𝔼_{s ∈ 𝒮}[‖y - s‖^2] = 𝔼_{-r ≤ x ≤ r}[𝔼_{s' ∈ 𝒮_x}[‖y - s'‖^2]]

We then have that

𝔼_{s' ∈ 𝒮_x}[‖y - s'‖^2] = (y_1 - x)^2 + 𝔼[s_2'^2 + s_3'^2 + ⋯]

Once again, 𝒮_x is a (d-2)-sphere defined by s_2'^2 + s_3'^2 + ⋯ = r^2 - x^2; since every point of 𝒮_x satisfies this equation, 𝔼[s_2'^2 + s_3'^2 + ⋯] = r^2 - x^2. Thus, we have

𝔼_{s' ∈ 𝒮_x}[‖y - s'‖^2] = (y_1 - x)^2 + r^2 - x^2

𝔼_{s ∈ 𝒮}[‖y - s‖^2] = 𝔼_{-r ≤ x ≤ r}[(y_1 - x)^2 + r^2 - x^2] = (1/2r) ∫_{-r}^{r} [(y_1 - x)^2 + r^2 - x^2] dx = r^2 + y_1^2  ∎

We are now ready to begin the main proof. First, assume that ‖x‖ = 1. Now, the intersection of the (d-1)-sphere defined by ‖x‖ = 1 and the hyperplane x · y_1 = k is a hypersphere of dimension d-2, oriented in the hyperplane x · y_1 = k, and centered at c_1 y_1, where c_1 = k/‖y_1‖^2. Denote this (d-2)-sphere as 𝒮, and denote its radius by r.

Next, define c_2 = k/(y_1 · y_2). Then (c_2 y_2) · y_1 = k, so c_2 y_2 lies in the same hyperplane as 𝒮. Additionally, because c_1 y_1 is in this hyperplane, and c_1 y_1 is also normal to this hyperplane, the vectors c_1 y_1, c_2 y_2, and c_1 y_1 - c_2 y_2 form a right triangle, where c_2 y_2 is the hypotenuse and c_1 y_1 - c_2 y_2 is the leg opposite the angle θ between y_1 and y_2.
As such, we have that ‖c_1 y_1 - c_2 y_2‖ = sin(θ) ‖c_2 y_2‖. Furthermore, we have that (c_1 y_1) · (c_2 y_2) = k^2/‖y_1‖^2, that ‖c_1 y_1‖ = k/‖y_1‖, and that ‖c_2 y_2‖ = k/(‖y_1‖ cos θ).

We will now prove that the maximum and minimum values of y_2 · x are given by (‖y_2‖/‖y_1‖)(k cos(θ) ± sin(θ) √(‖y_1‖^2 - k^2)).

To start, note that the nearest point on 𝒮 to c_2 y_2 and the farthest point on 𝒮 from c_2 y_2 are located at the intersection of 𝒮 with the line through c_2 y_2 and c_1 y_1. To see this, let x_+ be at the intersection of 𝒮 and the line between c_2 y_2 and c_1 y_1. We will show that x_+ is the nearest point on 𝒮 to c_2 y_2. Let x_+' ∈ 𝒮 with x_+' ≠ x_+. Then we have the following cases:

* Case 1: c_2 y_2 is outside of 𝒮. Then ‖c_2 y_2 - c_1 y_1‖ = ‖c_2 y_2 - x_+‖ + ‖x_+ - c_1 y_1‖, because c_2 y_2, x_+, and c_1 y_1 are collinear – so ‖c_2 y_2 - c_1 y_1‖ = ‖c_2 y_2 - x_+‖ + r (because x_+ ∈ 𝒮). By the triangle inequality, we have ‖c_2 y_2 - c_1 y_1‖ ≤ ‖c_2 y_2 - x_+'‖ + ‖x_+' - c_1 y_1‖ = ‖c_2 y_2 - x_+'‖ + r. But this means that ‖c_2 y_2 - x_+‖ ≤ ‖c_2 y_2 - x_+'‖.

* Case 2: c_2 y_2 is inside of 𝒮. Then ‖c_2 y_2 - c_1 y_1‖ = ‖x_+ - c_1 y_1‖ - ‖c_2 y_2 - x_+‖, because c_2 y_2, x_+, and c_1 y_1 are collinear – so ‖c_2 y_2 - c_1 y_1‖ = r - ‖c_2 y_2 - x_+‖. By the triangle inequality, we have ‖x_+' - c_1 y_1‖ ≤ ‖c_2 y_2 - x_+'‖ + ‖c_2 y_2 - c_1 y_1‖, so r ≤ ‖c_2 y_2 - x_+'‖ + r - ‖c_2 y_2 - x_+‖. But this means that ‖c_2 y_2 - x_+‖ ≤ ‖c_2 y_2 - x_+'‖.

A similar argument will show that x_-, the farthest point on 𝒮 from c_2 y_2, is also located at the intersection of 𝒮 with the line between c_2 y_2 and c_1 y_1.

Now, let us find the values of x_+ and x_-. The line between c_2 y_2 and c_1 y_1 can be parameterized by a scalar t as c_1 y_1 + t(c_2 y_2 - c_1 y_1). Then x_+ and x_- are given by c_1 y_1 + t^*(c_2 y_2 - c_1 y_1), where t^* are the solutions to the equation ‖c_1 y_1 + t(c_2 y_2 - c_1 y_1)‖ = 1. We have the following:

1 = ‖c_1 y_1 + t(c_2 y_2 - c_1 y_1)‖^2
  = ‖c_1 y_1‖^2 + 2t (c_1 y_1 · (c_2 y_2 - c_1 y_1)) + t^2 ‖c_2 y_2 - c_1 y_1‖^2
  = ‖c_1 y_1‖^2 + 2t ((c_1 y_1 · c_2 y_2) - ‖c_1 y_1‖^2) + t^2 ‖c_2 y_2‖^2 sin^2 θ
  = k^2/‖y_1‖^2 + 2t (k^2/‖y_1‖^2 - k^2/‖y_1‖^2) + t^2 (k^2/(‖y_1‖^2 cos^2 θ)) sin^2 θ
  = (k^2/‖y_1‖^2)(t^2 tan^2 θ + 1)

Thus, solving for t, we have that t^* = ±√(‖y_1‖^2 - k^2)/(k tan θ). Therefore, we have that

x_+, x_- = c_1 y_1 + t^*(c_2 y_2 - c_1 y_1)
         = k [y_1/‖y_1‖^2 ± (√(‖y_1‖^2 - k^2)/(k tan θ))(y_2/(y_1 · y_2) - y_1/‖y_1‖^2)]

y_2 · x_+, y_2 · x_- = k (y_1 · y_2)/‖y_1‖^2 ± (√(‖y_1‖^2 - k^2)/tan θ)(‖y_2‖^2/(y_1 · y_2) - (y_1 · y_2)/‖y_1‖^2)
                     = k (y_1 · y_2)/‖y_1‖^2 ± (√(‖y_1‖^2 - k^2)/tan θ)(‖y_2‖/(‖y_1‖ cos θ) - ‖y_2‖ cos θ/‖y_1‖)
                     = k (y_1 · y_2)/‖y_1‖^2 ± (‖y_2‖/‖y_1‖) sin θ √(‖y_1‖^2 - k^2)
                     = (‖y_2‖/‖y_1‖)(k cos(θ) ± sin(θ) √(‖y_1‖^2 - k^2))

We will now prove that 𝔼[y_2 · x] = k (y_1 · y_2)/‖y_1‖^2. Before we do, note that we can also use our value of t^* to determine the squared radius of 𝒮. We have that the squared radius of 𝒮 is given by

r^2 = ‖t^*(c_2 y_2 - c_1 y_1)‖^2 = (t^*)^2 sin^2 θ ‖c_2 y_2‖^2 = ((‖y_1‖^2 - k^2)/(k^2 tan^2 θ)) sin^2 θ (k^2/(‖y_1‖^2 cos^2 θ)) = 1 - k^2/‖y_1‖^2

We will use this result soon. Now, on to the main event. Begin by noting that y_2 · x = ‖y_2‖ ‖x‖ cos(y_2, x) = ‖y_2‖ cos(y_2, x), where cos(y_2, x) is the cosine of the angle between y_2 and x. Now, cos(y_2, x) = sgn(c_2) cos(c_2 y_2, x).
And we have that ‖x - c_2 y_2‖^2 = ‖x‖^2 + ‖c_2 y_2‖^2 - 2 ‖x‖ ‖c_2 y_2‖ cos(c_2 y_2, x) = 1 + ‖c_2 y_2‖^2 - 2 ‖c_2 y_2‖ cos(c_2 y_2, x). Therefore, we have

cos(y_2, x) = sgn(c_2) cos(c_2 y_2, x) = sgn(c_2) (1 + ‖c_2 y_2‖^2 - ‖x - c_2 y_2‖^2)/(2 ‖c_2 y_2‖)

y_2 · x = ‖y_2‖ cos(y_2, x) = sgn(c_2) ‖y_2‖ (1 + ‖c_2 y_2‖^2 - ‖x - c_2 y_2‖^2)/(2 ‖c_2 y_2‖)

Taking expectations,

𝔼[y_2 · x] = sgn(c_2) ‖y_2‖ (1 + ‖c_2 y_2‖^2 - 𝔼[‖x - c_2 y_2‖^2])/(2 ‖c_2 y_2‖)
           = sgn(c_2) ‖y_2‖ (1 + ‖c_2 y_2‖^2 - (1 - k^2/‖y_1‖^2 + ‖c_1 y_1 - c_2 y_2‖^2))/(2 ‖c_2 y_2‖)

This last line uses Lemma <ref>: c_1 y_1 is the center of 𝒮, so the expected squared distance between c_2 y_2 and a point on 𝒮 is given by (1 - k^2/‖y_1‖^2) + ‖c_1 y_1 - c_2 y_2‖^2, where 1 - k^2/‖y_1‖^2 is the squared radius of 𝒮 and ‖c_1 y_1 - c_2 y_2‖^2 is the squared distance from c_2 y_2 to the center. We can use this lemma because c_2 y_2 is in the same hyperplane as 𝒮, so we can treat this situation as being set in a space of dimension d-1.

Now, continue to simplify:

𝔼[y_2 · x] = sgn(c_2) ‖y_2‖ (‖c_2 y_2‖^2 + k^2/‖y_1‖^2 - sin^2 θ ‖c_2 y_2‖^2)/(2 ‖c_2 y_2‖)
           = sgn(c_2) ‖y_2‖ (‖c_2 y_2‖^2 cos^2 θ + k^2/‖y_1‖^2)/(2 ‖c_2 y_2‖)
           = sgn(c_2) ‖y_2‖ (1/2)(‖c_2 y_2‖ cos^2 θ + k^2/(‖y_1‖^2 ‖c_2 y_2‖))
           = sgn(c_2) ‖y_2‖ (1/2)(k cos θ/‖y_1‖ + k cos θ/‖y_1‖)
           = sgn(c_2) k ‖y_2‖ cos θ/‖y_1‖
           = k ‖y_2‖ cos θ/‖y_1‖
           = k (y_1 · y_2)/‖y_1‖^2

The last thing to do is to note that the above formulas are only valid when ‖x‖ = 1. But if ‖x‖ = s, this is equivalent to the case where ‖x‖ = 1 if we scale y_1 and y_2 by s. Scaling those two vectors by s gives us the final formulas in Theorem <ref>. ∎

§ TOP ACTIVATING TOKENS ON 1M TOKENS FROM THE PILE FOR THE 6::6 FEATURE VECTORS

In <ref>, in order to confirm that the feature vectors that we found for attention head 6::6 corresponded to notions of gender, we looked at the tokens from a dataset of 1M tokens from The Pile (see Appendix <ref>) that maximally and minimally activated these feature vectors.
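Such token lists can be produced with a few lines given cached activations; the names and toy values below are illustrative:

```python
import torch

def top_activating_tokens(acts, token_strs, y, k=30):
    """Rank tokens by the dot product of their layer activation with feature y.

    acts       : (num_tokens, d_model) activations at the feature vector's layer
    token_strs : list of the corresponding token strings
    y          : (d_model,) feature vector
    """
    scores = acts @ y
    order = torch.argsort(scores, descending=True)
    top = [(token_strs[i], scores[i].item()) for i in order[:k].tolist()]
    bottom = [(token_strs[i], scores[i].item()) for i in order[-k:].tolist()]
    return top, bottom

torch.manual_seed(0)
acts = torch.randn(1000, 16)
toks = [f"tok{i}" for i in range(1000)]
y = torch.randn(16)
top, bottom = top_activating_tokens(acts, toks, y, k=5)
```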
§.§feature vector The thirty highest-activating tokens, along with the prompts from which they came, and their scores, are given below: * Highest-activating token #1: * Excerpt from prompt: * Token: * Score: 18.372 * Highest-activating token #2: * Excerpt from prompt: * Token: * Score: 17.388 * Highest-activating token #3: * Excerpt from prompt: * Token: * Score: 16.815 * Highest-activating token #4: * Excerpt from prompt: * Token: * Score: 16.309 * Highest-activating token #5: * Excerpt from prompt: * Token: * Score: 16.267 * Highest-activating token #6: * Excerpt from prompt: * Token: * Score: 16.171 * Highest-activating token #7: * Excerpt from prompt: * Token: * Score: 16.079 * Highest-activating token #8: * Excerpt from prompt: * Token: * Score: 16.039 * Highest-activating token #9: * Excerpt from prompt: * Token: * Score: 15.967 * Highest-activating token #10: * Excerpt from prompt: * Token: * Score: 15.906 * Highest-activating token #11: * Excerpt from prompt: * Token: * Score: 15.582 * Highest-activating token #12: * Excerpt from prompt: * Token: * Score: 15.443 * Highest-activating token #13: * Excerpt from prompt: * Token: * Score: 15.374 * Highest-activating token #14: * Excerpt from prompt: * Token: * Score: 15.358 * Highest-activating token #15: * Excerpt from prompt: * Token: * Score: 15.312 * Highest-activating token #16: * Excerpt from prompt: * Token: * Score: 15.275 * Highest-activating token #17: * Excerpt from prompt: * Token: * Score: 15.246 * Highest-activating token #18: * Excerpt from prompt: * Token: * Score: 15.182 * Highest-activating token #19: * Excerpt from prompt: * Token: * Score: 15.135 * Highest-activating token #20: * Excerpt from prompt: * Token: * Score: 15.111 * Highest-activating token #21: * Excerpt from prompt: * Token: * Score: 15.053 * Highest-activating token #22: * Excerpt from prompt: * Token: * Score: 15.051 * Highest-activating token #23: * Excerpt from prompt: * Token: * Score: 14.979 * Highest-activating token #24: * Excerpt from prompt: * Token: * Score: 14.968 * Highest-activating token #25: * Excerpt from prompt: * Token: * Score: 14.930 * Highest-activating token #26: * Excerpt from prompt: * Token: * Score: 14.699 * Highest-activating token #27: * Excerpt from prompt: * Token: * Score: 14.686 * Highest-activating token #28: * Excerpt from prompt: * Token: * Score: 14.658 * Highest-activating token #29: * Excerpt from prompt: * Token: * Score: 14.626 * Highest-activating token #30: * Excerpt from prompt: * Token: * Score: 14.578The thirty lowest-activating tokens, along with the prompts from which they came, and their scores, are given below: * Lowest-activating token #1: * Excerpt from prompt: * Token: * Score: -12.129 * Lowest-activating token #2: * Excerpt from prompt: * Token: * Score: -11.344 * Lowest-activating token #3: * Excerpt from prompt: * Token: * Score: -11.146 * Lowest-activating token #4: * Excerpt from prompt: * Token: * Score: -11.016 * Lowest-activating token #5: * Excerpt from prompt: * Token: * Score: -10.793 * Lowest-activating token #6: * Excerpt from prompt: * Token: * Score: -10.682 * Lowest-activating token #7: * Excerpt from prompt: * Token: * Score: -10.577 * Lowest-activating token #8: * Excerpt from prompt: * Token: * Score: -10.503 * Lowest-activating token #9: * Excerpt from prompt: * Token: * Score: -10.483 * Lowest-activating token #10: * Excerpt from prompt: * Token: * Score: -10.296 * Lowest-activating token #11: * Excerpt from prompt: * Token: * Score: -10.294 * Lowest-activating token #12: * Excerpt 
from prompt: * Token: * Score: -10.251 * Lowest-activating token #13: * Excerpt from prompt: * Token: * Score: -10.148 * Lowest-activating token #14: * Excerpt from prompt: * Token: * Score: -10.112 * Lowest-activating token #15: * Excerpt from prompt: * Token: * Score: -10.053 * Lowest-activating token #16: * Excerpt from prompt: * Token: * Score: -10.050 * Lowest-activating token #17: * Excerpt from prompt: * Token: * Score: -10.033 * Lowest-activating token #18: * Excerpt from prompt: * Token: * Score: -10.031 * Lowest-activating token #19: * Excerpt from prompt: * Token: * Score: -9.934 * Lowest-activating token #20: * Excerpt from prompt: * Token: * Score: -9.932 * Lowest-activating token #21: * Excerpt from prompt: * Token: * Score: -9.883 * Lowest-activating token #22: * Excerpt from prompt: * Token: * Score: -9.824 * Lowest-activating token #23: * Excerpt from prompt: * Token: * Score: -9.717 * Lowest-activating token #24: * Excerpt from prompt: * Token: * Score: -9.691 * Lowest-activating token #25: * Excerpt from prompt: * Token: * Score: -9.652 * Lowest-activating token #26: * Excerpt from prompt: * Token: * Score: -9.613 * Lowest-activating token #27: * Excerpt from prompt: * Token: * Score: -9.405 * Lowest-activating token #28: * Excerpt from prompt: * Token: * Score: -9.372 * Lowest-activating token #29: * Excerpt from prompt: * Token: * Score: -9.351 * Lowest-activating token #30: * Excerpt from prompt: * Token: * Score: -9.272§.§feature vector The thirty highest-activating tokens, along with the prompts from which they came, and their scores, are given below: * Highest-activating token #1: * Excerpt from prompt: * Token: * Score: 18.372 * Highest-activating token #2: * Excerpt from prompt: * Token: * Score: 17.388 * Highest-activating token #3: * Excerpt from prompt: * Token: * Score: 16.815 * Highest-activating token #4: * Excerpt from prompt: * Token: * Score: 16.309 * Highest-activating token #5: * Excerpt from prompt: * Token: * Score: 16.267 * Highest-activating token #6: * Excerpt from prompt: * Token: * Score: 16.171 * Highest-activating token #7: * Excerpt from prompt: * Token: * Score: 16.079 * Highest-activating token #8: * Excerpt from prompt: * Token: * Score: 16.039 * Highest-activating token #9: * Excerpt from prompt: * Token: * Score: 15.967 * Highest-activating token #10: * Excerpt from prompt: * Token: * Score: 15.906 * Highest-activating token #11: * Excerpt from prompt: * Token: * Score: 15.582 * Highest-activating token #12: * Excerpt from prompt: * Token: * Score: 15.443 * Highest-activating token #13: * Excerpt from prompt: * Token: * Score: 15.374 * Highest-activating token #14: * Excerpt from prompt: * Token: * Score: 15.358 * Highest-activating token #15: * Excerpt from prompt: * Token: * Score: 15.312 * Highest-activating token #16: * Excerpt from prompt: * Token: * Score: 15.275 * Highest-activating token #17: * Excerpt from prompt: * Token: * Score: 15.246 * Highest-activating token #18: * Excerpt from prompt: * Token: * Score: 15.182 * Highest-activating token #19: * Excerpt from prompt: * Token: * Score: 15.135 * Highest-activating token #20: * Excerpt from prompt: * Token: * Score: 15.111 * Highest-activating token #21: * Excerpt from prompt: * Token: * Score: 15.053 * Highest-activating token #22: * Excerpt from prompt: * Token: * Score: 15.051 * Highest-activating token #23: * Excerpt from prompt: * Token: * Score: 14.979 * Highest-activating token #24: * Excerpt from prompt: * Token: * Score: 14.968 * Highest-activating token #25: * Excerpt 
from prompt: * Token: * Score: 14.930 * Highest-activating token #26: * Excerpt from prompt: * Token: * Score: 14.699 * Highest-activating token #27: * Excerpt from prompt: * Token: * Score: 14.686 * Highest-activating token #28: * Excerpt from prompt: * Token: * Score: 14.658 * Highest-activating token #29: * Excerpt from prompt: * Token: * Score: 14.626 * Highest-activating token #30: * Excerpt from prompt: * Token: * Score: 14.578 The thirty lowest-activating tokens, along with the prompts from which they came, and their scores, are given below:* Lowest-activating token #1: * Excerpt from prompt: * Token: * Score: -11.732 * Lowest-activating token #2: * Excerpt from prompt: * Token: * Score: -11.608 * Lowest-activating token #3: * Excerpt from prompt: * Token: * Score: -11.324 * Lowest-activating token #4: * Excerpt from prompt: * Token: * Score: -11.228 * Lowest-activating token #5: * Excerpt from prompt: * Token: * Score: -11.007 * Lowest-activating token #6: * Excerpt from prompt: * Token: * Score: -10.971 * Lowest-activating token #7: * Excerpt from prompt: * Token: * Score: -10.884 * Lowest-activating token #8: * Excerpt from prompt: * Token: * Score: -10.854 * Lowest-activating token #9: * Excerpt from prompt: * Token: * Score: -10.794 * Lowest-activating token #10: * Excerpt from prompt: * Token: * Score: -10.793 * Lowest-activating token #11: * Excerpt from prompt: * Token: * Score: -10.696 * Lowest-activating token #12: * Excerpt from prompt: * Token: * Score: -10.673 * Lowest-activating token #13: * Excerpt from prompt: * Token: * Score: -10.617 * Lowest-activating token #14: * Excerpt from prompt: * Token: * Score: -10.556 * Lowest-activating token #15: * Excerpt from prompt: * Token: * Score: -10.424 * Lowest-activating token #16: * Excerpt from prompt: * Token: * Score: -10.266 * Lowest-activating token #17: * Excerpt from prompt: * Token: * Score: -10.250 * Lowest-activating token #18: * Excerpt from prompt: * Token: * Score: -10.177 * Lowest-activating token #19: * Excerpt from prompt: * Token: * Score: -10.124 * Lowest-activating token #20: * Excerpt from prompt: * Token: * Score: -10.113 * Lowest-activating token #21: * Excerpt from prompt: * Token: * Score: -10.058 * Lowest-activating token #22: * Excerpt from prompt: * Token: * Score: -10.018 * Lowest-activating token #23: * Excerpt from prompt: * Token: * Score: -9.989 * Lowest-activating token #24: * Excerpt from prompt: * Token: * Score: -9.937 * Lowest-activating token #25: * Excerpt from prompt: * Token: * Score: -9.909 * Lowest-activating token #26: * Excerpt from prompt: * Token: * Score: -9.862 * Lowest-activating token #27: * Excerpt from prompt: * Token: * Score: -9.842 * Lowest-activating token #28: * Excerpt from prompt: * Token: * Score: -9.733 * Lowest-activating token #29: * Excerpt from prompt: * Token: * Score: -9.711 * Lowest-activating token #30: * Excerpt from prompt: * Token: * Score: -9.654
http://arxiv.org/abs/2312.16291v1
{ "authors": [ "Jacob Dunefsky", "Arman Cohan" ], "categories": [ "cs.LG", "cs.CL" ], "primary_category": "cs.LG", "published": "20231226190056", "title": "Observable Propagation: A Data-Efficient Approach to Uncover Feature Vectors in Transformers" }
[ [===== We construct canonical adjoint p-adic L-functions generating the congruence ideal attached to Hida families using Ohta's pairing. We show that these p-adic L-functions, suitably modified by certain Euler factors, are interpolated by a regular element of Hida's universal ordinary Hecke algebra. We also relate them to characteristic series of primitive adjoint Selmer groups.§ INTRODUCTIONIn the context of L-functions, the notion of period often refers to Deligne's period for the value of a motivic L-function at a critical integer. When the L-function has an automorphic origin, its special values can sometimes be related to automorphic periods. Consider an elliptic cuspidal newform f of weight k≥ 2 and level N, and let L( f,s) be the L-function attached to the adjoint lift of the automorphic representation of _2() generated by f. A theorem of Sturm <cit.> implies that the ratioL( f,j)π^2j+k-1⟨ f,f⟩is an algebraic number for all odd integers 1≤ j ≤ k-1, where ⟨ f,f⟩ is the Petersson norm of f for Γ_1(N). The same automorphic period ⟨ f,f⟩ is involved in algebraicity statements on special values of the Rankin-Selberg L-function L(f× g,s), where g is another newform of weight smaller than k.Let p≥ 5 be a prime number. When f is p-ordinary, Schmidt <cit.> constructed a p-adic L-function _p( f,s) interpolating (up to simple fudge factors) quantities like (<ref>) or twisted variants thereof. Hida showed in <cit.> that this construction could be realized for the primitive Hida family F of f, yielding a two-variable p-adic L-function _p( F,κ,s), κ being the weight variable.However, Schmidt-Hida's construction does not quite fit into Coates-Perrin-Riou's framework of p-adic L-functions of motives developed in <cit.>. Indeed, the motivic periods for f and its critical twists should involve the product Ω_f^+Ω_f^- of p-normalized Shimura periods of f rather than ⟨ f,f ⟩.This problem of normalizations of periods is perhaps better directly seen on (<ref>) for j=1, which turns out to be an explicit number depending on f in an elementary way and does not seem to capture deep algebraic invariants such as the size of a Bloch-Kato Selmer group. This observation suggests that _p( F,κ,s) is the ratio of two ”genuine“ (or ”motivic“) p-adic L-functions L_p( F,κ,s) and L_p( F,κ) interpolating (up to simple fudge factors) L( f,j)π^2j+k-1Ω_f^+Ω_f^-⟨ f,f⟩Ω_f^+Ω_f^-respectively. The second one should also generate the congruence ideal of F in most cases. By Hida's formula mentioned above, π^k+1⟨ f,f⟩ is essentially equal to L( f,1), and it is therefore reasonable to call L_p( F,κ) the (weight-variable) adjoint p-adic L-function of F.These expectations are not new; a similar picture for Rankin-Selberg L-series and their p-adic counterparts is drawn by Hida in his monograph <cit.> where many evidences are provided in the CM case. This article focuses on the construction of L_p( F,κ) and the study of its arithmetic properties.Hida<cit.> gave first evidences for the existence of L_p( F,κ) by showing in certain cases that any generator of the congruence ideal of F, evaluated at f, is essentially equal to the second ratio of (<ref>) (see also <cit.>). A construction of an adjoint p-adic L-function (or, rather, an invertible sheaf) on the Eigencurve of tame level N was carried out by Bellaïche <cit.> using Kim's scalar product on overconvergent modular symbols. 
Its vanishing locus is intimately related to the ramification locus of the weight map, which constitutes a geometric counterpart to the support of the congruence module. Note that Bellaïche's interpolation formula is only given at crystalline points of trivial Nebentype. Locally on the Eigencurve, the Shimura periods can be normalized in a coherent way at all arithmetic points, with a controllable p-adic error term. This approach has been generalized to other settings in recent works by Balasubramanyam-Longo, Lee, Wu and Lee-Wu <cit.>.Going back to the ordinary setting, we now set up some notations and we state our main results. Let _p be an algebraic closure ofand fix an isomorphism ≃_p. Let Λ=[[1+p]] be the Iwasawa algebra. Assume F is a primitive Hida family of tame level N with coefficients in an integrally closed finite flat Λ-algebra . Denote by ρ_F residual p-ordinary Galois representation over G_=(/) attached to F and assume the following condition:CRρ_Fis absolutely irreducible and p-distinguished.Here, ρ_F is called p-distinguished if the semi-simplification of its restriction to G__p=(_p/) is non-scalar.[=Theorem <ref>] Assume ρ_F satisfies (<ref>). There exists a generator L_p( F)∈ of the congruence ideal of F satisfying the following interpolation property. Let f be an arithmetic specialization of F of weight k≥ 2 and level Np^r, r≥ 1, and denote by f^∘ its associated newform. Then the specialization L_p( f) of L_p( F) at f is given by L_p( f)=p^r-1α^r w(f^∘)_p( f^∘)· (-2i)^k⟨ f^∘,f^∘⟩Ω^+_f^∘Ω^-_f^∘∈_p. In the above formula, α and β are the roots of the p-th Hecke polynomial of f^∘ with |α|_p=1, w(f^∘)∈^× is the Atkin-Lehner pseudo-eigenvalue of f^∘, and _p( f^∘) is a modified Euler factor at p given by _p( f^∘)=1 if f^∘= f and by_p( f^∘)=α(p-1)(1-β/α)(1-p^-1β/α) if f^∘≠ f.Let ρ=ρ_F and letbe the local component of Hida's big ordinary Hecke algebra of tame level N associated with ρ.To ρ is also attached a piece e_1(Np^∞)_ρ of the singular homology of the tower of compactified modular curves (X_1(Np^r))_r≥ 1, where e denotes Hida's ordinary projector.Under (<ref>), the work of Emerton-Pollack-Weston<cit.> shows that e_1(Np^∞)_ρ^± is -free of rank 1, where the superscript ± stands for the ±1-eigenspace for the complex conjugation.The choice of a -basis ξ^± of e_1(Np^∞)_ρ^± and Hida's control theorem then simultaneously pin down Shimura periods for all classical eigenforms f with residual Galois representation isomorphic to ρ. Shimura periods for the newform f^∘ associated to f are defined in Remark <ref>.In order to construct L_p( F), we first pass, via intersection of cycles, from e_1(Np^∞)_ρ^± to a cohomology space e^*^1(Np^∞)_ρ^± where e^* is an anti-ordinary projector. We then use a work of Ohta <cit.> to obtain a perfect Λ-adic pairing one^*^1(Np^∞)_ρ^+×e^*^1(Np^∞)_ρ^- which interpolates Petersson norms of anti-ordinary eigenforms at finite level.Since members of F contribute to ordinary cohomology instead of anti-ordinary cohomology, we make use of the Atkin-Lehner involution, which is known by Hida to transports integral structures between ordinary and anti-ordinary cohomology (see Proposition <ref>).Fix a modular p-ordinary residual representation ρ G_→_2(_p) satisfying (<ref>) and a finite set Σ of primes not containing p. The p-ordinarity means that ρ admits a G_-unramified one-dimensional quotient which is fixed. 
These choices give rise to a universal Λ-adic Hecke algebra _Σ(ρ)=_Σ whose spectrum parameterizes Hida families with residual p-ordinary Galois representation isomorphic to ρ that are minimally ramified outside of Σ. In the next theorem, we choosebig enough so that it contains the coefficients of any Hida family appearing in (_Σ). There exists a regular element L_Σ(ρ)∈_Σ⊗_Λ whose specialization at any primitive Hida family F minimally ramified outside Σ with ρ_F≃ρ is given by E_Σ( F)L_p( F) up to -units, where E_Σ( F)∈ is a product of Euler factors associated with the adjoint of F at certain primes of Σ. See Theorem <ref> for a precise statement of this theorem, and in particular, (<ref>) for the definition of the level N_Σ of _Σ and (<ref>) for the definition of the Euler factors.Theorems <ref> and <ref> together can be interpreted as the existence of congruences between Petersson norms of different p-ordinary newforms sharing the same residual Galois representation. As for L_p( F), the element L_Σ(ρ) only depends on the choice of a _Σ-basis of e_1(N_Σ p^∞)_ρ^±. We use here Fukaya-Kato's variant of Ohta's pairing with values in a space of Λ-adic cuspforms, which turns out to be _Σ-free since _Σ is Gorenstein. The appearance of Euler factors comes from the comparison of cohomology at two different levels (as in classical level raising arguments) for which we make use of the main technical result of <cit.>.The author hopes in near future to apply Theorem <ref> to the study of the variation of Iwasawa invariants associated with primitive Rankin-Selberg p-adic L-functions, as constructed in <cit.>. We next relate adjoint p-adic L-functions to their algebraic counterparts called Selmer groups. Let F be a primitive Hida family whose residual Galois representation satisfies (<ref>) and let V_=^⊕ 3 be the adjoint Galois representation associated with F. By ordinarity, V_ has a two-dimensional quotient V_^- which is stable under the action of G__p. We let ^∨=(,/) be the Pontryagin dual of , and set D_=V_⊗_^∨, D_^-=V_^-⊗_^∨. The (primitive) adjoint Selmer group ( F) of F is defined as the kernel of the global-to-local restriction map^1(,D_) ⟶∏_ℓ≠ p^1(I_ℓ,D_) ×^1(I_p,D^-_),where I_ℓ denotes the inertia subgroup at a prime ℓ. Its Pontryagin dual ( F)^∨ is a finitely generated -module. Sinceis integrally closed, it makes sense to consider the characteristic ideal of a finitely generated torsion -module M which we denote by _ M. In the next theorem, we make a slightly stronger assumption than (<ref>) on residual Galois representations:CR'ρ is p-distinguished, and ρ_|G_(μ_p) is absolutely irreducible.We also need a certain finite set of exceptional primes, defined asΣ_e:={ℓπ_f,ℓ is supercuspidal and π_f,ℓ≃π_f,ℓ⊗τ__ℓ^2},where π_f,ℓ is the automorphic representation of _2(_ℓ) of any arithmetic specialization f of F and τ__ℓ^2 is the unramified quadratic character of _ℓ^×. The definition of Σ_e does not depend on the choice of f. Assume that ρ_F satisfies (<ref>). The Selmer group ( F)^∨ is a torsion -module. If F is twist-minimal, then the equality _( F)^∨=∏_ℓ∈Σ_e(1+ℓ^-1)^-1 L_p( F). holds.Recall that F is twist-minimal if it has minimal level among its twists.Note that ( F) is invariant under twists, but L_p( F) isn't, so twist-minimality is a necessary hypothesis in Theorem <ref>.However, the relation between adjoint p-adic L-functions of Hida families and their twists is well-understood (see Proposition <ref>), so this is not a restricting hypothesis. 
We note that a version of Theorem <ref> is stated in <cit.> under rather restrictive assumptions, forcing F to be minimally ramified.Under assumption (<ref>), the ring _Σ turns out to be the universal ring R_Σ for p-ordinary Galois deformations of ρ that are minimally ramified outside Σ.As noticed by Mazur and Tilouine <cit.>, the module of continuous Kähler differentials of R_Σ as a Λ-algebra is related to both adjoint Selmer groups and congruence ideals of systems of eigenvalues appearing in _Σ.This observation is a crucial input in works of Wiles, Taylor-Wiles, and Diamond-Flach-Guo among others and has applications to modularity and to the Bloch-Kato conjecture <cit.>.We obtain a relation between Σ-imprimitive Selmer groups and p-adic L-functions, from which we deduce the equality in Theorem <ref> via a generalization of Greenberg-Vatsal's argument <cit.>.The plan of this article is as follows. In Section <ref> we give the construction of an adjoint p-adic L-function as in Theorem <ref> and we introduce p-adic L-functions with values in big Hecke algebras. We analyze in Section <ref> how these objects change when we vary the tame level, yielding the proof of Theorem <ref>. We then study Selmer groups and prove Theorem <ref> in the end of Section <ref>. Appendix <ref> recalls definitions and standard facts related to congruence modules and congruence ideals. §.§ AcknowledgmentsThe author would like to thank Ming-Lun Hsieh for his comments on this work. This research is partially supported by the Luxembourg National Research Fund, Luxembourg, INTER/ANR/ 18/12589973 GALF. §.§ NotationsWe adopt the convention that the usual Hecke operators (acting on modular forms, cohomology, etc.) in level M are denoted T_n, even in the case when n is not coprime to M. In particular, for a prime ℓ dividing M, T_ℓ is Atkin-Lehner's operator U_ℓ. We use the letterfor Hecke algebras, andfor local components of Hecke algebras (i.e. their localizations at maximal ideals). Adjoint Hecke operators for the Petersson scalar product and adjoint Hecke algebras are typically denoted with an asterisk (T_n^*, ^*, ^*, etc.). § CONSTRUCTION OF A P-ADIC L-FUNCTION §.§ Adjoint L-valuesWe fix in this paragraph an integer M≥ 3. For Γ=Γ_1(M), we let S_k(M) be the space of modular cusp forms of weight k≥ 1 and level Γ. Recall the usual slash operator in weight k defined by(f|s)(z)= (s)^k-1(cz+d)^-kf(az+b/cz+d) for every function f→ on the upper half-plane and every s=([ a b; c d ])∈ S^+=M_2()∩_2()^+.The abstract Hecke ring (S^+,Γ) consisting of double cosets [Γ s Γ], s∈ S^+ acts on S_k(M). For n,d≥ 1 with (d,M)=1, we denote by T_n, T^*_n, ⟨ d ⟩ the usual Hecke operators, adjoint Hecke operators and diamond operators respectively. The Atkin-Lehner involution is W_M=[Γτ_MΓ], whereτ_M=[0 -1;M0 ].Note that τ_M normalizes Γ, and T_n W_M=W_M T^*_n, T^*_nW_M=W_MT_n and ⟨ d ⟩ W_M=W_M⟨ d⟩^-1.The Petersson scalar product is defined by the formula⟨ f,g ⟩_Γ = ∫_X()f(z)g(z)y^k-2dxdy(z=x+iy), where f,g∈ S_k(M) and X=X_1(M)_/ is the compactified modular curve of level Γ. The adjoint of [Γ s Γ] for ⟨·,·⟩_Γ is [Γ s' Γ], where s'=(s)s^-1, i.e., s'=[d -b; -ca ] s=[ a b; c d ].The operators T_n and T_n^* (resp. ⟨ d⟩, and ⟨ d⟩^-1) are adjoint to each other under (·,·)_Γ.The Nebetype action decomposes S_k(M) as a direct sum ⊕ S_k(M,χ) indexed on the set of Dirichlet characters χ (/M)^×→^× and which is Hecke-compatible.Let f(q)=∑_n≥ 1 a_n q^n∈ S_k(M,χ) be a normalized eigenform (i.e., a_1=1 and f|T_n=a_nf for all n≥ 1). 
For every prime ℓ, let α_ℓ and β_ℓ be such that(1-α_ℓ X)(1-β_ℓ X)=1-a_ℓ X + χ(ℓ)ℓ^k-1 X^2, where χ(ℓ)=0 if ℓ |M. Define L^imp(s, f)=∏_ℓ(1-α_ℓ^2/χ(ℓ)ℓ^k-1ℓ^-s)(1-α_ℓβ_ℓ/χ(ℓ)ℓ^k-1ℓ^-s)(1-β_ℓ^2/χ(ℓ)ℓ^k-1ℓ^-s), the product being taken over all prime numbers ℓ. The product converges for (s)>>0 and it has a meromorphic continuation to .When f is new, L^imp(s, f) coincides, up to Euler factors at certain primes ℓ|M, with the L-function L( f,s) attached to the adjoint lift of the automorphic representation of _2() generated by f. Assume f is a newform of level M and Nebentype χ, and let M_χ be the conductor of χ. Then L^imp(1, f)= 2^2kπ^k+1/(k-1)!MM_χφ(M/M_χ)⟨ f,f ⟩_Γ, where φ is Euler's totient function. This follows from <cit.>, using that M≥ 3 by assumption. Assume f is twist-minimal. It is perhaps nicer to reformulate Proposition <ref> in terms of primitive adjoint L-functions as ⟨ f,f ⟩_Γ = (k-1)!Mφ(M)2^2kπ^k+1∏_ℓ∈Σ_e(1+ℓ^-1) L( f,1) =Mφ(M)2^1-k∏_ℓ∈Σ_e(1+ℓ^-1) Γ( f,1) L( f,1), where Σ_e is the set of exceptional primes for f of the introduction and Γ( f,s)=Γ_(s+k-1)Γ_(s) (and Γ_(s)=2(2π)^-sΓ(s), Γ_(s)=π^-s/2Γ(s/2)) is the Gamma factor of the motive attached to f (see <cit.>). We recall the definition of Shimura periods attached to Hecke eigenforms. For a subring A of , consider the local system V_k(A) on X associated with the A-module TSym^k-2(A^⊕ 2). This latter module consists of homogeneous polynomials of degree k-2 of the form P(X,Y)=∑_ik-2ia_iX^iY^k-2-i with a_i∈ A for all 0≤ i ≤ k-2. For such a polynomial P, we let s∈ S=M_2()∩_2() act on P via (P|s)(X,Y)=P( (X Y) (s')^tr), i.e., (P|s)(X,Y)=P(dX-bY,-cX+aY)s=[ a b; c d ].As Γ is torsion-free,^1(X(),V_k(A)) can be identified with the parabolic cohomology group_P^1(Γ,TSym^k-2(A^⊕ 2)). It comes equipped with an action of the abstract Hecke algebra (S,Γ) (<cit.>). For a given f∈ S_k(M), we define classes in ^1(X(),V_k()) by the formulae δ(f)(γ)=∫_z_0^γ z_0(X-zY)^k-2f(z)z, δ^c(f)=-∫_z_0^γ z_0(X-zY)^k-2f^c(z)z,where z_0∈ X(), γ∈Γ and f^c(z)=f(-z). The cohomology classes do not depend on the choice of z_0. The action of the complex conjugation [ΓιΓ], where ι=([10;0 -1 ]), sends δ(f) to δ^c(f). The Eichler-Shimura period map is the -linear isomorphism [ S_k(M) ⊕ S_k(M)∼⟶ ^1(X(),V_k()); (f,g) ↦δ(f)+δ^c(g). ]Take f∈ S_k(M) and s=([ a b; c d ])∈ S^+. From (X-(s· z)Y)^k-2|s'=( s/cz+d)^k-2(X-zY)^k-2 f^c|s=(f|ι s ι)^c,one sees that δ(f)|[Γ s Γ]=δ(f|[Γ s Γ]) δ^c(f)|[Γ s Γ]=δ^c(f|[Γι sιΓ]).In particular, (<ref>) is equivariant with respect to the actions of T_n,T^*_n and ⟨ d ⟩ for n,d≥ 1, (M,d)=1. For any ring A⊆, we put ^1_k(M)_A:= ^1(X(),V_k(A))and we consider the Hecke algebras _k(M)_A=A[T_n,⟨ℓ⟩]_n,ℓ≥ 1, (ℓ,M)=1, ^*_k(M)_A=A[T^*_n,⟨ℓ⟩]_n,ℓ≥ 1, (ℓ,M)=1,as subspaces of _A(^1_k(M)_A). They are finitely generated commutative A-algebras. If 2 is invertible in A, we denote by ^1_k(M)_A^± the subspaces of ^1_k(M)_A where the complex conjugation acts by ±1. For A=, the map f ↦δ^±(f) identifies S_k(M) with ^1_k(M)_A^±, where we have put δ^±(f)=δ(f)±δ^c(f). As S_k(M) admits a basis consisting of modular forms with Fourier coefficients in , for every subring A⊆ we have S_k(M)=⊗_A S_k(M)_A as _k(M)_=⊗_A _k(M)_A-modules, where S_k(M)_A={f∈ S_k(M) f(q)=∑_n≥ 1a_nq^n∈ A⊗_[[q]] }.Recall that ⊂_p≃.For a normalized eigenform f(q)=∑_n≥ 1a_nq^n∈ S_k(M,χ), the ring _f=[a_n]_n≥ 1 is a subring of . 
We denote byλ_f _k(M)__f→_fthe system of Hecke eigenvalues of f given by λ_f(T_n)=a_n and λ_f(⟨ d⟩)=χ(d) for all n,d≥ 1, (d,M)=1.For any field K⊂ containing _f and any _k(M)_K-module M, let M[λ_f] be the λ_f-eigenspace of M.Using the maps δ^± in (<ref>), it is clear that _^1_k(M)^±_[λ_f]=1. Let _f be the completion of _f inside _p≃. The image of ^1_k(M)__f inside ^1_k(M)__p is a _f-lattice, whichwe still denote by ^1_k(M)__f with a slight abuse of notations. As long as we deal with ordinary eigenforms, this won't cause any problem as the ordinary part of ^1_k(M)__f is _f-free (see Proposition <ref> below).Let f∈ S_k(M) be a normalized eigenform. Let ξ_f^± be a _f-basis of^1_k(M)__f∩^1_k(M)^±_[λ_f]. The p-normalized periods of f with respect to ξ^± are the complex numbers Ω_f^±∈^× defined by the relationδ^±(f)=Ω^±_f·ξ_f^±. One can actually replace the coefficient ring _f in the above definition with any PID A⊆ (see e.g. <cit.>). When A is a subring of , one retrieves the usual definition of complex periods attached to normalized eigenforms. We now recall the cohomological interpretation of the Petersson norm of normalized eigenforms. Let A⊆ be a ring, and consider the nondegenerate A-linear pairing [·,·] on TSym^k-2(A^⊕ 2) defined in <cit.>. It is characterized by the identity [(uX+vY)^k-2,(u'X+v'Y)^k-2]=(uv'-u'v)^k-2 (u,u',v,v'∈ A).For s∈ S and P,Q∈TSym^k-2(A^⊕ 2), one checks that [P|s,Q|s]=( s)^k-2[P,Q]. Combining [·,·] with the cup-product in cohomology we obtain a new A-linear pairing (·,·)_π^1_k(M)_A ×^1_k(M)_A →^2(X(),A) ≃ A,where the last isomorphism is the cap-product with the fundamental class of X(), endowed with its natural orientation of complex manifold. For s∈ S, the adjoint of [Γ s Γ] under (·,·)_π is [Γ s' Γ], so for the modified anti-symmetric pairing (x,y) ↦ (x,y|W_M)_π,the Hecke algebras _k(M)_A and ^*_k(M)_A become autoadjoint. Let f,g∈ S_k(M). We have (δ(f),δ^c(g))_π = (2i)^k-1⟨ f,g^c⟩_Γ (δ^+(f),δ^-(g)|W_M)_π =-2(2i)^k-1⟨ f,g^c|W_M⟩_Γ. We start with the first equality. Over , the cap-product with the fundamental class of X() is given by the integration over X() of cohomology classes seen as 2-differential forms. Therefore, (δ(f),δ^c(g))_π = - ∫_X() f(z)g^c(z) [(X-zY)^k-2,(X-zY)^k-2] dz∧ dz= 2i ∫_X() f(z)g^c(z)(z-z)^k-2 dxdy =(2i)^k-1∫_X() f(z)g^c(z)y^k-2 dxdy.This proves the first formula. A similar computation as above yields(δ(h),δ(h'))_π =0=(δ^c(h),δ^c(h'))_π,(δ(h),δ^c(h'))_π =(-1)^k-1(δ^c(h'),δ(h))_π,for any h,h'∈ S_k(M), and from (<ref>) we find δ^-(g)|W_M=δ(g|W_M)+(-1)^k-1δ^c(g|W_M). Hence, by linearity,(δ^+(f),δ^-(g)|W_M)_π = (-1)^k-1( (δ(f),δ^c(g|W_M))_π + (δ(g|W_M),δ^c(f))_π) =(-2i)^k-1(⟨ f,(g|W_M)^c⟩_Γ+⟨ g|W_M,f^c⟩_Γ) = 2(-2i)^k-1⟨ f,(g|W_M)^c⟩_Γ=-2(2i)^k-1⟨ f,g^c|W_M⟩_Γ,since (g|W_M)^c=(-1)^kg^c|W_M. This proves the second formula.§.§ Hida families Let p be an odd prime and let M be of the form Np^r with (N,p)=1 and r≥ 1. We let X_r=X(Γ_1(Np^r))_/. We shall consider the ordinary and anti-ordinary parts of the (co-)homology of X_r, defined with the help of Hida's projectorse=lim_r (T_p)^r!,e^*=lim_r (T^*_p)^r!.In what follows, we drop for convenience the subscriptwhen we work with cohomology with -coefficients. The Atkin-Lehner map · |W_Np^r e^1_k(Np^r)→ e^*^1_k(Np^r) is a -linear isomorphism, and e^1_k(Np^r) and e^*^1_k(Np^r) are -free. See <cit.> and <cit.>.In particular, we have e(Np^r)=e^*^*(Np^r) via T_n ↔ T^*_n. 
Poincaré duality yields a canonical identification _1(X_r(),) = ^1 (X_r(),),which is compatible with the Hecke action, in the sense that the action of T_n and ⟨ d ⟩ on the left hand side corresponds to that of T_n^* and ⟨ d ⟩^-1 on the right hand side respectively. By functoriality of (co)homology, we obtain _r e _1(X_r(),) =_r e^* ^1 (X_r(),) =: e^* ^1 (Np^∞),as Hecke-modules, where the transition maps on cohomology are the trace maps. Since the adjoint Hecke operators are compatible with trace maps in cohomology, e^* ^1 (Np^∞) inherits a structure of module over e^*^*(Np^∞)=_r e^*_2^*(Np^r).We will primarily be interested in e^*^*(Np^∞), but note that it is canonically isomorphic to e(Np^∞)=_r e_2(Np^r). The action of inverse diamond operators endows the ring e^*(Np^∞) with a structure of algebra over Λ̃=_r[(/Np^r)^×]. Letting U_r=1+p^r for all r≥ 1, we have Λ̃=Λ[(/Np)^×] where we put Λ=[[U_1]]=_r [U_1/U_r],and e^*(Np^∞) is known to be finite free over Λ by <cit.>.Letbe a finite flat extension of Λ and let γ be a topological generator of U_1. We identify via ≃_p Dirichlet characters of p-power order and conductor with elements of U_1:=_cts(U_1,_p^×)=_-alg(Λ,_p). The set of arithmetic points ofis defined as ()={ν∈_Λ(,_p)|∃ k≥ 2,∃ε∈U_1, ν(γ)=γ^k-2ϵ(γ) }.We refer to the pair (k,ϵ) in the definition of ν∈() as the weight of ν, and we define a finite flat extension ofby letting _ν=ν(). We further denote by S(N,) the space of ordinary -adic cusp forms of tame level N. Recall that S(N,) is spanned overby formal q-expansions F(q)=∑_n≥ 1 a_nq^n∈[[q]] with the property that there exists a Dirichlet character θ of level dividing Np such that, for all ν∈() of weight (k,ε), the specialization of F at ν yieldsF_ν:=∑_n≥ 1ν(a_n)q^n∈ e S_k(Np^r_ν,θεω^2-k).Here, ω is the Teichmüller character, p^r_ν=max(p,C(ε)) (and C(ε) is the conductor of ε), and _ν is seen inside . On S(N,) there is an action of Hecke operators T_n which is compatible with specialization maps. More precisely, S(N,)=S(N,Λ)⊗ and the Λ-subalgebra of _Λ(S(N,Λ)) generated by the T_n's is canonically isomorphic to e(Np^∞)_. Furthermore, for all k≥ 2 and r≥ 1, e(Np^∞) ⊗_ΛΛ/(ω_k,r)≃ e_k(Np^r),by Hida's control theorem <cit.>, where ω_k,r∈Λ is defined by ω_k,r(γ)=[γ]^p^r-1-γ^(k-2)p^r-1. In particular, e(Np^∞) ≃_r e_k(Np^r) for all integers k≥ 2.A primitive Hida family is a q-expansion F as above which is a normalized eigenform (i.e., a_1=1, T_nF=a_nF for all n≥ 1) and such that F_ν_0 is a primitive newform for at least one ν_0∈(). The character θ in (<ref>), called the tame character of F, is necessarily even, and we have a_p∈^×. Moreover, given ν∈() of weight (k,ε), if F_ν^∘ denotes the newform associated with F_ν, then the following is known. If p divides C(θεω^2-k), then F_ν=F_ν^∘. Otherwise, ε=1, θω^2-k is of prime-to-p conductor and either F_ν=F_ν^∘ and k=2, or F_ν(q)=F_ν^∘(q)-β_ν F_ν^∘(q^p),where β_ν is the non-unit root of the polynomial X^2-ν(a_p)X+θω^2-k(p)p^k-1 (<cit.>).Every normalized eigenform F(q)=∑_n≥ 1a_nq^n ∈ S(N,) gives rise to a homomorphism λ_F e(Np^∞) → sending T_n to a_n. The kernelof λ_F is a minimal prime, and is contained in a unique maximal primeof e(Np^∞). One may assume that the ring of coefficients of F is an integrally closed local domain by takingto be the normalization of /, where :=e(Np^∞)_. Let ' be the subalgebra ofgenerated over Λ by T_n for all n≥1 coprime with Np. Then ' is reduced and it is finitely generated and torsion-free over Λ. 
Let ρ_ G_→_2('⊗_Λ(Λ))be Hida's big ordinary representation and let ρ_ be (the semisimplification of) its residual representation modulo . When ρ_ is absolutely irreducible, it is known by <cit.> that ρ_ is conjugate to a '-valued representation. In what follows it is more convenient to work with the anti-ordinary big Hecke algebra e^*^*(Np^∞). Since e^*^*(Np^∞)=e(Np^∞) canonically, F gives rise to a triple (λ_F^*,^*,^*) defined in the obvious way. Let ^*=e^*^*(Np^∞)_^* be a local component of the big Hecke algebra such that ρ_ satisfies (<ref>). Then e^*^1(Np^∞)^±_^* is free of rank 1 over ^*. This is a reformulation in terms of cohomology groups of <cit.>.§.§ Interpolation of Petersson normsWe keep the notations of the preceding sections. For any k≥ 2 and r≥ 1, we see e^*_k^1(Np^r) as a Λ-module by letting d∈ U_1 actby d^k-2⟨ d ⟩^-1.Then ω_k,r acts trivially on it, and <cit.> yields a Λ/ω_k,rΛ-linear isomorphisme^*^1(Np^∞) ⊗_ΛΛ/(ω_k,r) ≃ e^*_k^1(Np^r).In <cit.>, Ohta constructed a Λ-linear pairing on e^*^1(Np^∞) which interpolates the Poincaré pairings at level Np^r for all r≥ 1. Recall that e^*^*(Np^∞) is a complete semi-local ring, hence it is the finite product of its localizations at maximal ideals. It is enough for our purpose to fix one of its local components ^*=e^*^*(Np^∞)_^* which we see as a Λ-algebra. There exists a pairing (·,·)_Λ e^*^1(Np^∞)_^*× e^*^1(Np^∞)_^*→Λ fitting, for all k≥ 2 and r≥ 1, in the commutative diagram e^*^1(Np^∞)_^*× e^*^1(Np^∞)_^* Λ e^*_k^1(Np^r) × e^*_k^1(Np^r)[U_1/U_r], where the bottom horizontal map is given by (x,y)↦ (-1)^k·∑_d∈ U_1/U_r (dNp^r)^2-k· (x,y|(T_p^*)^r|W_Np^r|⟨ d ⟩^-1)_π· [d], the left vertical map is induced by (<ref>) and the right vertical map sends d∈ U_1 to d^k-2·[d]. Moreover, (·,·)_Λ is a Λ-bilinear perfect pairing satisfying (x|T_n^*,y)_Λ=(x,y|T_n^*)_Λ for all integers n≥ 1. See <cit.>. The factor (Np^r)^2-k comes from the difference of normalizations in the definition of W_Np^r (see <cit.>). The perfectness of Ohta's pairing follows from that of Poincaré's pairing on ^1_2(Np)=^1(X_1(Np)(),) and Hida's control theorem.Choose a finite flat extension ⊂(Λ) of Λ containing the Fourier coefficients of all the normalized eigenforms in S(N,(Λ)) with residual Galois representation isomorphic to ρ_. Replacingwith its integral closure in (Λ) if necessary, we assume thatis normal. Let _^*=e^*^*(Np^∞)_^*⊗_Λ, ^±:=e^*^1(Np^∞)^±_^*⊗_Λ.Given a normalized eigenform F(q)=∑_n≥ 1a_nq^n∈ S(N,) with residual Galois representation ρ_, the extension of scalars of λ_F^* toinduces a -linear surjection ^*_↠ which we still denote by λ_F^*. Letbe the fraction field ofand let _^*=_^* ⊗_. We make the following assumption on F: there exists an -algebra decomposition_^* ≃× X, prwhere the first projection, say e_F, is induced by λ_F^* ⊗. We further assume that^± is _^*-free of rank 1, frand, following the notations of Appendix <ref>, we let ^±_λ_F^*:=^±∩ e_F(^±⊗_). In particular, ^±_λ_F^* is -free of rank 1 by Lemma <ref>. Condition (<ref>) holds if F is primitive of level N by <cit.>. It also holds if _^* is semi-simple (i.e., if ^* is reduced), which happens for instance when N is cube-free <cit.>. Condition (<ref>) holds if ρ_ satisfies (<ref>) by Proposition <ref>.In the following definition, we write (·,·)_^+ ×^- → for the -linear extension of Ohta's pairing (<ref>). Assume (<ref>) and (<ref>) and let ξ_F^± be a -basis of ^±_λ_F^*. 
We define the p-adic adjoint L-function of F by letting L_p( F)=(ξ^+_F,ξ^-_F)_∈.For any ν∈(,_p), we shall write L_p( F,ν):=ν(L_p( F)). Assume (<ref>) and (<ref>). Then the congruence module of F is isomorphic to /L_p( F). The congruence module of F is, by definition, the module C_0(λ_F) over _=e(Np^∞)_⊗_Λ defined in Appendix <ref>, where λ_F _↠ is the -algebra homomorphism sending T_n to a_n. Under the identification =^*, we then have C_0(λ_F)=C_0(λ_F^*), which, in turn, is isomorphic to /L_p( F) by Corollary <ref>.Given a primitive Hida family F, we now express the special values of L_p( F) at arithmetic weights in terms of adjoint L-values. We still assume (<ref>). We first define p-normalized periods as follows. Let ν∈() be of weight (k,ε), let r=r_ν and fix a sign ±. Let ^*_ν be the local factor of e^*_k(Np^r)__ν through which λ^*_F_ν factors, and let ^±_ν be the corresponding direct summand of e^*^1_k(Np^r)__ν.Let also a be the sign of (-1)^k and ± a be the sign of ± (-1)^k. We have^±_λ^*_F⊗_,ν_ν≃ (^±_ν)_λ^*_F_ν≃ (e^1_k(Np^r)^± a__ν)_λ_F_ν,where the first map is the specialization map and the second one is the map W_Np^r^-1. The first isomorphism is Hida's control theorem <cit.>, while the second follows from Proposition <ref>. Note that the last module in (<ref>) coincides with that in (<ref>) with M=Np^r and f=F_ν. For each ν∈(), we define ξ_F_ν^± as the image of ξ^±_F under the map in (<ref>), and we let Ω^±_F_ν∈^× be the associated p-normalized canonical period of Definition <ref>. In other words, ξ_F_ν^±|W_Np^r_ν is the specialization of ξ_F^± a at ν (where a is the sign of (-1)^k), and δ^±(F_ν)=Ω^±_F_ν·ξ_F_ν^±. For a weight ν∈() such that F_ν∈ S_k(Np) is p-old, the p-normalized canonical period of F_ν^∘ can be defined by simply letting Ω^±_F_ν^∘=Ω^±_F_ν. Indeed, by <cit.> there is a “p-stabilization isomorphism” (-)^∘^1_k(Np)_≃^1_k(N)_ sending δ(F_ν) to δ(F_ν^∘) and identifying the lattices (^1_k(Np)__ν)_λ_F_ν and (^1_k(N)__ν)_λ_F_ν^∘. Therefore, the periods of F_ν and of F_ν^∘ with respect to ξ_F_ν^± and (ξ_F_ν^±)^∘ respectively are equal. Assume (<ref>) holds and suppose F is primitive as before. Let ν∈() be of weight (k,ε), let r=r_ν and write the conductor of F_ν^∘ as Np^r_0. Put α_ν=a_p(F_ν), β_ν=(θω^2-kε)(p)p^k-1/α_ν and _p( F_ν^∘)=α_ν(p-1)(1-β_ν/α_ν)(1-p^-1β_ν/α_ν) if F_ν≠ F_ν^∘ and _p( F_ν^∘)=1 otherwise. We have L_p( F,ν)=p^r-1α_ν^r w_ν_p( F_ν^∘)· i(-2i)^k⟨ F_ν^∘,F_ν^∘⟩_Γ_1(Np^r_0)Ω^+_F^∘_νΩ^-_F^∘_ν, where w_ν∈^× is given by F_ν^∘|W_Np^r_0=w_ν (F_ν^∘)^c and Ω_F^∘_ν^± is as in Definition <ref> and Remark <ref>. Let W=W_Np^r for simplicity. Let a be the sign of (-1)^k. Since ν factors through /ω_k,r, L_p( F,ν) equals(-1)^k·∑_d∈ U_1/U_r (dNp^r)^2-k· (ξ_F_ν^+a|W,ξ_F_ν^-a|W|(T_p^*)^r|W|⟨ d ⟩^-1)_π· d^k-2ε(d).Noticing that ξ_F_ν^-a|T_p=α_νξ_F_ν^-a and ξ_F_ν^-a|⟨ d⟩^-1=ε^-1(d)ξ_F_ν^-a, we obtain L_p( F,ν) = (-1)^k(Np^r)^2-k p^r-1α_ν^r· (ξ_F_ν^+a|W,ξ_F_ν^-a|W^2)_π= (-1)^k p^r-1α_ν^r· (ξ_F_ν^+a,ξ_F_ν^-a|W)_π= p^r-1α_ν^r· (ξ_F_ν^+,ξ_F_ν^-|W)_π,since the adjoint of W for (·,·)_π is Np^r· W^-1 and (<ref>) is anti-symmetric. Hence, L_p( F,ν)= -2(2i)^k-1 p^r-1α_ν^r·⟨ F_ν,F^c_ν|W⟩_Γ_1(Np^r)Ω^+_F^∘_νΩ^-_F^∘_ν,by Proposition <ref>. We are reduced to compute ⟨ F_ν,F^c_ν|W⟩_Γ_1(Np^r)/⟨ F^∘_ν,F^∘_ν⟩_Γ_1(Np^r_0)=c·⟨ F_ν,F^c_ν|W⟩_Γ_0(Np^r)/⟨ F^∘_ν,F^∘_ν⟩_Γ_0(Np^r_0), where c=1 if F_ν=F_ν^∘ and c=p-1 otherwise. This is already done in <cit.>, which yields:⟨ F_ν,F^c_ν|W⟩_Γ_0(Np^r)/⟨ F^∘_ν,F^∘_ν⟩_Γ_0(Np^r_0)= {[(-1)^kw_ν if F_ν=F_ν^∘; (-1)^kw_να_ν(1-β_ν/α_ν)(1-p^-1β_ν/α_ν) if F_ν≠ F_ν^∘. ]. 
§.§ A p-adic L-function for varying branchesWe keep the notations of the preceding sections and we fix in particular a local componentof e(Np^∞) with residual Galois representation ρ=ρ_. We let as before ^* denote the corresponding local component of e^*^*(Np^∞) under e(Np^∞)≃ e^*^*(Np^∞), and ^*_=^*⊗_Λ. Put also=e^*^1(Np^∞)_^*⊗_Λ.Assuming (<ref>), we discuss in this paragraph the existence of an element L_p(ρ)∈_ such that, for every normalized eigenform F of level N with residual representation ρ and satisfying (<ref>), we have λ_F(L_p(ρ))=L_p( F). We shall in particular explain how to make a consistent choice of canonical periods for every such eigenform F.Following Fukaya and Kato <cit.>, we define a -linear pairing on × by letting((x,y))_=∑_n≥ 1 (x,y|T_n^*)_· q^n∈ S(N,)_for all (x,y)∈×. The pairing ((·,·))_×→ S(N,)_ is perfect and it satisfies ((x|T_m^*,y))_=((x,y|T_m^*))_=((x,y))_|T_m for all m≥1 and (x,y)∈×. See <cit.>.Hida duality for Λ-adic cusp forms <cit.> implies S(N,Λ)_≃_Λ(,Λ). Hence, under the assumption thatis Gorenstein, we see that S(N,)_ is _-free of rank 1. This Gorensteinness condition is implied by (<ref>). Indeed, Ohta's pairing provides an isomorphism ^+≃_(^-,), hence _≃_(_,). Therefore, the perfectness of ((·,·))_ shows that((ξ^+_,ξ^-_))_ generates S(N,)_ as a _-module if ξ_^± generates ^± as a _^*-module. Assume (<ref>) and let ξ_^± be a generator of ^± as a _^*-module. Put G_:=((ξ^+_,ξ^-_))_. We define the adjoint p-adic L-function of ρ as the unique element L_p(ρ)∈_ such that ∑ F=L_p(ρ)· G_, where the sum runs over the set of all normalized eigenforms with ordinary residual representation isomorphic to ρ. Assume (<ref>) and let ξ_^± be a generator of ^± as a _^*-module. For any normalized eigenform F satisfying (<ref>), we let ξ_F^±= e_F· (L_p(ρ)ξ^±_), where e_F is the projector associated with F. Then ξ_F^± is a generator of ^±_λ^*_F, and λ_F(L_p(ρ))=L_p( F), where L_p( F) is computed with respect to ξ^±_F. Assume F(q)=∑_n≥ 1a_nq^n satisfies (<ref>). Let e_F,e'_F∈_=_⊗_ be the projectors of the algebra decomposition _≃× X, where =(). We put L_F=λ_F(L_p(ρ))∈ for simplicity. Hence, ξ_F^±= L_F e_F·ξ^±_. By multiplicity one, e_FS(N,)_=.F,so F= e_F· L_p(ρ)G_=L_F e_F· G_. Since S(N,)_ is _-free and generated by G_, we thus have C_0(λ_F)≃ (S(N,)_)^λ_F/(S(N,)_)_λ_F=(.e_F· G_)/(.F)=/(L_F). As C_0(λ_F)≃ (^±)^λ_F^*/^±_λ_F^*, we deduce that ξ^±_F generates ^±_λ_F^* since e_F·ξ^±_ generates (^±)^λ_F^*. We now prove that L_p( F):=(ξ^+_F,ξ^-_F)_ equals L_F. Denote by a subscriptthe extension of scalars toof the pairings (·,·)_ and ((·,·))_. Using e_F^2=e_F and Proposition <ref>, we see that e_F· G_ = ((e_F·ξ_^+,e_F·ξ_^-))_=∑_n≥ 1(e_F·ξ_^+,e_F·ξ_^-|T_n^*)_· q^n =∑_n≥ 1(e_F·ξ_^+,e_F·ξ_^-)_· a_n q^n=(e_F·ξ_^+,e_F·ξ_^-)_· F. Hence, multiplying the last equality by L_F^2 and combining it with (<ref>), one obtains L_F· F=(ξ^+_F,ξ^-_F)_· F = L_p( F)· F, so L_F= L_p( F) as claimed. § THE MAIN CONJECTURE §.§ Twists and change of tame levelFix a large enough integrally closed finite flat extensionof Λ as in <ref>. Let F∈ S(N,) be a primitive Hida family of tame character θ whose residual Galois representation satisfies (<ref>). Then there exists a continuous Galois representation ρ_FG_→_2() unramified outside Np such that ∀ℓ∤ Np,{[ ρ_F(σ_ℓ)= a_ℓ(F); ρ_F(σ_ℓ)= ℓ·θ_(ℓ), ].where σ_ℓ is the arithmetic Frobenius at ℓ and θ_ (/Np)^×→^× is such that ν(θ_(ℓ))=(θω^2-kε)(ℓ)ℓ^k-2 for any ν∈() of weight (k,ε). 
In particular, the specialization ρ_ν of ρ_F at any ν∈() is isomorphic to the usual Eichler-Shimura-Deligne representation attached to F_ν.Given a character ψ of prime-to-p conductor, there exists a unique primitive Hida family F_ψ underlying F ⊗ψ. In particular, F_ψ has tame character θψ^2 and if ψ is unramified at a prime ℓ∤ N, then a_ℓ(F_ψ)=ψ(ℓ)a_ℓ(F). We say that F is twist-minimal if F has minimal level among its twists. Let π=⊗'_ℓπ_ℓ be the automorphic representation of _2() generated by F^∘_ν for some arithmetic weight ν∈(). By <cit.>, the automorphic type of π_ℓ at a certain prime ℓ does not depend on the choice of ν. For any prime ℓ, we then defineE_ℓ( F)={[(ℓ-1)(a_ℓ(F)^2-θ_(ℓ)(1+ℓ)^2) if ℓ∤ N;1-ℓ^-1 if π_ℓ is a ramified principal series;1-ℓ^-2if π_ℓ is unramified special; 1if π_ℓ is supercuspidal.; ]. Let N_0 be the tame conductor of the residual representation ρ of F and let Σ be a finite set of primes containing {ℓℓ|(N/N_0)} but not p. Denote by V (resp. V_ν) the underlying space of ρ (resp. of ρ_ν over (_ν)), by V_I_ℓ the co-invariants of V under the action of the inertia group I_ℓ at ℓ and put m_ℓ=V_I_ℓ. Then, the invariance of the Swan conductor under reduction (<cit.> implies_ℓ(ρ_ν)-_ℓ(ρ)=V^I_ℓ- (V_ν)^I_ℓfor every ν∈() and every prime ℓ≠ p, where _ℓ refers to the local conductor at ℓ of Galois representations.In particular, N is a divisor of N_Σ=N_0∏_ℓ∈Σℓ^m_ℓ. Let Σ_sc be the set of supercuspidal primes of F in Σ. * If F is twist-minimal, then Σ={ℓℓ|(N_Σ/N)}∪Σ_sc. * Assume F=F'_ψ for some Dirichlet character ψ unramified outside Σ and F' a twist-minimal newform. Then {ℓℓ|(N_Σ/N)}∪Σ_sc={ℓ∈Σψ_(ℓ)=1 ψ_(ℓ)=θ_(ℓ)}∪Σ_sc, where the index (ℓ) stands for the ℓ-primary component of a Dirichlet character. Moreover, for any ℓ|(N_Σ/N), E_ℓ( F) and E_ℓ( F') are equal up to a unit in . Note that (1) follows from (2). Let N' and θ' be the exact level and the character of F' respectively. For ℓ∈Σ∖ S_sc,(<ref>) and the explicit description of the local Galois representation attached to F' at ℓ (<cit.>) shows that ℓ divides N_Σ/N if and only if N and N' share the same ℓ-adic valuation. This condition is clearly equivalent to ψ_(ℓ)=1 or ψ_(ℓ)=(θ'_(ℓ))^-1=θ_(ℓ). Regarding the last claim, if ℓ|(N_Σ/N), then E_ℓ( F')=E_ℓ( F) unless ℓ∤ N', in which case we have E_ℓ( F')=ψ(ℓ)^2E_ℓ( F). Assume ρ satisfies (<ref>). Let '(N_Σ p^∞) be the Λ-subalgebra of (N_Σ p^∞) generated by the Hecke operators T_n for all n≥ 1 coprime to N_Σ p. * There exists a unique local component '_Σ=e'(N_Σ p^∞)_'_Σ of e'(N_Σ p^∞) such that there exists an isomorphism ρ≃ρ_'_Σ respecting the ordinary filtrations. * There exists a unique maximal ideal _Σ⊂ e(N_Σ p^∞) lifting '_Σ and such that '_Σ≃_Σ for _Σ=e(N_Σ p^∞)__Σ. Moreover, T_ℓ∈_Σ for all ℓ∈Σ and the image of T_ℓ in the localization _Σ vanishes. This is the content of <cit.>.Denote by F_Σ=∑_na_n(F_Σ) the normalized eigenform of S(N_Σ,) sharing the same Hecke eigenvalues as F at primes not dividing N_Σ/N. We have a_n(F_Σ)=0 for n divisible by a prime in Σ and a_n(F_Σ)=a_n(F) otherwise. Note that by Theorem <ref> (2), _Σ is reduced, so F_Σ satisfies (<ref>) and it makes sense to consider its adjoint p-adic L-function. Fix a choice of canonical periods ξ^±_F for F as in Definition <ref>. There is a choice of periods ξ^±_F_Σ for F_Σ such that L_p( F_Σ)=L_p( F)∏_ℓ|(N_Σ/N)E_ℓ( F), where L_p( F_Σ) (resp. L_p( F)) is computed withrespect to ξ^±_F_Σ (resp. to ξ^±_F). 
We keep the notations ^*, ^*,and λ^*_F of <ref> and we denote by ^*_Σ, ^*_Σ, _Σ and λ^*_F_Σ their counterpart for F replaced with F_Σ. First note that F is ℓ-minimal at any divisor ℓ of N_Σ/N by Lemma <ref>, yielding a_ℓ(F)∈^×. Write N_Σ/N as a product of prime powers ∏_i=1^dℓ_i^e_i with 1≤ e_i ≤ 2 and ℓ_i∈Σ. Then e_i=2 only if ℓ_i∤ N. We define a -linear map ε=ε_1∘…∘ε_d _Σ→ as follows. Fix 1≤ i ≤ d, and let ℓ=ℓ_i, e=e_i and M=N ℓ_1^e_1…ℓ_i-1^e_i-1. At level Mp^r, letting Γ=Γ_1(Mp^r) and Γ'=Γ_1(Mℓ^e p^r) we have a map ε_M,ℓ^1(X_1(Mℓ^e p^r),) →^1(X_1(Mp^r),) given by ε_M,ℓ(x)= {[x|([ Γ' Γ]-ℓ^-1V_ℓ^* T_ℓ^*) if e=1; x|([ Γ' Γ]-ℓ^-1 V_ℓ^* T_ℓ^*+ℓ^-1 V_ℓ^2^* ⟨ℓ⟩^-1)if e=2, ]. where V_ℓ^m^*=[Γ'[ 1 0; 0 ℓ^m ]Γ]. Taking the limit over r≥ 1, localizing and extending the scalars to , one obtains the map which we denoted ε_i in (<ref>). Under Poincaré duality, ε coincides with (the extension of scalars of) the map on homology introduced in <cit.> and denoted ε_∞ therein. Denote by (-)^t the adjoint with respect to Ohta's pairings (<ref>) or their specializations (<ref>) at tame level M. For ℓ=ℓ_i as before, one finds that ε_M,ℓ^t(x)={[ x|(V_ℓ-ℓ^-1T_ℓ^* [ ΓΓ' ])if e=1; x|(V_ℓ^2-T_ℓ^*V_ℓ+ ℓ^-1⟨ℓ⟩^-1[ ΓΓ' ]) if e=2, ]. where V_ℓ^m=[Γ[ ℓ^m 0; 0 1 ]Γ']. In particular, we obtain (ε∘ε^t)_|_λ^*_F=∏_ℓ|(N_Σ/N)E'_ℓ( F) 𝕀, whereE'_ℓ( F)=-ℓ a_ℓ(F) E_ℓ( F) if π_ℓ is special or a ramified principal series and E'_ℓ( F)=-ℓ E_ℓ( F) if π_ℓ is spherical. Passing to the ±-part, the map ε^t injects _λ^*_F^± into (^±_Σ)_λ^*_F_Σ. Moreover, <cit.> implies that ε is injective on (^±_Σ)_λ^*_F_Σ so, in fact, ε^t(^±_λ^*_F)=(^±_Σ)_λ^*_F_Σ by duality. As the element u=∏_ℓ|(N_Σ/N)E_ℓ( F)/E'_ℓ( F) belongs to ^×, we may take ξ_F_Σ^+= u·ε^t(ξ^+_F) (resp.ξ_F_Σ^-=ε^t(ξ^-_F)) as a basis of (^+_Σ)_λ^*_F_Σ (resp. of (^-_Σ)_λ^*_F_Σ). Letting (·,·)_N, (resp. (·,·)_N_Σ,) be Ohta's pairing in tame level N (resp. tame level N_Σ), we therefore obtain L_p( F_Σ)= (u·ε^t(ξ^+_F),ε^t(ξ^-_F))_N_Σ,=u· ((ε∘ε^t)(ξ^+_F),ξ^-_F)_N,= L_p( F)∏_ℓ|(N_Σ/N)E_ℓ( F), as wanted. An easy adaptation of the proof of Proposition <ref> (arguing as in the proof of <cit.>) yields the following proposition. Assume F∈ S(N,) is a twist-minimal primitive Hida family, and let F_ψ be the primitive twist of F by a Dirichlet character ψ of conductor M coprime with p. Then we can make a choice of canonical periods for F and F_ψ such that L_p( F_ψ)= L_p( F) ∏_ℓ E_ℓ( F), where ℓ runs over the set of primes dividing M such that ψ_(ℓ)≠1,θ^-1_(ℓ). In the next theorem, we let vary F through the set of Hida families with fixed residual representation ρ which are minimally ramified outside a fixed finite set Σ. Assume ρ satisfies (<ref>). There exists a regular element L_Σ(ρ)∈_Σ such that, for all primitive Hida families F with residual representation ρ which are minimally ramified outside Σ, we have λ_F_Σ(L_Σ(ρ))=L_p( F)∏_ℓ|(N_Σ/N)E_ℓ( F) up to a unit in , where λ_F_Σ_Σ→ is the system of eigenvalues associated with F_Σ. We simply define L_Σ(ρ) as the adjoint p-adic L-function of Definition <ref> for _Σ and relative to some choice of canonical periods. Any primitive Hida family F of level N with ρ_F≃ρ which is minimally ramified outside Σ satisfies {ℓ|(N/N_0)}⊂Σ, so Theorem <ref> and Proposition <ref> apply to F_Σ, yielding λ_F_Σ(L_Σ(ρ))=L_p( F_Σ)=L_p( F)∏_ℓ|(N_Σ/N)E_ℓ( F) up to units in . 
The fact that L_Σ(ρ) is regular is clear, as L_p( F_Σ) does not identically vanish and _Σ is reduced.§.§ Selmer groupsLet F(q)=∑_n≥ 1a_n(F)q^n∈ S(N,) be a twist-minimal primitive Hida family of tame level N and character θ, and let V_F be the free -module of rank 2 on which ρ_F acts. We write ρ for ρ_F, that is, ρ=ρ_F ⊗_ k, where k=/_ is the residue field of the local Λ-algebra .Recall that V_F is ordinary in the sense that there exists an exact sequence of free [G_]-modules0 V_F^+ V_F V_F^- 0,where G_ acts on V_F^- via the unramified character _a_p(F) sending σ_p on a_p(F).On the traceless adjoint V_=^0(V_F,V_F) of V_F, there is a three-step G_-filtration0 ⊂ V_^+ ⊂ V_^†⊂ V_,where V_^+=(V_F^-,V_F^+) is the subspace of “upper-triangular” nilpotent endomorphisms and V_^† consists of the space of endomorphisms in V_ fixing V_F^+. We put V_^-=V_/V_^+. Let M^∨=_(M,/) for any profinite -module M and let Σ be a set of finite places ofnot containing p. We define Selmer groups à la Greenberg, using the discrete modules D_=V_⊗_^∨ and D^±_=V^±_⊗_^∨, by letting^Σ( F)= [^1(,D_) →∏_ℓ∉Σ∪{p}^1(I_ℓ,D_) ×^1(I_p,D^-_)], and ( F)=^∅( F). Standard arguments show that ^Σ( F)^∨ is a finitely generated -module. Assume (<ref>) holds for ρ. Then ^Σ( F)^∨ is a finitely generated torsion -module. If Σ contains all the places dividing N, then _^Σ( F)^∨=L_p( F_Σ)·.We may assume in the the first claim that Σ contains the primes dividing N. As L_p( F_Σ) is not identically 0 (it does not vanish at any ν∈() such that F_ν=F_ν^∘ by Thm. <ref>), the torsionness of ^Σ( F)^∨ will follow from the second claim which we now prove.Let =W(k), Λ_=[[U_1]] and let _Σ be the local component associated with ρ and Σ as in Theorem <ref>. Then F_Σ gives rise to a homomorphism of Λ_-algebras λ_Σ_Σ→. Consider the following deformation problem. Let Φ be the functor classifying p-ordinary lifts of ρ unramified outside S:=Σ∪{p} modulo strict equivalence.More precisely, let _S⊂ be the maximal extension ofunramified outside S and let ρ^± be the k-valued G_-representation acting on V_F^±⊗_ k. For any local complete noetherian -algebra A with residue field k, Φ(A) consists of classes [ρ]_∼, where ρ G_,S=(_S/) →_2(A) is such that ρ⊗_A k≃ρ and such that there exists a G_-equivariant exact sequence of free A-modules 0 →ρ^+ →ρ→ρ^-→ 0 with ρ^±⊗_A k≃ρ^± and ρ^- unramified. The equivalence relation ∼ here means that ρ∼ρ' if ρ' can be obtained from ρ by conjugation by a matrix in (_2(A)→_2(k)). Since ρ is absolutely irreducible, Φ is representable by a universal pair (ρ_Σ,R_Σ), [ρ_Σ]_∼∈Φ(R_Σ) by Mazur's theory <cit.>.Under our running hypotheses on ρ, by the work of Wiles, Taylor-Wiles and Diamond <cit.> (see also <cit.>), we then have an isomorphism R_Σ≃_Σ.In fact, Λ_ can be interpreted as the universal deformation ring associated with (ρ), so R_Σ comes equipped with a structure of Λ_-algebra, and (<ref>) is an isomorphism of Λ_-algebras. A deep consequence of (<ref>) is that _Σ is a local complete intersection Λ_-algebra. This provides us, by a theorem of Tate <cit.>, with an equality _ C_0(λ_Σ)=_ C_1(λ_Σ),where C_1(λ_Σ)=Ω__Σ/Λ_⊗__Σ,λ_Σ (and the symbol Ω here refers to continuous Kähler differentials). In light of Proposition <ref>, it is enough to show that C_1(λ_Σ)^∨≃^Σ( F) as -modules, which is a standard fact from Mazur's theory (see e.g. <cit.> and <cit.>), but we explain the main lines for sake of completeness. 
We have:C_1(λ_Σ)^∨=_(Ω__Σ/Λ_⊗__Σ,λ_Σ,/) ≃__Σ(Ω__Σ/Λ_,^∨) ≃_Λ_(_Σ,^∨),the last term referring to the module of Λ_-linear continuous derivations (and ^∨ being seen as a discrete _Σ-module via λ_Σ). For n≥ 1, let A_n=(/_^n)^∨ which we see as a _Σ-module, and note that _Λ_(_Σ,^∨)=∪_n≥ 1_Λ_(_Σ,A_n). The free -module [A_n] generated by elements of A_n also has a structure of _Σ-algebra if we let t· a=λ_Σ(t)a and a· b=0 for all t∈_Σ and a,b∈ A_n. There is a natural injective map _Λ_(_Σ,A_n) ↪_Λ_(_Σ,[A_n])sending a derivation _Σ→ A_n to ϕ_=λ_Σ+ι∘, where ι A_n ↪[A_n] is the natural injection. Further, the map (<ref>) has image _n:={ϕϕ≡λ_Σ A_n}⊆_Λ_(_Σ,[A_n]). By the universal property of _Σ, it holds that_n={[ρ]_∼∈Φ([A_n]):ρ≡ρ_FA_n}.Now, given a representative ρ of a class [ρ]_∼∈Φ([A_n]), the condition ρ≡ρ_F mod A_n shows that the map c G_,S→(V_F)⊗_ A_n defined by the relation ρ=ρ_F·(1+c) is a cocycle in ^1(_S/,V_⊗_A_n). Moreover, c is replaced by a cohomologous cocycle if ρ is replaced by a representation strict-equivalent to it. Therefore, the map_n →^1(_S/,V_⊗_A_n)induced by ρ↦ c_ρ=ρ·ρ_F^-1-1 is well-defined and injective, and its image consists of classes of cocycles c such that ρ=ρ_F·(1+c) is p-ordinary. We check that this condition on c_ρ amounts to c_ρ(I_p)⊆ V_^+⊗ A_n, that is, c_ρ(I_p) consists of upper-nilpotent matrices in an ordinary basis for ρ.If ρ is p-ordinary, then ρ_|I_p and ρ_F|I_p look like ([ * *; 0 1 ]) in their common ordinary basis, and hence, it is clear that c_ρ(I_p)⊆([ * *; 0 0 ])∩ (V_⊗ A_n)=V^+_⊗ A_n. Conversely, we obtain a p-ordinary basis for ρ by tensoring that of ρ_F by [A_n] over . This proves that _n≃[^1(_S/,V_⊗_ (/_^n)^∨) →^1(I_p,V_^-⊗_(/_^n)^∨))]for all n, and by taking the direct limit over n≥ 1, one sees that C_1(λ_Σ)^∨≃_n_n≃^Σ( F) as claimed. The aim of the rest of the section is to relax the assumption on Σ in Thm. <ref>. We introduce more notations and we let, for any prime ℓ, ^1_(_ℓ,D_)={[ [^1(_ℓ,D_) →^1(I_ℓ,D_)]; [^1(_p,D_) →^1(I_p,D^-_)] ].and we put ^1_(_ℓ,D_)=^1(_ℓ,D_)^1_(_ℓ,D_).In particular, for any finite sets of primes Θ⊆Σ not containing p, we have an exact sequence0^Θ( F)^Σ( F)∏_ℓ∈Σ-Θ^1_(_ℓ,D_). Let ℓ≠ p be a rational prime. Then ^1_(_ℓ,D_) is of -cotorsion. Moreover, _^1_(_ℓ,D_)^∨=E_ℓ( F)·, where E_ℓ( F) is as in (<ref>), unless ℓ belongs to the set Σ_e defined in (<ref>), in which case _^1_(_ℓ,D_)^∨=(1+ℓ)^-1·. Let G_ℓ=G__ℓ and let σ_ℓ∈ G_ℓ/I_ℓ be the arithmetic Frobenius. By the inflation-restriction exact sequence, ^1_(_ℓ,D_)≃^1(I_ℓ,D_)^G_ℓ/I_ℓ as -modules. Let ψ G_ℓ→^× be a continuous character and let D=^∨(ψ) be the cofree -module of rank one endowed with a G_ℓ-action given by ψ. We claim that :=^1(I_ℓ,D)^G_ℓ/I_ℓ={[ (/(1-ψ(σ_ℓ)ℓ^-1))^∨; 0 ]. Recall that I_ℓ fits into an exact sequence of G_ℓ/I_ℓ-modules 0 J_ℓI_ℓ (1) 0 with J_ℓ of prime-to-p order. This property of J_ℓ shows that the inflation map induces an isomorphism ^1(I_ℓ/J_ℓ,D^J_ℓ)≃^1(I_ℓ,D), and moreover D^J_ℓ is -divisible. Since D is -cofree of corank 1, either D^J_ℓ=0 or D^J_ℓ=D holds. In the first case, we then have =0 and ψ is ramified, so we may assume D^J_ℓ=D. This means that ψ_|I_ℓ factors through I_ℓ/J_ℓ, which is procyclic, so ^1(I_ℓ/J_ℓ,D^J_ℓ)≃ D/(t-1)D, where t=ψ(g) for a topological generator g of I_ℓ/J_ℓ. Of course, t≠ 1 if ψ is ramified, in which case ↪ D/(t-1)D=0 as D is divisible. In the unramified case, one has =(I_ℓ,D)^G_ℓ/I_ℓ≃((1),D)^G_ℓ/I_ℓ= D(-1)^G_ℓ/I_ℓ = D[1-ψ(σ_ℓ)ℓ^-1] =(/(1-ψ(σ_ℓ)ℓ^-1))^∨, as claimed. Now we turn back to the proof of the lemma. 
If ℓ∤ N, then D_≃^∨(χ_1χ_2^-1)⊕^∨⊕^∨(χ_2χ_1^-1) as [G_ℓ]-modules, where χ_1,χ_2 are the unramified characters such that 1-a_ℓ(F)X+θ_(ℓ)ℓ X^2=(1-χ_1(σ_ℓ)X)(1-χ_2(σ_ℓ)X). Up to a unit of , E_ℓ( F) is equal to (1-χ_1χ_2^-1(σ_ℓ)ℓ^-1)(1-ℓ^-1)(1-χ_2χ_1^-1(σ_ℓ)ℓ^-1), so the first part of the proof shows that _^1_(_ℓ,D_)^∨=E_ℓ( F)·, as wanted. In the case where every classical specialization of F is a ramified principal series at ℓ, an isomorphism similar to (<ref>), with χ_1 ramified and χ_2 unramified, holds. In particular, both χ_1χ_2^-1 and χ_2χ_1^-1 are ramified, so _^1_(_ℓ,D_)^∨=(1-ℓ^-1)·=E_ℓ( F)·. Assume now that F is special at ℓ. By <cit.>, one has ρ_F|G_ℓ∼[ ψ *; 0 1 ], where ψ is the unramified character of G_ℓ sending σ_ℓ to ℓ and * is nontrivial on I_ℓ. In particular, ρ_F(J_ℓ)={1} and ρ_F(I_ℓ)=[ 1 ·a; 0 1 ] for some a∈∖{0}. Let g∈ I_ℓ be such that ρ_F(g)=[ 1 a; 0 1 ] and consider the adjoint action of g on V_. In the basis {e_1,e_2,e_3}={[ 0 1; 0 0 ],[ 1 0; 0 -1 ],[ 0 0; 1 0 ]}, the matrix of ρ_F(g) is M=[ 1 -2a -a^2; 0 1 a; 0 0 1 ]. Hence, ^1(I_ℓ,D_)=(D_)_I_ℓ=D_/(M-𝕀)D_ =(^∨.e_1 ⊕^∨.e_2 ⊕^∨.e_3)/(a^∨.e_1⊕ a^∨.e_2⊕ 0.e_3) =^∨.e_3, as M-𝕀 has the same image as that of ([ 0 a 0; 0 0 a; 0 0 0 ]) as an endomorphism of V_. Finally, the action of ρ_F on D_/(^∨.e_1 ⊕^∨.e_2) is given by ψ^-1, so we obtain ^1(I_ℓ,D_)^G_ℓ/I_ℓ= ^1(I_ℓ,^∨(ψ^-1))^G_ℓ/I_ℓ=(/(1-ℓ^-2))^∨=(/E_ℓ( F))^∨. Consider now the last case where every specialization of F is supercuspidal at ℓ. First notice that, as D_^J_ℓ≃ (D_)_J_ℓ canonically, ^1(I_ℓ,D_)≃^1(I_ℓ/J_ℓ,D_^J_ℓ)≃ (D_)_I_ℓ, which is -divisible. Assume now ℓ∉Σ_e. For any ν∈(), the local L-factor at ℓ of the adjoint lift of F^∘_ν is 1 by <cit.>. Thus, (V_)_I_ℓ=0 by the local Langlands correspondence and Nakayama's lemma. It follows that (D_)_I_ℓ is cotorsion, hence trivial, and ^1_/(_ℓ,D_) has characteristic ideal generated by E_ℓ( F)=1. Assume finally ℓ∈Σ_e and let G_ℓ^2=(_ℓ/_ℓ^2). Then ρ_F|G_ℓ≃_G_ℓ^2^G_ℓχ for some ramified character χ of G_ℓ^2. In particular, V_|G_ℓ≃_G_ℓ^2^G_ℓχ/χ' ⊕η, where χ' is the conjugate of χ under (_ℓ^2/_ℓ) and η: (_ℓ^2/_ℓ)≃{±1} is the unramified quadratic character. Note that χ/χ' is ramified, as every classical specialization of χ/χ' is. Using that I_ℓ⊂ G_ℓ^2, we obtain ^1_/(_ℓ,D_)=^1_/(_ℓ,^∨(η)), hence _^1_/(_ℓ,D_)=(1-η(σ_ℓ)ℓ^-1)·=(1+ℓ^-1)·.

Let S=Σ∪{p,∞} and let _S be the maximal extension of unramified outside S. The localization map ϕ: ^1(_S/,D_) →∏_ℓ∈Σ∪{p}^1_(_ℓ,D_) is surjective.

Without loss of generality, we may assume that Σ contains all the places dividing N. We apply Greenberg's result <cit.>, which proves the surjectivity of ϕ under certain hypotheses that we now check. By assumption, ρ_F is residually irreducible when restricted to G_(μ_p), so V_=D_[_] has trivial G_(μ_p)-invariants by Schur's lemma. Since V_ is self-dual, (V_)_G_(μ_p)=0, and in particular, V_ has no quotient isomorphic to μ_p. Moreover, being an integrally closed domain which is finite over Λ, it is reflexive as a Λ-module. But Λ has Krull dimension 2, so and T_ are Λ-free. This checks the validity of (b) of loc. cit. We now check the weak Leopoldt condition (LEO) and the cotorsionness of ϕ (CRK). For (LEO), it suffices to prove that ^2(_S/,D_) is of -cotorsion. Since ρ_F is odd, _((D_)^G_)=1, and the computation of global Euler characteristics (<cit.>) yields _(^1(_S/,D_))=_(^2(_S/,D_))+2. Note that ker ϕ=( F) is cotorsion by Thm. <ref>. Hence, if we let C be the codomain of ϕ, then the equality _(C)= 2, which we show now, will imply (LEO) and (CRK).
By Proposition <ref>, we have to show that _(^1_(,D_))=2. Recall the filtration (<ref>), which comes from the ordinary filtration on V_F. Letting χ_1,χ_2: G_→^×, with χ_2 unramified, be such that ρ_F|G_ is an extension of χ_2 by χ_1, we have short exact sequences of [G_]-modules 0 →^∨(χ_1χ_2^-1) → D_ → D_^- → 0, and 0 →^∨ → D_^- →^∨(χ_2χ_1^-1) → 0. From the explicit description of (ρ_F), one sees that (χ_1χ_2^-1) is not isomorphic to (n) for any n∈. Thus, the cokernel of the map ^1(,D_) →^1(,D_^-) is of cotorsion, as ^2(,^∨(χ_1χ_2^-1))≃^∨(χ_2χ_1^-1)(1)^G_ is itself cotorsion. This shows that ^1_(,D_) has the same corank as (^1(,D_^-)→^1(I_p,D^-_))=^1(I_p,D^-_)^G_/I_p. In particular, _(^1_(,D_))=_(^1(_p,D^-_))-_(^1_(G_,D^-_)), where ^1_(G_,D^-_)=^1(G_/I_p,(D^-_)^I_p). Using the second exact sequence, one finds that ^1_(G_,D^-_) has the same corank as ^1(G_/I_p,^∨), which is 1, whereas the Euler–Poincaré formula yields _(^1(_p,D^-_)) =_((D^-_)^G_)+_(^2(,D^-_))+2 =1+0+2=3. This proves that _(^1_(,D_))=2, and that ϕ is surjective.

Assume ρ satisfies (<ref>) and that F is twist-minimal. Then the following equality of -ideals holds: _( F)^∨ = ∏_ℓ∈Σ_e(1+ℓ^-1)^-1L_p( F)·.

Choose a finite set Σ containing the primes dividing N (but not p), so Σ_e⊆Σ. Note that ∏_ℓ∈ΣE_ℓ( F)=∏_ℓ|(N_Σ/N)E_ℓ( F) by Lemma <ref> (1). Proposition <ref> provides us with an exact sequence 0 →( F) →^Σ( F) →∏_ℓ∈Σ^1_(_ℓ,D_) → 0. The multiplicativity of characteristic ideals in exact sequences, together with Proposition <ref>, Theorem <ref> and Proposition <ref>, then yields: _( F)^∨ = _^Σ( F)^∨·∏_ℓ∈Σ_e(1+ℓ^-1)^-1∏_ℓ∈ΣE_ℓ( F)^-1 = L_p( F_Σ)·∏_ℓ∈Σ_e(1+ℓ^-1)^-1∏_ℓ∈ΣE_ℓ( F)^-1· =∏_ℓ∈Σ_e(1+ℓ^-1)^-1L_p( F)·, as wanted.

§ CONGRUENCE MODULES

We recall some standard definitions and results on congruence modules in a slightly different setting than in <cit.>, <cit.>, or in <cit.>. Let Λ be a local domain, and let be a commutative ring containing Λ. We further assume that is a finite flat Λ-algebra. We put K=(Λ) and we write M_K=M ⊗_Λ K for any Λ-module M. Assume that we are given
* M,N two free -modules of rank 1, and
* λ: ↠Λ a surjective Λ-homomorphism.
Assume further that we have a decomposition as K-algebras of _K as K × X, where the projection of _K onto the first factor is induced by λ. We let e_1,e_2 be the corresponding idempotents of _K, and we let M^λ=e_1 M ⊂ e_1 M_K ⊂ M_K, M_λ=M ∩ e_1 M_K ⊂ e_1 M_K ⊂ M_K, and similarly for N. Note that M_λ⊆ M^λ, as m=e_1· m ∈ M^λ for any m∈ M_λ. We further define the congruence module associated to M and λ as C_0^M(λ)=M^λ/M_λ. The congruence ideal associated to M and λ is the annihilator of C^M_0(λ) as a Λ-module. Letting C_0(λ):=C_0^(λ), we then have C_0^M(λ) ≃ C_0(λ), as M≃ as -modules.

M_λ is Λ-free of rank 1, and it is a direct summand of M.

Fix an isomorphism M≃. We see as a sub-Λ-module of _K=e_1_K⊕ e_2_K. The projection _K ↠ e_1 _K is λ⊗ K, so we have _λ=(→ e_1 _K)=(λ)=Λ as Λ-modules. Hence, M_λ is Λ-free of rank 1. Since λ_|Λ=𝕀, we have = (λ) ⊕(λ), i.e., =_λ⊕ (∩ e_2_K) inside _K. Therefore, we obtain M=M_λ⊕ (M∩ e_2 M_K).

Suppose we are given a Λ-linear pairing (·,·): M× N →Λ which is perfect (i.e., it induces isomorphisms N ≃_Λ(M,Λ) and M ≃_Λ(N,Λ)) and such that ∀ t ∈, ∀ (m,n)∈ M× N, (tm,n)=(m,tn). Note that the orthogonal of e_1M_K (resp. of e_2M_K) is e_2N_K (resp. e_1N_K). As a last piece of notation, we define, for any M'⊆ M_K, (M')^*:={n∈ N_K : ∀ m∈ M', (m,n)∈Λ}, and we define similarly (N')^* for N'⊆ N_K.

We have (M_λ)^*=N^λ⊕ e_2 N_K, and (M^λ)^*=N_λ⊕ e_2 N_K. Moreover, (·,·) induces an isomorphism M^λ≃_Λ(N_λ,Λ) (and also N^λ≃_Λ(M_λ,Λ)).
We first check that (M_λ)^*⊇ N^λ⊕ e_2 N_K. Take n_1∈ N, n_2∈ N_K and let n=e_1n_1+e_2n_2. For any m∈ M_λ, we have m=e_1m, so (m,e_2n_2)=(m,e_1e_2n_2)=0 by (<ref>). Hence, (m,n)= (m,e_1n_1) = (e_1m,n_1)=(m,n_1)∈Λ, as m∈ M and n_1∈ N. This shows that n∈ (M_λ)^*. We now prove (M_λ)^*⊆ N^λ⊕ e_2 N_K and we take n∈ (M_λ)^*. The map m ↦ (m,n) defines a Λ-linear form M_λ→Λ, which can be extended to a linear form M →Λ by Lemma <ref>. By perfectness of (·,·), there exists p∈ N such that (·,n)=(·,p)_|M_λ. Moreover, as Λ≃ M_λ⊂ e_1 M_K≃ K by Lemma <ref>, we have M_λ⊗_Λ K=e_1 M_K. In particular, n-p is orthogonal to e_1M_K by linearity of the pairing, that is, n-p∈ e_2N_K. Therefore, n∈ e_1p+e_2N_K ⊂ N^λ⊕ e_2N_K, as wanted. Since e_1N_λ=N_λ⊂ N, we have (as sets) (M^λ,N_λ⊕ e_2 N_K)=(M^λ,N_λ)=(e_1M,N_λ)=(M,N_λ)⊂Λ, which implies (M^λ)^*⊇ N_λ⊕ e_2 N_K. We check the reverse inclusion and we let n∈ (M^λ)^*. To prove that e_1n∈ N, we remark that, for all m∈ M, we have (m,e_1n)=(e_1m,n)∈Λ, so the map m↦ (m,e_1n) belongs to _Λ(M,Λ). By the perfectness of (·,·), this means that e_1n∈ N, as claimed. It remains to prove that m↦ (n↦ (m,n)) yields an isomorphism M^λ≃_Λ(N_λ,Λ). We already know that N_λ⊗_Λ K=e_1N_K, and it is clear from the above arguments that (·,·) restricts to a perfect pairing e_1M_K × e_1N_K → K. We thus only have to check that the inverse image of _Λ(N_λ,Λ) under e_1M_K ≃_Λ(e_1N_K,K), say M', is equal to M^λ. But we have M'=(N_λ)^*∩ e_1M_K by definition, so we conclude that M'=(M^λ⊕ e_2M_K)∩ e_1M_K=M^λ.

Let m_0 and n_0 be generators of M_λ and N_λ over Λ respectively (see Lemma <ref>). The Λ-linear map φ: M^λ→Λ, m↦ (m,n_0), is an isomorphism. Consequently, C_0(λ)≃ C_0^M(λ) ≃Λ/(m_0,n_0)Λ.

The proof of Proposition <ref> shows that (·,·) restricts to a non-degenerate pairing M^λ× N_λ→Λ, so φ is well-defined and injective. Again by Proposition <ref>, the element of _Λ(N_λ,Λ) sending n_0 to 1 is of the form (m,·) for some m∈ M^λ, and this element thus satisfies φ(m)=1. This shows that φ is an isomorphism. It is then clear that φ induces an isomorphism M^λ/M_λ≃Λ/φ(m_0)Λ=Λ/(m_0,n_0)Λ.
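To make the definitions of this section concrete, here is a standard toy example of a congruence module (our own illustration, not taken from the text). Take Λ=ℤ_p, an integer n≥ 1, and the local, finite flat Λ-algebra

𝒯={(a,b)∈ℤ_p×ℤ_p : a≡ b mod p^n},    λ=pr_1: 𝒯↠Λ.

Then K=ℚ_p, 𝒯_K=ℚ_p×ℚ_p with idempotent e_1=(1,0), and one computes

𝒯^λ=e_1𝒯=ℤ_p,    𝒯_λ=𝒯∩ e_1𝒯_K=p^nℤ_p,

so that C_0(λ)=𝒯^λ/𝒯_λ≃ℤ_p/p^nℤ_p and the congruence ideal is (p^n), measuring the depth of the congruence built into 𝒯.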
http://arxiv.org/abs/2312.16706v1
{ "authors": [ "Alexandre Maksoud" ], "categories": [ "math.NT", "11R27" ], "primary_category": "math.NT", "published": "20231227200732", "title": "A canonical generator for congruence ideals of Hida families" }
Capacity Enhancement of n-GHZ State Super-dense Coding Channels by Purification and Quantum Neural Network

Rong Zhang, Xiaoguang Chen, Yaoyao Wang and Bin Lu
Department of Communications Science and Engineering, School of Information Science and Technology, Fudan University, Shanghai, 200433, China (e-mail: [email protected])
===================================================================================================================================================================================================================================================================================

A super-dense coding protocol based on the n-GHZ state is proposed to enable the two communicating parties to choose the number of transmitted code words according to their demand and to adapt the quantum super-dense coding protocol to scenarios with multiple transmitted code words. A method is proposed that combines entanglement purification and a Quantum Neural Network (QNN) to improve the channel capacity of super-dense coding. By simulating a realistic quantum communication noise environment on the Cirq platform, the effect of purification and the QNN on the enhancement of fidelity and channel capacity in super-dense coding communication scenarios of different dimensions, under unitary and non-unitary noise conditions, is analyzed. The experimental results show that the channel capacity of super-dense coding is improved to different degrees when purification and the QNN are applied separately, that combining purification and the QNN has a superimposed effect on the channel capacity enhancement of super-dense coding, and that the enhancement remains significant across different dimensions.

quantum communication, quantum neural network, entanglement purification, quantum channel capacity, super-dense coding

§ INTRODUCTION

Quantum communication is a communication method based on quantum mechanics, which makes use of the special properties of quantum states, such as quantum superposition, quantum entanglement, the uncertainty principle and unclonability<cit.><cit.>, to ensure the security and reliability of communication. Compared with traditional classical communication, quantum communication exploits the special properties of quantum states, which means that the information in the communication process cannot be eavesdropped on or tampered with, thus ensuring the security of communication. Secondly, quantum communication takes advantage of the redundancy of quantum states, i.e. multiple quantum states can transmit the same piece of information at the same time, thus improving the reliability of communication.

Quantum dense coding was first proposed by Bennett and Wiesner <cit.> in 1992 and was experimentally verified by Mattle and collaborators <cit.> in 1996; it is now a widely used quantum communication protocol for QKD<cit.>. Quantum dense coding based on the Bell state can transmit 2 bits of classical information while transmitting only 1 qubit, which greatly increases the capacity of the communication channel. Quantum dense coding relies on entanglement distribution, and the current methods to improve the efficiency of quantum entanglement distribution mainly include quantum entanglement concentration<cit.><cit.>, quantum error correction coding, and entanglement purification<cit.>.
Among them, entanglement purification improves the fidelity of one pair of entangled states by sacrificing (measuring and discarding) another pair, thereby improving the entanglement of the shared quantum states after entanglement distribution; it is the main method studied in this paper.

Kerstin Beer and her team proposed in 2020 a quantum neural network (QNN) based on the Verdon training method<cit.>, which creates a quantum feed-forward neural network by quantum simulation of classical neurons. This quantum neural network is trained to obtain a unitary operator, which can be used as a component of a quantum circuit to deal with the effects of specific noise.

This paper is organized as follows. In Section <ref>, the theoretical preliminaries are introduced. In Section <ref>, we propose a super-dense coding protocol based on the n-GHZ state. In Section <ref>, the algorithm for capacity enhancement of n-GHZ-state super-dense coding channels based on purification and the QNN is proposed, together with simulations and an analysis of the results. Section <ref> concludes the paper.

§ PRELIMINARIES

§.§ Quantum Gate

Quantum gates are a fundamental element of quantum computing, and their role in quantum circuits is somewhat similar to that of logic gates in classical computers. In this paper, we use the σ_x, σ_y, σ_z and CNOT gates. Their matrix expressions are as follows:

σ_x=[ 0 1; 1 0 ], σ_y=[ 0 -i; i 0 ], σ_z=[ 1 0; 0 -1 ]

CNOT=[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]

The quantum circuit diagram symbol for the CNOT gate is shown in Fig. <ref>.

§.§ GHZ state

A GHZ (Greenberger-Horne-Zeilinger) state is a quantum state in which three qubits are fully entangled, or an n-GHZ state if it contains n (n≥ 3) qubits. The GHZ states are represented as follows:

|Ψ_1⟩=1/√(2)( |000⟩+ |111⟩), |Ψ_2⟩=1/√(2)( |000⟩- |111⟩)
|Ψ_3⟩=1/√(2)( |001⟩+ |110⟩), |Ψ_4⟩=1/√(2)( |001⟩- |110⟩)
|Ψ_5⟩=1/√(2)( |010⟩+ |101⟩), |Ψ_6⟩=1/√(2)( |010⟩- |101⟩)
|Ψ_7⟩=1/√(2)( |011⟩+ |100⟩), |Ψ_8⟩=1/√(2)( |011⟩- |100⟩)

The n-GHZ state is expressed as follows:

{ |Ψ_1 ⟩ =1/√(2)( | 00… 0⟩^n+ | 11… 1⟩^n)
|Ψ_2 ⟩ =1/√(2)( | 00… 0⟩^n- | 11… 1⟩^n)
|Ψ_3 ⟩ =1/√(2)( | 00… 1⟩^n+ | 11… 0⟩^n)
|Ψ_4 ⟩ =1/√(2)( | 00… 1⟩^n- | 11… 0⟩^n)
…
|Ψ_2^n-1⟩ =1/√(2)( | 01… 1⟩^n+ | 10… 0⟩^n)
|Ψ_2^n⟩ =1/√(2)( | 01… 1⟩^n- | 10… 0⟩^n) .

§.§ Fidelity

Fidelity is an important metric for assessing the accuracy of quantum computing results, which are influenced by noise; it measures how similar two quantum states are to each other. Before introducing fidelity, we introduce the concept of the density operator. For an arbitrary quantum state | ϕ⟩, the density operator is defined as:

ρ =∑_i=1^N p_i | φ _i⟩⟨φ _i|

where N∈𝑁_+, N>2, p_i is the probability that the quantum state | φ _i⟩ occurs in | ϕ⟩, and ∑_i=1^Np_i =1. The fidelity F can then be defined as

F=√(⟨ψ| ρ | ψ⟩)

where ρ is the density operator of the resulting quantum state after the measurement and | ψ⟩ is the quantum state that should be obtained from the measurement in the ideal case. The closer the fidelity is to 1, the closer the measurement result is to the ideal one.

§.§ Super-dense Coding Protocol Based on GHZ state

The communication process of super-dense coding is shown in Fig. <ref>.
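Before stepping through the protocol, the preliminaries above can be made concrete with a minimal Cirq sketch (our own illustration, not code from the paper; Cirq is the platform the authors use for their simulations later). It prepares the GHZ state |Ψ_1⟩ defined above and evaluates the fidelity F under single-qubit depolarizing noise; the noise strength p=0.05 is an arbitrary example value.

import cirq
import numpy as np

# Prepare |Psi_1> = (|000> + |111>)/sqrt(2).
q = cirq.LineQubit.range(3)
ghz = cirq.Circuit([cirq.H(q[0]), cirq.CNOT(q[0], q[1]), cirq.CNOT(q[1], q[2])])
ideal = cirq.final_state_vector(ghz)              # ideal GHZ state vector

# Apply depolarizing noise after every moment and get the density matrix.
p = 0.05                                          # hypothetical noise strength
noisy = ghz.with_noise(cirq.depolarize(p))
rho = cirq.final_density_matrix(noisy)

# F = sqrt(<psi| rho |psi>) for a pure target state, as in the equation above.
F = np.sqrt(np.real(np.conj(ideal) @ rho @ ideal))
print(f"GHZ fidelity under depolarizing noise (p={p}): {F:.4f}")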
Assume that Alice and Bob share a pair of GHZ states in advance via entanglement distribution, | Ψ^+⟩ =1/√(2) ( | 000⟩_BAA + | 111⟩_BAA), where Bob holds the first qubit and Alice holds the second and third qubits. The super-dense coding protocol based on the GHZ state then proceeds as follows<cit.><cit.>.

Step 1: According to a previously agreed protocol, Alice combines the quantum gates { I,σ _x,iσ _y,σ _z} into a unitary operator U that encodes the classical bits of information to be transmitted, applies it to the two qubits she holds, and transmits them to Bob via the quantum channel.

Step 2: Bob jointly measures the qubits transmitted by Alice together with the qubit he holds, obtains Alice's encoded information, and translates it into classical information.

The specific coding protocols are shown in Table <ref>. Traditional communication methods can only encode one classical message into one binary bit, while quantum dense coding can encode multiple classical bits into one qubit, greatly improving the efficiency and speed of information transmission.

§.§ Entanglement Purification Based on GHZ State

Bennett et al. <cit.> first proposed entanglement purification in 1997 and developed a standard entanglement purification protocol. Later, Deutsch et al. improved entanglement purification<cit.> by using quantum privacy amplification, transforming phase-flip noise into bit-flip noise for subsequent processing and thereby improving purification performance. The entanglement purification method in this paper is developed mainly based on the work of Deutsch. A simplified circuit diagram of an entanglement purification process based on GHZ states is shown in Fig. <ref>. Assuming that the sender holds two pairs of entangled GHZ states, the sender sends half of each of the two pairs to the receiver via entanglement distribution. Next, each side performs a CNOT gate operation on the qubits it holds and measures the target-group qubits. If the qubits of the target group are the same, the qubits of the control group are kept and the qubits of the target group are discarded. If the qubits of the target group are different, then both groups are discarded. It is worth mentioning that, since a real channel contains not only bit-flip noise but also phase-flip noise, the quantum channel identification technique in <cit.> can be used to identify the type of noise first; phase-flip noise can then be converted to bit-flip noise by applying Hadamard gates before and after transmission.

An entanglement purification circuit dealing with n-dimensional entangled states is shown in Fig. <ref>, where A_i denotes the qubits of the control-group quantum states and B_j denotes the qubits of the target-group quantum states. The decision to retain the control group A is made by measuring the target group B. Carrying out only one round of entanglement purification wastes qubit resources without providing a satisfactory fidelity improvement; therefore, a way to improve the efficiency of entanglement purification while saving qubit resources is needed.

§.§ Quantum Neural Network

The basic neuron of the QNN is the quantum perceptron, which can be regarded as a unitary operator that processes an input quantum state to obtain an output quantum state. In the process of training the QNN, the network iterates continuously, and the final model obtained is also a unitary operator, which can be applied directly to the input data to be processed. The general structural diagram of the quantum neural network (QNN) used is shown in Fig.
<ref>. As can be seen from Fig. <ref>, the QNN structure is exactly adapted to the structure of the purification circuit. We can use the known ideal entanglement distribution results to improve the performance of the purification circuit by using the QNN to train a unitary operator that can express the effects of noise.

As shown in Fig. <ref>, both the input and output layers are of dimension n. The output of the n-dimensional purification circuit is used as the training-set input to the QNN. There are L hidden layers in between, where the unitary operator U^l of each layer is obtained by multiplying multiple quantum perceptrons in sequence, i.e.,

U^l=U_n+1^l… U_2^lU_1^l

Assuming that the output quantum state of the n-dimensional purification circuit is ρ _Ψ, and that after L hidden-layer iterations the output quantum state ρ _Ψ^' is obtained, we have

ρ _Ψ^'=tr_in,hid(U(ρ _Ψ⊗ | 0… 0⟩_ hid,out⟨0… 0| )U^† )

where U is the unitary operator circuit obtained from the training. The training set of the QNN is assumed to be { ( | Ψ _1^in⟩, | Ψ _1^out⟩),( | Ψ _2^in⟩, | Ψ _2^out⟩),…,( | Ψ _N^in⟩, | Ψ _N^out⟩) }, and the QNN uses fidelity as a loss function to measure how well the network is trained, i.e., for training input | Ψ _x^in⟩ (x=1,2,..., N), the output of the quantum neural network ρ_x^out should be similar to | Ψ _x^out⟩. The loss function C is calculated as follows:

C=1/N∑_x=1^N⟨Ψ _x^out| ρ _x^out | Ψ _x^out⟩

§.§ Quantum Channel Capacity

For a general quantum channel, consider an arbitrary quantum state | Ψ⟩ in Hilbert space with density operator ρ and eigenvalues λ_i, where i=1,2,..., N and N is the dimension of the Hilbert space. Corresponding to the average mutual information in the classical channel is the quantum coherent information I_coh(ρ, N). The classical capacity of a quantum channel is defined via the quantum mutual information, which measures the ability of the quantum channel to transmit classical information. The quantum capacity is defined via the quantum coherent information, which measures the ability of a quantum channel to transmit quantum information.

Assume that the input to the quantum channel is X_in={ |Ψ _1⟩, |Ψ _2⟩,… |Ψ _N⟩}; when the input is |Ψ _i⟩, the output quantum state ρ_i is obtained with probability π_i. The classical average mutual information I_c(ρ ,N) of the quantum channel can be derived from the Holevo bound<cit.>

I_c(ρ ,N)=S(∑_i=1^Nπ _iρ _i)-∑_i=1^Nπ _iS(ρ _i)

where S(·) is the von Neumann entropy<cit.>

S(ρ)=-Tr(ρlogρ)=-∑_i=1^Nλ_ilogλ_i

The classical capacity of a quantum channel is defined as

C=max_π _i,ρ _i I_c(ρ,N)

The quantum coherent information of a quantum channel<cit.> is defined as

I_coh=S(∑_i=1^Nπ _iρ _i)-S_e

where S_e can be seen as the amount of information exchanged with the environment as the system interacts with it <cit.><cit.><cit.>, and it characterizes the amount of quantum noise during the evolution of the system. The expression for S_e is as follows:

S_e=S(∑_i,j𝒩( | ψ_i⟩⟨ψ_j| )⊗ | ψ̄_i⟩⟨ψ̄_j|)

where 𝒩 denotes the channel noise, | ψ_i⟩=π_i^1/4 |Ψ _i⟩ is the eigenvector of the input quantum state density matrix, and | ψ̄_i⟩ is its complex conjugate vector. This gives the quantum capacity C_Q of the quantum channel as

C_Q=max_π _i,ρ _iI_coh

§ SUPER-DENSE CODING PROTOCOL BASED ON N-GHZ STATE

To transmit more quantum states, this paper proposes a super-dense coding protocol based on n-GHZ states.
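Before turning to the protocol details, the entropic quantities defined in the preceding subsection can be evaluated numerically. The sketch below is our own illustration (with hypothetical single-qubit inputs rather than the paper's n-GHZ ensembles, and an example depolarizing channel with p=0.2); it implements the von Neumann entropy S(ρ) and the Holevo quantity from the equations above in numpy.

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def holevo(pis, rhos):
    """Holevo quantity: S(sum_i pi_i rho_i) - sum_i pi_i S(rho_i)."""
    avg = sum(p * r for p, r in zip(pis, rhos))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(pis, rhos))

def depolarize(rho, p):
    """Single-qubit depolarizing channel with error probability p."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Example: equiprobable |0><0| and |1><1| through depolarizing noise, p = 0.2.
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
rhos = [depolarize(k @ k.T, 0.2) for k in (ket0, ket1)]
print(f"Holevo bound: {holevo([0.5, 0.5], rhos):.4f} bits")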
Assuming that Alice needs to transmit n bits of classical information, Alice first transmits one qubit of the n-GHZ state to Bob via entanglement distribution, and then encodes the classical information X=x_n-1 x_n-2… x_0 into the qubits she holds using the unitary operator U_sdc in (<ref>).

U_sdc={(1-x_0)[(1-x_n-1)I+x_n-1σ_x ]+x_0[(1-x_n-1)σ_z . .-ix_n-1σ_y] }⊗[(1-y_k )I+y_kσ_x]^⊗(n-2)

⌊X/2⌋ = y_n-2y_n-3… y_0

where ⌊·⌋ denotes the floor function and k=n-3,n-2,...,0. U_sdc is a unitary operator acting on n-1 qubits.

Alice transmits the encoded qubits through the quantum channel to Bob, who then performs a joint measurement of the qubits he holds together with the qubits transmitted by Alice to recover the classical information Alice wants to transmit.

§ EXPERIMENTAL SIMULATION AND ANALYSIS

In this paper, the input of the training set of the QNN is the shared quantum state after entanglement distribution in the actual noise environment, and the output is the ideal shared quantum state after entanglement distribution. Depolarization noise (unitary) and amplitude damping noise (non-unitary) are chosen as the channel noise to train the network, and the results are analyzed and compared.

The training-set input is generated by simulating the quantum circuit in Fig. <ref> on the Cirq platform provided by Google, which can simulate a realistic quantum noise environment. The channel capacity simulation results for super-dense coding with amplitude damping noise and depolarization noise are shown in Fig. <ref> and Fig. <ref>, where p is the noise error probability.

According to the simulation results, under both amplitude damping noise and depolarization noise conditions, the channel capacity decreases and then increases slightly with increasing noise probability p. However, for p>0.2, the channel capacity is less than 1/2 of the ideal channel capacity value. In contrast, when the quantum states are purified after the entanglement distribution and then transmitted by super-dense coding, the channel capacity improves as p decreases, although the improvement slows down, and the capacity is still low at higher p values. If the shared quantum state after entanglement distribution is processed using the trained QNN network operator U_Q, the channel capacity remains close to the ideal channel capacity value for noise probability p close to 1 under amplitude damping noise conditions, which is a great improvement over the situation where the channel capacity is degraded by noise; however, in the interval 0.2≤ p≤0.8, the channel capacity still drops to less than half of the ideal channel capacity value. For depolarization noise, the QNN improves the channel capacity in a different region from purification: purification has a more significant effect at lower p values, whereas the QNN effect appears at higher p values. If the purified quantum states are processed using U_Q, combining purification and the QNN, the enhancement effect on channel capacity is more significant, and the curves show the superposition of the enhancement effects of both purification and the QNN.

From Fig. <ref> and Fig.
<ref>, it can be seen that this scheme is not as effective in improving the capacity of the super-dense coding channel under depolarization noise as under amplitude damping noise, presumably because depolarization noise, as unitary noise, affects the fidelity of the shared quantum state after the entanglement distribution but does not disentangle it, whereas the essence of both entanglement purification and the QNN is to further improve the fidelity of the shared quantum state while maintaining its entanglement as much as possible; thus the effect of depolarization noise on channel capacity is due not only to fidelity loss but also to other factors.

The convergence of the loss function of the QNN training for different numbers of input qubits is shown in Fig. <ref>, and the training time of the QNN and the convergence values of the loss function under different noises are shown in Table <ref>, where C_a is the convergence value of the loss function under amplitude damping noise and C_d is that under depolarization noise. It can be seen that the convergence value of the loss function of the QNN drops sharply when the input qubit count reaches 5 under both noises, and the network is not trained well. As the number of input qubits increases, the convergence of the loss function gradually becomes slower, and when the number of qubits is greater than 5, the loss function tends to fail to converge because of gradient explosion. Depolarization noise, relative to amplitude damping noise, results in a lower convergence value of the final loss function, i.e., even less effective training.

§ CONCLUSIONS

In this paper, we derive a super-dense coding protocol based on the n-GHZ state from the dense coding protocol of the GHZ state. The n-GHZ-state super-dense coding protocol applies to the transmission of n+1 bits of classical information with n qubits in a two-party dense coding communication scenario. We also focus on using purification and the QNN to improve the fidelity of the shared quantum state after entanglement distribution. Based on the channel capacity calculation method for super-dense coding, it is reasoned that improving the fidelity of the shared quantum state after entanglement distribution can bring a certain improvement to the channel capacity of super-dense coding, and this conclusion is proved experimentally. The combination of purification and the QNN has a superposition effect on the fidelity improvement of the entanglement distribution and can therefore increase the channel capacity of super-dense coding to a certain extent under both unitary and non-unitary noise.

However, there are still some open problems in this paper. Firstly, a way to avoid the gradient explosion in QNN training for input qubit numbers greater than 5 has not yet been found, and further research on and improvement of the QNN network structure may be needed to adapt it to higher-dimensional training-set inputs. Secondly, the effect of the fidelity enhancement from entanglement purification on the capacity of super-dense coding channels cannot be verified in a real super-dense coding communication environment due to the lack of practical measurement devices.
http://arxiv.org/abs/2312.15892v1
{ "authors": [ "Rong Zhang", "Xiaoguang Chen", "Yaoyao Wang", "Bin Lu" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226054703", "title": "Capacity Enhancement of n-GHZ State Super-dense Coding Channels by Purification and Quantum Neural Network" }
Multimessenger Connection in NGC 4151, NGC 4945 and Circinus Galaxy
Murase et al.
1Department of Physics; Department of Astronomy & Astrophysics; Center for Multimessenger Astrophysics, Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA
2School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA
3Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, Kyoto 606-8502, Japan
4NASA Postdoctoral Program Fellow, NASA Goddard Space Flight Center, Greenbelt, MD, 20771, USA
5Frontier Research Institute for Interdisciplinary Sciences; Astronomical Institute, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
6Department of Physics and Astronomy, Clemson University, Clemson, SC 29634, USA
7Fakultät für Physik und Astronomie, Julius-Maximilians-Universität Würzburg, Würzburg D-97074, Germany

Recent observations of high-energy neutrinos by IceCube and γ rays by the Fermi Large Area Telescope (LAT) and the MAGIC telescope have suggested that neutrinos are produced in γ-ray opaque environments in the vicinity of supermassive black holes. In this work, we present 20 MeV – 1 TeV spectra of three Seyfert galaxies whose nuclei are predicted to be active in neutrinos, NGC 4151, NGC 4945 and the Circinus Galaxy, using 14.4 years of the Fermi LAT data. In particular, we find evidence of sub-GeV excess emission that can be attributed to γ rays from NGC 4945, as was also seen in NGC 1068. These spectral features are consistent with predictions of the magnetically-powered corona model, and we argue that NGC 4945 is among the brightest neutrino active galaxies detectable for KM3Net and Baikal-GVD. On the other hand, in contrast to other reported results, we do not detect γ rays from NGC 4151, which constrains neutrino emission from the accretion shock model. Future neutrino detectors such as IceCube-Gen2 and MeV γ-ray telescopes such as AMEGO-X will be crucial for discriminating among the theoretical models.

§ INTRODUCTION

Since the IceCube Collaboration announced the first evidence for high-energy cosmic neutrinos in 2013 <cit.>, the detection of many astrophysical TeV–PeV neutrinos has been reported. They are produced by hadronuclear (pp) and photohadronic (pγ) interactions, inevitably producing a similar amount of γ rays. These γ rays can be absorbed by the Breit-Wheeler process (γ+γ→ e^++e^-) and reprocessed to lower-energy γ rays, which can contribute to the cosmic γ-ray background measured by Fermi LAT <cit.>.
If all the neutrino sources were γ-ray transparent, γ rays accompanying the large all-sky neutrino flux at neutrino energies of E_ν∼1-10 TeV <cit.> would overshoot the observed diffuse isotropic γ-ray background. This necessitates the existence of hidden cosmic-ray (CR) accelerators that are opaque to GeV-TeV γ rays, and the vicinity of supermassive black holes (SMBHs) is among the most promising sites <cit.>. The existence of hidden neutrino sources has further been supported by the recent IceCube observation of neutrino signals from the nearby Seyfert galaxy, NGC 1068 <cit.>. The GeV-TeV γ-ray fluxes seen by LAT <cit.> and MAGIC upper limits <cit.> are much lower than the TeV neutrino flux measured with IceCube. The region has to be compact for the system to be opaque to GeV γ rays <cit.>, as expected in disk-corona models of active galactic nuclei (AGNs) <cit.>. This is the main consequence of the GeV-TeV γ-ray and TeV-PeV neutrino observations. It has been predicted that X-ray bright AGNs are promising targets for neutrino observations <cit.>, while TeV γ rays are cascaded down to sub-GeV energies, which will be ideal targets for MeV–GeV γ-ray telescopes.

In this work, we investigate multimessenger implications of three nearby Seyfert galaxies, NGC 4151, NGC 4945, and the Circinus Galaxy, using Fermi LAT data. We analyze 14.4 years of Fermi LAT data to obtain the γ-ray fluxes from these galaxies. Then, we compare the sub-GeV γ-ray data with CR-induced electromagnetic cascade emission and discuss the corresponding neutrino fluxes. We use Q_x=Q/10^x in cgs units and assume cosmological parameters with Ω_m=0.3, Ω_Λ=0.7, and h=0.7.

§ NEUTRINO AND Γ-RAY EMISSION FROM THE HEARTS OF AGNS

Multimessenger data from NGC 1068 suggest that high-energy neutrinos from Seyfert galaxies are likely to originate from compact regions with a size of R≲30-100 R_S<cit.>, and X-ray emitting coronae or possible shock regions, located in the vicinity of SMBHs, are the most plausible sites of the neutrino production <cit.>. In this work, we consider neutrinos and γ rays from such coronal regions with R=10-30 R_S. The spectral energy distribution (SED) of AGNs is modeled empirically <cit.>, based on the parameterization as a function of the SMBH mass M_ BH and the intrinsic X-ray luminosity L_X. We use M_ BH=1.0×10^7 M_⊙ and L_X=2.6×10^42  erg  s^-1 for NGC 4151 <cit.>, M_ BH=1.4×10^6 M_⊙ and L_X=3.3×10^42  erg  s^-1<cit.> for NGC 4945, and M_ BH=1.7×10^6 M_⊙ and L_X=10^42.5  erg  s^-1 for the Circinus Galaxy <cit.>, respectively. We checked that the adopted SED model <cit.> agrees with the observed SEDs of the AGNs considered in this work.

High-energy neutrinos are produced through meson production by pγ and pp interactions. We calculate neutrino and γ-ray emission, considering CR-induced cascades, with the AGN module of the Astrophysical Multimessenger Emission Simulator ( AMES) <cit.>. Details of our steady-state corona model are described in <cit.>, but we use the detailed cross sections of the photomeson production and Bethe-Heitler pair production processes as in <cit.>. The model predicts that TeV neutrinos must be accompanied by sub-GeV γ rays, predicting γ-ray and neutrino energy fluxes of E_γ F_E_γ∼ (0.1-1)E_ν F_E_ν within 1 order of magnitude. This is a consequence of the multimessenger connection, which is insensitive to model parameters.
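For scale, the coronal radii implied by these parameters follow directly from the Schwarzschild radius R_S=2GM_ BH/c^2; the short sketch below (our own illustration, in the cgs units used throughout this work) evaluates R_S and the adopted emission radii R=10-30 R_S for the three quoted SMBH masses.

import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33       # cgs units

masses = {"NGC 4151": 1.0e7, "NGC 4945": 1.4e6, "Circinus": 1.7e6}  # M_sun
for name, m in masses.items():
    R_S = 2 * G * m * M_sun / c**2               # Schwarzschild radius [cm]
    print(f"{name}: R_S = {R_S:.2e} cm, "
          f"R = {10 * R_S:.2e} - {30 * R_S:.2e} cm (10-30 R_S)")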
§.§ Magnetically-powered coronae

Magnetohydrodynamic (MHD) simulations of accretion flows revealed that the magnetorotational instability (MRI) and subsequent dynamos amplify the magnetic field and develop strong turbulence in accretion flows <cit.>. It is believed that magnetic dissipation in the accretion flows heats the plasma and forms a hot region called a corona. The observed X-ray emission with a power-law spectrum originates from the Comptonization of disk photons by thermal or bulk electrons. The MHD turbulence scatters CRs, and the scattered particles randomly change their energies and effectively gain energy from the turbulence. In the coronal region of AGNs, Alfvén and turbulent velocities are mildly relativistic, enabling the turbulence to accelerate CRs up to TeV-PeV energies. In this process, the resulting CR spectrum can be hard with a spectral index of s≲2 (for dN_ CR/dε_p∝ε_p^-s, where ε_p is the CR proton energy in the source frame), and the maximum CR energy typically lies in the range of ∼0.1-1 PeV with a reasonable acceleration efficiency (η_ tur∼10-100). The stochastic acceleration by large-scale turbulence has been studied by test-particle simulations with MHD<cit.> and large-scale particle-in-cell (PIC) simulations <cit.>. Alternatively, magnetic reconnections accelerate non-thermal particles efficiently, especially in the relativistic regime (σ_ mag=B^2/(8π mc^2)>1) <cit.>. PIC simulations have demonstrated that the development of MRI turbulence triggers magnetic reconnections, leading to non-thermal particle production <cit.>. <cit.> suggested such a magnetic reconnection model to explain the neutrino data of NGC 1068 <cit.>.

In this work, we consider the magnetically-powered corona model with stochastic acceleration <cit.>, where we solve the Fokker-Planck equation to obtain the CR spectrum and calculate the resulting neutrino and γ-ray spectra, taking into account electromagnetic cascades through γγ→ e^+e^-, synchrotron and inverse-Compton emission processes. The coronal plasma is assumed to be geometrically thick and advected toward the SMBH with the infall velocity V_ infall=α√(GM_ BH/R), where α=0.1 is the viscosity parameter. The magnetic field is obtained through the plasma beta set to β=1; synchrotron cascade emission is important for magnetized coronae with β≲1, and the cascaded ∼0.1–10 MeV γ-ray flux is a robust signature <cit.>. The remaining principal parameters of the model are the CR inputs: the CR normalization and the maximum energy. We consider two specific cases. For Model A, we assume the CR injection efficiency[The corresponding CR pressure is 50% of the thermal pressure with the virial temperature for L_X=3×10^43  erg  s^-1 and d=10 Mpc <cit.>.] and η_ tur=70 that are used to explain the IceCube data of NGC 1068 <cit.> with R=30 R_S. For Model B, the CR pressure P_ CR is set to 10% of the thermal pressure with the virial temperature (P_ vir). Because the coronal pressure would be dominated by magnetic fields, this assumption is reasonable. The emission radius is assumed to be R=10 R_S, in which more neutrinos and γ rays may be produced due to pγ interactions. For the three AGNs studied in this work, Model B gives more optimistic predictions for the neutrino and γ-ray fluxes.
§.§ Accretion shocks

Accretion shocks have been discussed as a CR acceleration site for a long time <cit.>. In this model, a spherical accretion flow, aside from the accretion disk flow, is assumed to form a shock with a velocity of V_s≈ V_ ff=√(GM_ BH/R) in the vicinity of the SMBH, and s≳2 is expected based on diffusive shock acceleration theory <cit.>. However, the neutrino data of NGC 1068 suggest that the particle acceleration efficiency, η_ sho, has to be much larger than the canonical value in the Bohm limit (η_ sho∼1-10) in order not to violate the constraint on the neutrino break/cutoff <cit.>. Thus, in this work, we fix the proton maximum energy ad hoc to 100 TeV (implying η_ sho≫10) and consider the largest CR luminosity L_ CR allowed by the LAT data. Then, we calculate neutrino and γ-ray spectra assuming the steady-state injection of CRs for given R and SEDs. In the accretion shock model, magnetic fields are expected to be weaker than in the magnetically-powered corona model, and we set B=30 G throughout this work <cit.>. For such low magnetizations, the results on γ-ray spectra are largely insensitive to CR spectra as long as electromagnetic cascades proceed mainly through γγ→ e^+e^- and inverse-Compton emission. Note that we only consider CR-induced cascade γ rays without any primary non-thermal electron component, unlike <cit.>, which focused on γ rays from primary electrons only with attenuation (without cascades). A primary electron component would add an additional sub-GeV gamma-ray component, which would result in a lower neutrino flux in order to be consistent with the given sub-GeV gamma-ray data.

§ DATA ANALYSIS

We study three nearby Seyfert galaxies, NGC 4151, NGC 4945, and the Circinus Galaxy. These objects were all reported as LAT sources and are also very bright in the hard X-ray band. In the magnetically-powered corona model, the neutrino luminosity is roughly proportional to the X-ray luminosity <cit.>, and thus NGC 1068 and NGC 4151 (NGC 4945 and Circinus) are predicted to be among the brightest sources in the northern (southern) neutrino sky among Seyfert galaxies in the Swift BASS catalog with LAT detection[With the Swift BASS catalog considering the 0.2-10 keV and 14-195 keV bands for the intrinsic X-ray flux, the brightest Seyfert galaxies in the northern sky including the near-horizon are and , and is expected to be the most promising neutrino source especially with the NuSTAR flux. In the southern sky, , and Circinus are the brightest. We focus on NGC 4945 and Circinus, which were reported as LAT sources, and the high luminosity of would suppress coronal γ-ray emission more severely.]. Prior to this work, the results of NGC 1068 (being the brightest in the IceCube sky) were presented in <cit.>.

We analyze data collected by the LAT <cit.> from 2008 August 4 to 2023 January 5 (14.4 years). We select an energy range of 20 MeV to 1 TeV, and bin the data using 8 energy bins per decade. The pixel size is 0.08^∘. We use a 10^∘× 10^∘ region of interest (ROI) centered on each respective galaxy. The standard data filters are used: DATA_QUAL>0 and LAT_CONFIG==1. The analysis is performed using Fermipy (v1.2)[Available at <https://fermipy.readthedocs.io/en/latest/>], which utilizes the underlying Fermitools (v2.2.0). We select photons corresponding to the P8R3_SOURCE_V3 instrument response.
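A minimal sketch of this Fermipy setup is given below. This is our own illustration, not the authors' actual configuration: file names, the NGC 4945 coordinates used as an example target, and the omission of the per-PSF-type "components" section (described in the next paragraphs) are all simplifying assumptions.

import yaml
from fermipy.gtanalysis import GTAnalysis

config = {
    "data": {"evfile": "ft1_list.txt", "scfile": "ft2.fits"},
    "binning": {"roiwidth": 10.0, "binsz": 0.08, "binsperdec": 8},
    "selection": {"emin": 20, "emax": 1000000,     # MeV; 20 MeV - 1 TeV
                  "evclass": 128, "evtype": 3,     # P8R3 SOURCE class
                  "ra": 196.36, "dec": -49.47},    # NGC 4945 (example ROI)
    "gtlike": {"edisp": True, "edisp_bins": -1,
               "irfs": "P8R3_SOURCE_V3"},
    "model": {"src_roiwidth": 15.0, "catalogs": ["4FGL-DR3"],
              "galdiff": "gll_iem_v07.fits",
              "isodiff": "iso_P8R3_SOURCE_V3_v1.txt"},
}
with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f)

gta = GTAnalysis("config.yaml", logging={"verbosity": 3})
gta.setup()       # run the Fermitools data-preparation steps
gta.optimize()    # iteratively fit the ROI model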
In order to optimize the sensitivity of our analysis, we implement a joint likelihood fit with the four point spread function (PSF) event types available in the Pass 8 data set[For more information on the different PSF types see <https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Data/LAT_DP.html>.]. The data are divided into quartiles corresponding to the quality of the reconstructed direction, from the lowest quality quartile (PSF0) to the best quality quartile (PSF3). Each sub-selection has its own binned likelihood instance, and these are combined in a summed likelihood function for the ROI. This is easily implemented in Fermipy by specifying the components section in the configuration file. We include different event types depending on the energy interval. This is motivated by the energy dependence of the LAT instrument response. Additionally, in order to reduce the contamination from the Earth's limb, we apply an energy-dependent cut on the maximum zenith angle. These selections are similar to those used for the 4FGL-DR3 catalog <cit.>, and they are summarized in Table <ref>. Note, however, that the 4FGL-DR3 did not use photon energies below 50 MeV. Each PSF type also has its own corresponding isotropic spectrum, namely, iso_P8R3_SOURCE_V3_PSF i_v1, for i ranging from 0 to 3. The isotropic spectrum file is defined from to , and so we use a power-law extrapolation for energies outside this range. The Galactic diffuse emission is modeled using the standard component (gll_iem_v07). It is only defined between to , and so likewise we use a power-law extrapolation. The point source emission is modeled using the 4FGL-DR3 catalog (gll_psc_v28) <cit.>. In order to account for photon leakage from sources outside of the ROI due to the PSF of the detector, the model includes all 4FGL sources within a 15^∘× 15^∘ region. The energy dispersion correction (edisp_bins=–1) is enabled for all sources except the isotropic component.

Before calculating SEDs, we first perform an initial fit for each of the three ROIs, in which we optimize the model, freeing the normalization and index of the Galactic diffuse, the normalization of the isotropic, and all point sources (normalization and index) within 3^∘. We then find new sources using the Fermipy function find_sources, which generates TS maps and identifies new sources based on peaks in the TS. The TS maps are generated using a power-law spectral model with a photon index of 2.0. The minimum separation between two point sources is set to 0.5^∘, and the minimum TS for including a source in the model is set to 16. Finally, we perform a second fit in a similar way as the first, with the exception that we free all point sources within 5^∘ having a TS>25.

Among the three Seyfert galaxies, the LAT observations of NGC 4151 are complicated by the presence of two nearby BL Lacs, and . In order to determine the most likely origin of the corresponding γ-ray emission, we re-localize the point source closest to NGC 4151. This is done following a similar procedure as the ROI optimization described above. We use the same data selection, with the exception that we use a pixel size of 0.04^∘. In the first step of the optimization, we only include point sources from the 4FGL-DR3, i.e. we do not include a point source at the location of NGC 4151. Then, before finding new sources, we remove 4FGL J1210.3+3928 from the model, thus allowing the fitting algorithm to find the best-fit location of the associated source. The resulting localization is shown in Figure <ref>.
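In Fermipy terms, this re-localization step can be sketched as follows (continuing the hypothetical gta object from the earlier snippet; this is an illustration of the workflow, not the authors' script, and the dictionary keys of the returned objects are assumptions based on the Fermipy documentation).

# Remove the nearby 4FGL source, search for new sources, localize the peak.
gta.delete_source("4FGL J1210.3+3928")           # source nearest NGC 4151
srcs = gta.find_sources(sqrt_ts_threshold=4.0,   # TS > 16
                        min_separation=0.5)
loc = gta.localize(srcs["sources"][0]["name"], make_plots=True)
print(loc["ra"], loc["dec"], loc["pos_r95"])     # best fit and 95% radius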
The pink contour shows the 95% localization uncertainty, based on the peak in the TS map. Red contours show the 95% localization uncertainty from the 4FGL-DR3, and the blue circles show the BL Lacs which are associated with the 4FGL sources. As can be clearly seen, the γ-ray source near NGC 4151 is most likely associated with 4FGL J1210.3+3928 (and hence, the BL Lac), consistent with the 4FGL catalog. We also verified that the results remain consistent when using energies . Additionally, we checked with the preliminary version of the 4FGL-DR4, also finding consistent results. We therefore keep 4FGL J1210.3+3928 in the model when calculating the SED for NGC 4151.

After optimizing the ROIs, SEDs are calculated for each source using the Fermipy SED analysis. This method computes an SED by performing independent fits for the flux normalization in each energy bin. For the calculation, we combine the original energy bins into larger bins, using 2 bins per decade between 100 MeV and 100 GeV, as well as a single bin between 20 and 100 MeV, and a single bin between 100 and 1000 GeV. The normalization in each bin is fit using a power-law spectral model with a fixed index of 2.0. The parameters of the background components are held fixed at their best-fit values from the baseline fit. Further details of the SED analyses for the three Seyfert galaxies are provided in Appendix <ref>. The resulting SEDs of NGC 4151, NGC 4945, and Circinus are shown in Figures <ref> and <ref>.

§ IMPLICATIONS AND DISCUSSIONS

GeV γ rays mainly interact with X rays from the corona. The two-photon annihilation optical depth for a photon index of Γ_X≈2 is <cit.>

τ_γγ(ε_γ) ≈ η_γγσ_T R n_X(ε_γ/ε̃_γγ-X)^Γ_X-1≃ 24 [R/(30R_S)]^-1M_ BH,7^-1L̃_X,42.4(ε_γ/1  GeV),

where η_γγ∼0.1 is a numerical coefficient depending on Γ_X, σ_T≈6.65×10^-25  cm^2, ε̃_γγ-X=m_e^2c^4/ε_X≃0.26  GeV (ε_X/1  keV)^-1, ε_X is the reference X-ray energy, and n_X≈L̃_X/(2π R^2 c ε_X) is used. Here L̃_X is the differential X-ray luminosity. The numerical results shown in Figures <ref> and <ref> are consistent with equation (<ref>), and we may expect γ rays. For NGC 4945 and Circinus, while GeV emission is detected, its origin should be different, e.g., star-forming activities, as seen in <cit.>. Note that hadronic emission from the starburst region predicts a low-energy break in the γ-ray spectrum due to the decay of neutral pions, although there could be leptonic contributions from inverse-Compton and bremsstrahlung emission. However, detailed studies of the GeV emission are beyond the scope of this work.

NGC 4151 is known to be a Seyfert 1.5 galaxy, showing the characteristic features of both types 1 (dominated by broad-line components) and 2 (dominated by narrow-line components). NGC 4151 is Compton thin, unlike the other two Seyferts, NGC 4945 and Circinus. The escape of GeV γ rays is easier than in NGC 1068 due to its lower intrinsic X-ray luminosity, which makes the strong limits in the GeV range important. As shown in Figures <ref> and <ref> left, we find that the magnetically-powered corona and accretion shock models are consistent with the upper limits obtained in this work. Nevertheless, the data give interesting constraints on neutrino emission. In the magnetically-powered corona model, Model B predicts a neutrino flux that can explain the possible neutrino excess emission found in the IceCube data <cit.>. In the case of NGC 4151, the effective maximum CR energy[In the diffusive-escape-limited case, the maximum energy found in the CR distribution is effectively ∼10-30 times higher than the energy inferred from equating the acceleration time with the diffusive escape time <cit.>.]
is limited by the diffusive escape rather than the Bethe-Heitler cooling process that is relevant for luminous AGNs including NGC 1068. On the other hand, due to the LAT upper limits in the GeV range, the neutrino flux is unlikely to be explained in the accretion shock model even if magnetic fields change. This demonstrates that observations of NGC 4151-like galaxies are useful for discriminating between the magnetically-powered corona and accretion shock models. When we include not only Swift BASS but also NuSTAR data, the neutrino brightness of NGC 4151 is next to NGC 1068, where the ranking is higher than that in <cit.>, and NGC 4151 will be an important target for IceCube-Gen2, which can reach a sensitivity of E_ν F_E_ν∼10^-9  GeV  cm^-2  s^-1 <cit.>, especially in the magnetically-powered corona model (see Figure <ref> left).

For NGC 4945, as shown in the middle panels of Figures <ref> and <ref>, we find that the GeV and higher-energy emission has a break at , which is consistent with a pionic γ-ray component from π^0→γγ, as expected in models relying on wind-torus interactions <cit.> and/or star-forming activities <cit.>. In addition, the LAT data around ∼30 MeV show a clear excess compared to the pionic γ-ray component. However, it is important to note that the LAT performance below 50 MeV quickly deteriorates, and the instrument response at these low energies has not been well characterized. We have performed a number of tests to validate the robustness of the observed low-energy spectral feature in NGC 4945 (summarized in Appendix <ref>), and none of them have shown any indication of the signal being spurious. Nevertheless, this feature should still be interpreted with caution due to the limitations of the LAT performance below 50 MeV. That said, this signal is very interesting because a similar excess flux over the pionic γ-ray component was reported for NGC 1068 <cit.>. More intriguingly, these sub-GeV γ-ray signatures of the two most promising neutrino-bright AGNs are consistent with predictions of the magnetically-powered corona model, and the corresponding LAT data can be explained if the CR pressure is 10% of the thermal pressure with the virial temperature (see Figure <ref> middle).
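As a side check of the compactness argument underlying these interpretations, the order-of-magnitude estimate in the τ_γγ equation above can be reproduced numerically. The sketch below is our own illustration with the fiducial values quoted in the text (R=30 R_S, M_ BH=10^7 M_⊙, L̃_X=10^42.4 erg s^-1, ε_X=1 keV, ε_γ=1 GeV).

import numpy as np

G, c, sigma_T, M_sun = 6.674e-8, 2.998e10, 6.652e-25, 1.989e33  # cgs
erg_per_keV = 1.602e-9

M_BH = 1.0e7 * M_sun
R = 30 * 2 * G * M_BH / c**2        # 30 Schwarzschild radii [cm]
L_X = 10**42.4                      # differential X-ray luminosity [erg/s]
eps_X = 1.0 * erg_per_keV           # reference X-ray energy (1 keV)

n_X = L_X / (2 * np.pi * R**2 * c * eps_X)    # target photon number density
eps_tilde = 0.26                              # GeV, m_e^2 c^4 / (1 keV)
eta_gg, Gamma_X, eps_gamma = 0.1, 2.0, 1.0    # eps_gamma in GeV

tau = eta_gg * sigma_T * R * n_X * (eps_gamma / eps_tilde)**(Gamma_X - 1)
print(f"tau_gammagamma ~ {tau:.0f}")          # ~24, matching the text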
We derived upper limits on γ-ray emission from , which gives strong constraints on the models of high-energy neutrino emission from this source. The recent hint of a neutrino excess toward , reported by the IceCube Collaboration <cit.>, appears to be consistent with the magnetically-powered corona model. Future observations ofand other Compton thin AGNs with next-generation neutrino detectors such as IceCube-Gen2 are crucial for discriminating the different models.We also found evidence of sub-GeV γ-ray emission in the direction of . If it originates from , a component other than pionic γ-ray emission is needed. Intriguingly, this excess is consistent with CR-induced electromagnetic cascades expected in the magnetically-powered corona model, although other interpretations such as high-energy emission from jets would also be possible. Cascade signatures expected in the MeV and sub-GeV γ-ray bands are important for future MeV γ-ray detectors such as AMEGO-X <cit.> e-ASTROGAM <cit.>, and GRAMS <cit.>. We thank the Topical Workshop:as cosmic laboratory sponsored by SFB1258 and Cluster of Excellence ORIGINS, at which we started this project. The work of K.M. is supported by the NSF grants No. AST-1908689, No. AST-2108466 and No. AST-2108467, and KAKENHI No. 20H01901 and No. 20H05852. CMK's research was supported by an appointment to the NASA Postdoctoral Program at NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. S.S.K. acknowledges the support by KAKENHI No. 22K14028, No. 21H04487, No. 23H04899, and the Tohoku Initiative for Fostering Global Researchers for Interdisciplinary Sciences (TI-FRIS) of MEXT’s Strategic Professional Development Program for Young Researchers.The Fermi LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat à l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucléaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Études Spatiales in France. This work performed in part under DOE Contract DE-AC02-76SF00515. Work at NRL is supported by NASA.§ DETAILS OF Γ-RAY DATA ANALYSES§.§ NGC 4151is a nearby Seyfert 1.5 galaxy, located at a distance of d=15.8 Mpc <cit.>. X-ray observations ofindicate that it contains an ultra-fast outflow (UFO) <cit.>. Evidence for high-energy γ-ray emission from UFOs, including , was first reported in <cit.>, based on Fermi-LAT observations. In that work, a stacking analysis was performed using a small sub-sample of the UFO population, resulting in a 5.1σ detection. Althoughitself was below the LAT detection threshold, it had an individual significance of 4.2σ, which was the highest of the sample. 
It should be noted that when excluding NGC 4151 from the UFO sample, the population was still detected at the level of 3.5σ, with similar spectral properties. This showed that the signal was not due to NGC 4151 alone. It should also be noted that in the stacking analysis the source locations are fixed at their optical positions. Following the detection of the sub-threshold UFO sample, the individual detection of NGC 4151 was recently reported in <cit.>. Importantly, NGC 4151 is located at a distance of only 0.18^∘ from the fourth most significant hot spot in the search for northern neutrino point sources <cit.>, and the γ-ray detection could potentially have significant implications for the nature of the IceCube neutrinos. The LAT observations of NGC 4151 are complicated by the presence of two nearby BL Lacs. In fact, one of these sources (4FGL J1211.6+3901, located 0.43^∘ away) was carefully analyzed in <cit.>, and it was determined that the detection of NGC 4151 was indeed robust against the presence of this nearby γ-ray-bright blazar. The point source model in <cit.> was originally based on the 4FGL-DR2 catalog; however, in the most recent catalog (4FGL-DR3) there is now another source (4FGL J1210.3+3928), located even closer, at a separation of 0.08^∘, associated with the BL Lac 1E 1207.9+3945. It therefore seems that the γ-ray emission thought to be coming from NGC 4151 is in fact dominated by a nearby blazar. Based on the localization results (presented in Section <ref>), we keep the nearby source (4FGL J1210.3+3928) in the model. We add another source at the optical center of NGC 4151 and refit the data. Unsurprisingly, NGC 4151 is no longer detected (TS ≈ 0), and so we calculate upper limits (ULs), shown in the left panel of Figure <ref>, with corresponding data provided in Table <ref>. The bins have TS=0, and so we calculate the ULs using a Bayesian approach (as opposed to a frequentist approach). Specifically, we use the calc_int method from the IntegralUpperLimit class, available in the Fermi Science Tools. This method calculates an integral upper limit by integrating the likelihood function up to a point which contains a given fraction of the total probability. We verified that the field is well modeled, with good data-model agreement seen in the fractional count residuals and spatial residuals. We note that, in a revised version of their work, <cit.> exclude on the basis of SED modeling that 1E 1207.9+3945 contributes to the emission of the source they associate with NGC 4151. In our work (and in the upcoming 4FGL-DR4) we clearly find that the position of the γ-ray source 4FGL J1210.3+3928 is compatible with 1E 1207.9+3945.

NGC 4151 SED Data (95% flux ULs in units of ×10^-7 MeV cm^-2 s^-1)

Energy [GeV]   Energy Low [GeV]   Energy High [GeV]   95% Flux UL   TS
0.0447         0.02               0.1                 10.3          0.0
0.173          0.1                0.3                 2.6           0.0
0.548          0.3                1.0                 1.1           0.0
1.78           1.0                3.16                0.7           0.0
5.62           3.16               10                  1.2           0.0
17.8           10                 31.6                0.8           0.0
56.2           31.6               100                 3.0           0.0
316.0          100                1000                5.3           0.0
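As an aside, the Bayesian upper-limit computation described above can be sketched in a few lines of Python. This is a hypothetical, minimal illustration only: the file names, the source name string, and the choice of energy bounds are placeholders, and the data preparation steps for the region of interest are omitted.

from BinnedAnalysis import BinnedObs, BinnedAnalysis
import IntegralUpperLimit

# Assumed pre-made inputs: source maps, livetime cube, exposure map, XML model.
obs = BinnedObs(srcMaps='ngc4151_srcmaps.fits', expCube='ltcube.fits',
                binnedExpMap='expmap.fits', irfs='P8R3_SOURCE_V3')
like = BinnedAnalysis(obs, 'roi_model.xml', optimizer='NewMinuit')
like.fit(verbosity=0)  # maximum-likelihood fit of the ROI model

# Integrate the profile likelihood in the source normalization up to the
# point containing 95% of the total probability (applied per energy bin
# in practice to build the UL SED).
ul, results = IntegralUpperLimit.calc_int(like, 'NGC 4151', cl=0.95)
print('95% integral flux UL:', ul)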
§.§ NGC 4945

We calculate the SED for NGC 4945, located at a distance of d=3.6 Mpc <cit.>, using the same fitting approach as was used for NGC 4151 (in this case there is no need to re-localize). Results for this are shown in the middle panel of Figure <ref>, and the corresponding data are provided in Table <ref>. The source is detected with a TS=830. Overall, the SED is consistent with the values from the 4FGL-DR3. However, compared to the 4FGL, the two lowest energy bins now have significant signals (TS>9), as opposed to ULs. For the second energy bin, this is quite reasonable given the additional exposure. Interestingly, extending the lowest energy bin down to 20 MeV also results in a significant signal. The low-energy detection of NGC 4945 has important implications for the corona model, and so we perform additional tests to check the robustness of the emission. The 68% containment angle at 20 MeV for PSF3 events is ∼ 13^∘, and the 95% containment angle is ∼ 30^∘. However, our ROI size is only 10^∘× 10^∘, and the model includes all sources within 15^∘× 15^∘. Although the ROI is smaller than the PSF at 20 MeV, it should be sufficient for the analysis, given the following consideration. The lowest energy bin spans the energy range 20-100 MeV, and below ∼100 MeV the effective area quickly falls off. Specifically, the (total) effective area at 100 MeV is ∼ 4.4× higher compared to that at 20 MeV. More of the events will therefore be coming towards the upper part of the bin, where the 68% containment angle is ∼ 3.3^∘. Moreover, for a Gaussian PSF, the distribution peaks towards the center. On the other hand, the behaviour of the LAT PSF below 50 MeV has not been well characterized. We therefore test the robustness of the emission with respect to the ROI size.

We repeat the analysis for NGC 4945 using two different regions, motivated by the approximate 68% and 95% containment radii. For one test we use a 20^∘× 20^∘ ROI, and include all sources within a 40^∘× 40^∘ region. For another test we use a 26^∘× 26^∘ ROI, and include all sources within a 60^∘× 60^∘ region. Note that for the latter test, the data cover the entire 68% containment region, and the model sources cover the entire 95% containment region. Figure <ref> shows the resulting SEDs, significance (√(TS)), and fractional count residuals for all three fits. Additionally, we overlay the SED and corresponding TS from the 4FGL-DR3. We find consistent results for all three fit variations. The flux in the first bin is compatible with the 95% ULs from the 4FGL-DR3. The flux for the second energy bin is slightly lower for the 10^∘ ROI, but it is still consistent with the other fits within uncertainties. The fractional count residuals show that the data are well modeled, with agreement better than 5% for the full energy range. The data are specifically well modeled in the first two energy bins. The spatial residuals are shown in Figure <ref> (calculated with the fermipy tool residmap), for both the entire energy range (top row) and the first energy bin (bottom row). For the full energy range, the 10^∘ ROI is very well modeled, whereas the larger ROIs show some excesses/deficits towards the edges of the field. For the first energy bin, all fit variations show good data-model agreement. In order to better understand the reason for the discrepancy in the first energy bin with respect to the 4FGL-DR3, we divide it into two smaller bins to match the 4FGL binning, and re-calculate the SED. The lower part of the bin spans the energy range 20 - 45 MeV, and the upper part spans the range 45 - 100 MeV. We make the split at 45 MeV (compared to 50 MeV for the lower edge of the 4FGL bin), as required by our initial energy binning. This test is performed for all three ROI variations, and the results are shown in Figure <ref> with open markers corresponding to the respective ROI size. For our nominal ROI size of 10^∘ we find a significance (√(TS)) of 5.8 and 2.6 for the two bins, respectively. The significance of the upper part of the bin is now reasonable given that of the 4FGL, and the flux is also compatible with the 4FGL upper limits. Additionally, we find a significant signal in the lower portion of the bin.
However, the corresponding count residuals in the lower portion of the bin are slightly elevated, at the level of ∼ 5%. This can plausibly be attributed to some mis-modeling of either the isotropic emission or the Galactic diffuse emission (which at these energies is dominated by inverse-Compton radiation). Note that the isotropic model uses a power-law extrapolation below 34 MeV, and the Galactic diffuse model uses a power-law extrapolation below 50 MeV. For the 20^∘ ROI, the flux of the two smaller bins is found to be consistent with that of the combined bin, and likewise the count residuals are close to zero. For the 26^∘ ROI, the upper portion of the bin is also compatible with the combined bin; however, the lower portion of the bin is over-modeled by ∼ 3%, and the SED calculation results in only an upper limit, albeit at the same level as the observed signal. As another test, we have performed the SED calculation using a spectral index of 3.0 (instead of the commonly used value of 2.0), consistent with our model predictions, and also freeing all point sources within 5^∘ having a TS>16. This test was run for the 20^∘ ROI. We have found that the low-energy bin (E_γ<100 MeV) is still detected with a similar flux, having a TS of 19.0. Finally, we have tested the SED calculation when freeing all sources within 7^∘ having a spectral index >2.5. In this case, we have found consistent results for the low-energy bin, with a TS of 27.5.

Overall, these tests show that the emission between 20-100 MeV appears to be robust, although there is clearly a systematic uncertainty in the measured flux of a factor of ∼2-3. However, this result should still be interpreted with caution, due to the generally poor performance of the LAT below 50 MeV.

NGC 4945 SED Data (fluxes, errors, and 95% ULs in units of ×10^-7 MeV cm^-2 s^-1)

Energy [GeV]   Energy Low [GeV]   Energy High [GeV]   Flux   1σ Flux Error   95% Flux UL   TS
0.0447         0.02               0.1                 31.3   6.5             42.1          23.0
0.173          0.1                0.3                 7.8    2.1             11.3          14.0
0.548          0.3                1.0                 16.7   1.2             18.7          251
1.78           1.0                3.16                11.1   0.9             12.6          270
5.62           3.16               10                  7.6    1.0             9.3           171
17.8           10                 31.6                4.7    1.1             6.9           73.3
56.2           31.6               100                 4.1    1.9             7.9           21.1
316.0          100                1000                1.5    1.6             5.8           7.6

§.§ Circinus

We also calculate the SED for Circinus, located at d=4.2 Mpc <cit.>; the results are shown in the right panel of Figure <ref>, and the corresponding data are provided in Table <ref>. The source is detected with a TS=130. Overall, the SED is consistent with the values from the 4FGL-DR3. The fractional count residuals are very good, being close to zero at all energies.

Circinus SED Data (fluxes, errors, and 95% ULs in units of ×10^-7 MeV cm^-2 s^-1)

Energy [GeV]   Energy Low [GeV]   Energy High [GeV]   Flux   1σ Flux Error   95% Flux UL   TS
0.0447         0.02               0.1                 22.8   13.9            45.8          2.7
0.173          0.1                0.3                 6.5    4.0             13.1          2.7
0.548          0.3                1.0                 6.8    1.6             9.5           18.6
1.78           1.0                3.16                5.2    1.0             6.9           32.5
5.62           3.16               10                  4.3    0.9             5.8           40.6
17.8           10                 31.6                2.6    1.0             4.5           15.6
56.2           31.6               100                 2.7    1.5             5.8           14.7
316.0          100                1000                2.3    2.0             7.0           3.5
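For completeness, the per-bin SED extraction used throughout this appendix can be sketched with fermipy. This is a hypothetical, minimal configuration: the YAML file and the source name are placeholders, and the output key names may differ slightly between fermipy versions.

from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()      # prepare counts cube, livetime cube, and exposure
gta.optimize()   # initial fit of all ROI model components
sed = gta.sed('NGC 4945')             # bin-by-bin likelihood fits
print(sed['flux'], sed['flux_ul95'])  # per-bin fluxes and 95% ULs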
http://arxiv.org/abs/2312.16089v1
{ "authors": [ "Kohta Murase", "Christopher M. Karwin", "Shigeo S. Kimura", "Marco Ajello", "Sara Buson" ], "categories": [ "astro-ph.HE", "astro-ph.GA", "hep-ph" ], "primary_category": "astro-ph.HE", "published": "20231226152710", "title": "Sub-GeV Gamma Rays from Nearby Seyfert Galaxies and Implications for Coronal Neutrino Emission" }
Can ChatGPT Read Who You Are?

Erik Derner^1,3,* (ORCID 0000-0002-7588-7668), [email protected]. CRediT: Conceptualization, Data Curation, Investigation, Methodology, Software, Visualization, Writing – Original Draft.
Dalibor Kučera^2 (ORCID 0000-0002-7023-8140), [email protected]. CRediT: Conceptualization, Formal Analysis, Methodology, Resources, Validation, Writing – Original Draft.
Nuria Oliver^1 (ORCID 0000-0001-5985-691X), [email protected]. CRediT: Conceptualization, Investigation, Supervision, Writing – Review & Editing.
Jan Zahálka^3 (ORCID 0000-0002-6743-3607), [email protected]. CRediT: Methodology, Visualization, Writing – Review & Editing.

^1 ELLIS Alicante, Parque Científico de Alicante, Campus de San Vicente, s/n, 03690 San Vicente del Raspeig (Alicante), Spain
^2 Department of Psychology, Faculty of Education, University of South Bohemia, Jeronýmova 10, 37115 České Budějovice, Czech Republic
^3 Czech Institute of Informatics, Robotics, and Cybernetics, CTU in Prague, Jugoslávských partyzánů 1580, 16000 Prague, Czech Republic
^* Corresponding author

The interplay between artificial intelligence (AI) and psychology, particularly in personality assessment, represents an important emerging area of research. Accurate personality trait estimation is crucial not only for enhancing personalization in human-computer interaction but also for a wide variety of applications ranging from mental health to education. This paper analyzes the capability of a generic chatbot, ChatGPT, to effectively infer personality traits from short texts. We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants. Their self-assessments based on the Big Five Inventory (BFI) questionnaire serve as the ground truth. We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text. We also uncover a `positivity bias' in ChatGPT's assessments across all personality dimensions and explore the impact of prompt composition on accuracy. This work contributes to the understanding of AI capabilities in psychological assessment, highlighting both the potential and limitations of using large language models for personality inference. Our research underscores the importance of responsible AI development, considering ethical implications such as privacy, consent, autonomy, and bias in AI applications.

Keywords: Large Language Models, Natural Language Processing, Psychology, Personality Traits, Big Five Inventory

January 14, 2024
====================

§ INTRODUCTION

Advancements in artificial intelligence, particularly in the analysis and generation of natural language, have revolutionized the way humans interact with technology. The most recent stage in this AI-based revolution has been spearheaded by large language models (LLMs), which have enabled the development of intelligent assistants (chatbots) that support humans in a variety of complex tasks. ChatGPT, developed by OpenAI, is the most popular LLM-based chatbot to date, with an unprecedented rate of adoption and an outstanding ability to engage in coherent and contextually relevant conversations with users across a great variety of knowledge domains.
Beyond conversations, LLMs have shown great potential for tasks such as creative writing, question-answering, programming, language translation, text summarization, and problem-solving <cit.>. As the intensity and frequency of human interaction with chatbots increase, LLMs have the potential to become valuable tools for user modeling and comprehensive communication analysis. In fact, previous work has shown that personality traits can be reliably inferred from individual linguistic styles <cit.>. However, the use of LLMs in the domain of personality assessment through language analysis remains under-explored. Our work aims to fill this gap by investigating ChatGPT's abilities to infer personality characteristics from written text. We report results from a user study (N = 155) where participants were asked to write four types of short texts or letters in their native language, Czech, and to provide a personality self-assessment by means of the Big Five Inventory, which characterizes personality according to five dimensions (extraversion, agreeableness, conscientiousness, neuroticism, and openness). In the subsequent analysis, we compare the accuracy of human raters and ChatGPT. We reflect on the strengths and weaknesses of ChatGPT in performing this task and emphasize the need for a critical analysis of the capacity of LLMs for user modeling. Since chatbots are used by hundreds of millions of people daily, it is essential to study and reflect on both the possibilities and the ethical challenges posed by their capabilities to automatically infer the personality of their users.

§ RELATED WORK

The intersection between automatic natural language processing methods and psychology is an emerging field of study, focusing on understanding and interpreting different aspects of human traits and behavior through language technologies <cit.>. Exemplary applications include using AI algorithms to assess sentiment from text (social media posts or personal blogs) and identify potential indicators of mental health issues <cit.>, to identify language patterns that may predict risky behaviors <cit.>, to analyze the dynamics of social interactions <cit.>, to support the diagnosis of mental health disorders <cit.>, or to automatically assess personality traits <cit.>. Regarding personality, prior research in psychology has established the link between individual linguistic patterns and personality traits: language has been found to provide significant insights into an individual's personality <cit.>, suggesting the potential of leveraging natural language processing (NLP) tools to automatically infer personality. Recent research has analyzed short communication intentions <cit.>, consumed textual content <cit.>, or typing dynamics <cit.> to infer personality traits.

However, applying LLMs to automatically assess personality traits is still in its early stages. Early work in this domain has primarily focused on straightforward text analysis tasks, such as sentiment analysis <cit.>. Preliminary studies indicate that LLM-based chatbots, such as ChatGPT, exhibit promising potential in inferring personality traits from English text <cit.>.

In this paper, we contribute to the interdisciplinary research on personality analysis using language-based AI models in four ways. First, we explore how well a generic chatbot (ChatGPT) built on top of a large language model (GPT-3.5) can infer its users' personality from short textual input.
The ability to tailor responses based on inferred personality traits can enhance user experience by enabling personalization. Second, we contribute to the growing body of literature at the intersection of artificial intelligence and psychology. Such a multi-disciplinary approach holds promise for refining AI models to better align with human cognitive processes and for building AI tools to support psychology researchers and practitioners. Third, we extend the body of research on low-resource languages by performing the study in Czech, a West Slavic language with approximately 13.2 million speakers <cit.>. Since the majority of studies involving large language models do not address the linguistic diversity of low-resource languages, with this research we contribute to the broader goal of promoting inclusivity and fairness in AI applications. Finally, we draw considerations on the potentialities and pitfalls of deploying large language models in real-world scenarios. While automatically inferring personality from text enables personalized and engaging user experiences, it also raises ethical concerns related to human autonomy, consent, privacy, and biases. Through this study, we aim to shed light on these considerations and stimulate discussions on responsible AI development.

§ METHOD

In this section, we detail the dataset used, the experimental setup, and the evaluation metrics that underpin our analysis.

§.§ Dataset

We analyze the data collected by the psychological-linguistic project CPACT <cit.>, focused on identifying personality markers in human-written text. Quota sampling ensured that the sample was comparable with the characteristics of the population of the Czech Republic in the categories of gender, age, and education level. The data of N = 155 individuals over 15 years of age (77 men, 78 women) were analyzed in this study.

The participants were administered the self-report Big Five Inventory (BFI-44) <cit.> to assess their characteristics. The BFI-44 is a 44-item questionnaire measuring five personality dimensions: extraversion (8 items), agreeableness (9 items), conscientiousness (9 items), neuroticism (i.e., emotional lability, 8 items), and openness to experience (10 items). It has favorable psychometric properties, represented by adequate scale reliability and corresponding retest reliability <cit.>. The Czech BFI-44 version was analyzed in adolescent and adult populations with a reliability spanning between 0.65 and 0.83. The approximate test–retest stability of BFI-44 dimensions after two months is r = 0.79 <cit.>. On the same day as collecting the self-report questionnaires, the BFI-44 questionnaire was also filled out by another person, referred to as the partner in this paper, with whom the participant reported having a frequent and sincere relationship. The partner-assessment scores provide a valuable complement to the self-assessment and add information on possible trends and asymmetries in the human assessment <cit.>.

Subsequently, the participants were asked to compose four short texts (letters) in their native Czech language, each 180–200 words long. All letters were typed on a computer on the same day, with mandatory breaks between writing the texts. Participants were required to follow the described scenarios (L1–L4) summarized in Table <ref>, with an emphasis on the authenticity and realism of the communication. Two human text raters were asked to assess the personality of all participants based on the provided texts.
The raters were a female, aged 65 (rater A) and a male, aged 35 (rater B). Both hold a university degree (non-psychology), and they were trained to understand the construct of Big Five personality traits. They were instructed to read each text and then use a scale to estimate the degree of presence of each of the five personality characteristics, which they attribute to the author of the text. For the estimate, they used a three- or five-point scale ranging from 0 (characteristic not present) to 2 or 4 (dominant characteristic). The details on the range for each dimension are reported in Table <ref>. For example, for the agreeableness dimension, 0 corresponds to `very aloof' and 4 corresponds to `very warm-hearted' <cit.>.As a result, the dataset consists of a total of 4 × 155 = 620 texts together with the assessment of the Big Five personality traits for each of the 155 participants provided by themselves (ground truth), their partner, and two human raters.In the following, we will refer to the scores from this study as follows:* S – personality self-assessment of the study participant using the standardized BFI-44 test (self-report variant);* P – participant's personality assessment by their partner using the standardized BFI-44 test (other-report variant);* A, B – personality estimation score based on the text evaluation by human raters A and B, respectively.§.§ Automated Personality EstimationIn our experimental evaluation, we asked ChatGPT to score the letters exactly in the same way as the human raters were asked to score them. We created a set of Python scripts leveraging OpenAI's API access to the GPT-3.5 chat model (commonly known as ChatGPT), using the March 1, 2023 model version. The Hugging Face Transformers library was used to facilitate the interaction with ChatGPT.To assess the BFI personality traits from the participants' letters without fine-tuning the model, we employed zero-shot prompting <cit.>. The principle of zero-shot prompting consists in providing all the needed context directly in the prompt. Zero-shot prompting allows us to leverage the power of ChatGPT's language comprehension abilities without modifying the model through additional training.We experimented with four distinct prompt variants, each written in the Czech language, to investigate their impact on ChatGPT's performance in personality trait assessment.Each prompt consisted of at most three elements:* Task (T) that specifies the output ChatGPT should provide, such as estimating the author's extraversion on a scale from 0 to 4, where 0 represents a strongly introverted person and 4 indicates a strongly extraverted person, requesting a single integer response;* Letter (L), which consists of the original letter written by the participant; * Dimension description (D), explaining the BFI trait according to its psychological definition <cit.>. Using these components, the prompts were constructed in four different ways:* TL: Task + Letter; * DTL: Dimension description + Task + Letter; * LT: Letter + Task; * DLT: Dimension description + Letter + Task.These prompts aim to leverage ChatGPT's language comprehension capabilities to infer personality traits based on the provided letters (and the descriptions of the personality dimensions, where applicable). The four variants of the prompts were evaluated to understand the impact that different prompts have on the performance of the chatbot. 
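To make the procedure concrete, a minimal sketch of the scoring loop is given below. This is our own illustration, not the study's actual script: the helper name score_letter, the placeholder prompt strings, and the use of the pre-v1 openai Python client (whose ChatCompletion interface matched the March 2023 model used here) are all assumptions.

import re
import openai  # pre-v1 client; openai.api_key is assumed to be set

def score_letter(letter, task, dim_desc="", order="TL", n_rep=5):
    """Query ChatGPT n_rep times for one letter and one BFI dimension."""
    parts = {"T": task, "L": letter, "D": dim_desc}
    prompt = "\n\n".join(parts[c] for c in order)  # order in {"TL","DTL","LT","DLT"}
    scores = []
    for _ in range(n_rep):
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        text = reply["choices"][0]["message"]["content"].strip()
        if re.fullmatch(r"\d+", text):  # discard wordier, non-integer replies
            scores.append(int(text))
    # the study reports the mean of the valid responses, rounded to an integer
    return round(sum(scores) / len(scores)) if scores else None

Calling score_letter with order="DTL" or order="DLT" then reproduces the prompt compositions that include the dimension description, as evaluated below.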
As the prompts and the letters were in Czech, we empirically evaluated ChatGPT's capability to perform the task in a low-resource language. Each of the 620 letters was treated by ChatGPT as a separate case, with no information about their authors or possible interconnections (e.g., that each participant produced four texts). ChatGPT was asked to assess only one personality dimension at a time. To mitigate the influence of ChatGPT's stochastic nature, each prompt execution was performed five times. We report the mean of the five responses, rounded to the nearest integer, as the score assigned to the letter by ChatGPT. Occasionally (in less than 0.1% of cases), ChatGPT did not follow the instructions to return a single integer and delivered a wordier response instead. Such results were discarded, and the mean was calculated on the remaining valid values.

§.§ Evaluation Metrics

The starting point in evaluating the success rate of the ChatGPT personality estimation was the premise that the scores obtained through the standardized self-assessment questionnaire (S) are the most accurate in characterizing the actual personality of the text authors, and hence we consider them as the ground truth. This premise is extensively supported by previous work <cit.>. The same basis was also applied to the partner-assessments (P), which we present in this study as supporting information about the validity of the test method.

To allow for a comparison of all evaluation scores, we re-scaled the data as follows. The human raters assigned an integer score to each letter using the range reported in Table <ref>. ChatGPT was instructed in the same way as the human raters, i.e., it was given the same scale for each personality trait. The scale used by the human raters (and ChatGPT) served as the reference scale, and all other personality assessment scales were transformed to match it. To that end, the original scores of the BFI-44 self-assessment S and partner-assessment P were transformed into equally-sized bins, corresponding to the aforementioned integer scale for each personality trait.

To determine the similarity between two or more assessments, a combination of methods and procedures is commonly used <cit.>. For the purpose of this study, we chose several methods: the root mean square error (RMSE), the mean absolute error (MAE), the hit rate, the F1 score, and Spearman's correlation coefficient (ρ). These methods are complemented by descriptive statistics and a visual representation of the values (score spectra). The RMSE is the square root of the mean squared differences between one type of assessment (a human rater or ChatGPT) and the self-assessment S. The MAE is the average of the absolute differences between one type of assessment (a human rater or ChatGPT) and the self-assessment S. To compute the hit rate, we first labeled the personality score in each dimension as low, neutral, or high, as per Table <ref>. The hit rate measures the agreement between one type of assessment (a human rater or ChatGPT) and the self-assessment S. When comparing two low/neutral/high scores, this metric simplifies the match result into a binary form of divergence/congruence of the two assessments. For example, a participant who scored 3 in the BFI-44 extraversion dimension would be considered high in extraversion. If ChatGPT reported a value of 3 or 4, there is a hit because both assessments score the person as high in extraversion. We report both the absolute number of matches or hits and the hit rate as a percentage for the low and high spectra. The hit rate is an easy-to-understand representation of the agreement between two methods <cit.>.
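For concreteness, the error and agreement measures just defined can be sketched as follows. This is our own illustration: s and c are assumed to be integer score arrays on the common scale for one personality dimension (self-assessment and a compared method, respectively), and lo/hi denote the spectrum cut-offs from Table <ref>.

import numpy as np
from scipy.stats import spearmanr

def rmse(s, c):
    return float(np.sqrt(np.mean((np.asarray(c) - np.asarray(s)) ** 2)))

def mae(s, c):
    return float(np.mean(np.abs(np.asarray(c) - np.asarray(s))))

def spectrum(x, lo, hi):
    """Label integer scores as 'low', 'neutral', or 'high'."""
    x = np.asarray(x)
    return np.where(x <= lo, "low", np.where(x >= hi, "high", "neutral"))

def hit_rate(s, c, lo, hi, which="high"):
    """Fraction of participants self-assessed in a given spectrum whose
    evaluated score falls in the same spectrum."""
    ls, lc = spectrum(s, lo, hi), spectrum(c, lo, hi)
    mask = ls == which
    return float(np.mean(lc[mask] == which)) if mask.any() else float("nan")

# Spearman's rho between an evaluation and the self-assessment:
# rho, p_value = spearmanr(s, c)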
The F1 score, a widely utilized metric in binary classification, was adopted to measure the precision and recall balance between the assessments of a human rater or ChatGPT and the self-assessment S. Precision is the ratio of correctly identified positive cases to all cases identified as positive, and recall is the ratio of correctly identified positive cases to all actual positive cases. Specifically for our application, the high spectrum of each dimension, as defined in Table <ref>, represents the positive class (e.g., an extraverted person), while the neutral and low spectra correspond to the negative class (e.g., a person not characterized as extraverted). The F1 score captures the harmonic mean of precision and recall, providing a single measure of a method's accuracy in identifying high scores in each of the personality dimensions.

Finally, Spearman's correlation coefficient (ρ) was used to determine the degree of association between the assessments of personality traits given by a human rater or ChatGPT on one side and the self-assessment S on the other side. This non-parametric measure is particularly suitable for our analysis, as it measures how well the relationship between two assessments can be described using a monotonic function, thus providing insight into the consistency of orderings between different types of assessments. It is appropriate for comparing ordinal data, such as the personality scores in our study.

§ RESULTS

In this section, we report the general descriptives and the RMSE and MAE metrics, the score spectra, the hit rate, and the F1 score. This descriptive part is followed by the results of inferential statistics – Spearman's correlation coefficients.

§.§ General Descriptives

General descriptives of the self-assessment and all the evaluation data are presented in Table <ref>. Key to the interpretation of the table is the first column, S, which contains the participants' self-assessment that we consider as ground truth. Scores obtained by the other methods should be as close as possible to these values. An interesting observation can be made with respect to the coefficient of variation, which shows how the values vary across all assessments (evaluations). For example, in the extraversion dimension, the variability of the ChatGPT evaluation is considerably lower than that of the P/A/B assessments. This means that the ChatGPT assessments do not cover the full spectrum of values that characterize this dimension in the ground truth (self-assessment scores S).

§.§ RMSE and MAE

Table <ref> reports the RMSE and MAE metrics that measure the proximity of the assessment scores to the self-assessment mean. To interpret the metrics correctly, note that conscientiousness and openness use a smaller scale than the other dimensions (see Table <ref>). As expected, the personality estimation of the participant's partner P was the most accurate, followed by that of ChatGPT. Interestingly, both human raters were less successful than ChatGPT in inferring the author's personality from the letter, especially rater A. ChatGPT outperforms both human raters by the most significant margin in the TL variant. Only for the conscientiousness dimension was ChatGPT outperformed by rater B in terms of RMSE.
Overall, the other three ChatGPT prompt variants also achieved better average results than the human raters. ChatGPT's estimations were the most accurate in the agreeableness dimension. In terms of RMSE and MAE, all ChatGPT variants perform better than the human raters on this trait and even outperform the partner's assessment P.

§.§ Score Spectra

The relative frequencies of scores in all three spectra (low, neutral, and high) of a given personality dimension are shown in Figure <ref>. This evaluation aims to map which spectrum is generally preferred by each assessment method. Evaluation scores following the distribution of the self-assessment (S) can be considered a substantial condition for achieving accurate evaluation. The bar charts indicate that ChatGPT exhibits a `positivity bias' in all dimensions: it tends to evaluate people as extraverted, agreeable, conscientious, emotionally stable (i.e., with low neuroticism scores), and open to experience. This tendency will be further discussed in Section <ref>. Another noteworthy observation is that ChatGPT tends to use the neutral score much less frequently when the task is given at the end of the prompt (in variants LT and DLT) as compared to specifying the task before the letter (in variants TL and DTL). In other words, ChatGPT appears to be more confident in assessing personality if the task is provided at the end of the prompt. Complete descriptive statistics of the values in the three spectra are presented in Tables <ref>–<ref>; see Appendix. Each of the personality dimensions in the three spectra (low, neutral, and high score) is described in terms of central tendency and variability. The tables provide valuable information about how far a given assessment method is from the values of the referential self-assessment scores, from which we can infer the tendency and degree of specific bias.

§.§ Hit Rate

Next, we evaluate the accuracy of all personality assessment methods by means of the hit rate. This metric analyzes the agreement between the self-assessment S and each evaluation method. Table <ref> reports the absolute number of matches and the corresponding percentage for the low and high spectra of each personality trait. It shows considerable variability in the accuracy of the assessments for all methods. When comparing the performance, it is important to consider both ends of the spectra and their size. Similarly to the RMSE and MAE metrics, ChatGPT yields the best results in the agreeableness dimension in the high spectrum, which is almost ten times larger than the low spectrum, with a hit rate ranging from 84% to 91% depending on the prompt variant. It outperforms the human raters and even the partner's assessment P. In the openness dimension, the results vary substantially between the ChatGPT variants, implying that this personality trait is difficult for ChatGPT to infer. Prompt variants with the task specified at the end (LT, DLT) show better performance. For the remaining dimensions, the results are mixed: ChatGPT performs in some cases better than the other evaluators and in other cases worse. These results again support the presence of a positivity bias in ChatGPT. Nevertheless, it still reflects the specificity of the author/text, which is particularly noticeable in the dimensions of agreeableness and conscientiousness. It is also noteworthy that among the successful ChatGPT variants, the ones that yield the best results are in most cases those that include the personality trait description in the prompt (DTL, DLT).
§.§ F1 Score

The F1 scores for the various evaluation methods across different personality dimensions are presented in Figure <ref>. This metric, a harmonic mean of precision and recall, offers insights into the balance between the accuracy and completeness of each evaluation method. In the extraversion and agreeableness dimensions, all ChatGPT variants show consistently good results, outperforming both human raters. Furthermore, all ChatGPT variants outperform even the partner-assessment P in the agreeableness dimension. Regarding conscientiousness, ChatGPT exhibits performance comparable or superior to that of the human raters, except for the TL variant. ChatGPT's positivity bias is evident when inferring neuroticism, which yields low F1 scores, as the chatbot avoids providing high scores for this trait. Finally, ChatGPT's performance in the openness dimension depends significantly on the variant. Specifying the task at the end of the prompt (LT and DLT variants) helps ChatGPT substantially improve its performance in terms of the F1 score. The lower performance of the human evaluators, and particularly of rater B, underscores the challenges in the human judgment of openness from short texts.

§.§ Correlation Coefficients

The inferential analysis in the form of Spearman's correlation coefficients was performed both for all text types together (L1–L4) and for each text type individually (L1, L2, L3, and L4). Table <ref> shows the correlation coefficients ρ and their significance levels. The results indicate that the covariance of the self-assessment scores (S) with the ChatGPT scores varies across personality dimensions and texts. To summarize the results, we report only relations that can be considered at least weakly correlated, which corresponds to ρ > 0.2 <cit.>. Significant positive correlations were found for letter L4 (apology letter) and extraversion, for letter L3 (complaint letter) and agreeableness, and for letter L2 (letter from vacation) and conscientiousness. No significant correlations were found between the letter types and the ChatGPT scores for the neuroticism and openness dimensions. Comparing ChatGPT's assessments with the human evaluations, ChatGPT achieved results comparable to human rater A and outperformed human rater B. However, note that the correlation values are low and/or non-significant for both the human raters and ChatGPT. Thus, the results rather indicate that neither form of evaluation was very successful in terms of its correlation with the self-assessments.

§ DISCUSSION

In this paper, we have empirically evaluated the capabilities of a general-purpose LLM-based chatbot, ChatGPT, to infer personality from short texts, and we have compared its performance with that of two human raters. Surprisingly, ChatGPT's assessments outperformed the human assessments according to most metrics (RMSE, MAE, hit rate, F1 score, and correlation) in several personality dimensions, yet we also uncovered interesting findings that reveal the strengths and limitations of chatbots in inferring personality from text.

Positivity bias: We have identified a positivity bias in ChatGPT's assessments, i.e., its tendency to assign socially desirable scores across key personality dimensions. Social desirability is defined in psychology as the bias or tendency of individuals to present themselves in a manner that will be viewed favorably by others <cit.>.
Most authors agree that there are at least two levels of social desirability: (1) the level of self-deception (a reporter has a distorted self-image) and (2) the level of other-deception, i.e., deliberate deception of others, or so-called impression management <cit.>. Both levels are related to typical motivational patterns <cit.>. If we dare to speculate and project these psychological constructs onto the ChatGPT processes, we could attribute the bias to (1) its pro-social naivety, i.e., ChatGPT being unintentionally mistaken, or to (2) strategic pandering to the user. From a technical perspective, this positivity bias aligns with the inherent design of language models to favor positive or neutral content over negative content in their responses, particularly when they are fine-tuned by means of human feedback <cit.>. Furthermore, a similar trend is found in human-to-human interaction with the so-called friendship bias <cit.>, i.e., a situation where humans underestimate the undesirable characteristics of others as a manifestation or confirmation of the positivity of their relationship <cit.>.

Prompt and text dependency: ChatGPT's performance is sensitive to the formulation of the prompt. Including the descriptions of the personality dimensions in the prompt (DTL and DLT variants) enhanced ChatGPT's accuracy, which suggests that providing explicit context within the prompt can guide the LLM towards more accurate evaluations. Specifying the task at the end of the prompt (LT and DLT variants) improves ChatGPT's performance in terms of the score spectra, hit rate, and F1 score. This can be attributed to the attention mechanism used in transformer-based LLMs <cit.>, which could prioritize information toward the end of the prompt. In addition, we identified a dependency on the type of letter for the assessment of different personality traits: statistically significant correlations between the self-assessments and ChatGPT's assessments were found for apology letters in the case of extraversion, for complaint letters in the case of agreeableness, and for vacation letters in the case of conscientiousness. There were no significant correlations in the neuroticism and openness dimensions, which suggests that these two personality traits are more difficult to infer from text. Previous work has also reported an interaction between personality traits and types of information in zero-acquaintance settings, such that different personality traits may be accurately judged from different types of information <cit.>. Openness has been reported to be related to what people tell about themselves <cit.>. Accuracy in agreeableness judgments has seldom been found in the literature <cit.>, and scholars have claimed that it is not possible to accurately infer neuroticism from text when no self-related content is provided <cit.>, which is aligned with our findings. However, while previous work has argued that extraversion and conscientiousness are not revealed in the style of linguistic expression <cit.>, we find that they manifest themselves in some types of text, such as apology (extraversion) and vacation (conscientiousness) letters.

Variability in performance: The variability in ChatGPT's performance in inferring different personality traits illustrates the complexity of inferring nuanced human characteristics from text alone.
ChatGPT's success in assessing agreeableness and extraversion, for instance, contrasts with its difficulties in accurately evaluating neuroticism or openness, highlighting the challenge of capturing the full spectrum of human personality through automated linguistic analysis.

Ethical considerations: The interplay between machine learning and psychology, as demonstrated in this study, has significant implications for advancing our understanding of human behavior and cognition. The ability of ChatGPT to mirror human-like personality assessments from text opens new areas of research and applications in cognitive science. This integration offers insights into how language reflects underlying personality traits and psychological conditions. Moreover, it provides a framework for developing more personalized and adaptive AI systems that can better understand and interact with users based on their unique characteristics.

However, this capability also introduces the potential for manipulation. When AI systems understand and predict the personalities of their users, there is a risk of exploiting these insights for manipulative purposes, such as targeted advertising, political campaigning, or social engineering attacks. Furthermore, such a capability raises additional concerns about user privacy and the potential for misuse of personal data. Thus, it is imperative to establish ethical guidelines and robust safeguards, such as explicit user consent for personality analysis, strict privacy controls, transparency, and clear boundaries on how personality insights might be used. AI systems that interact with humans, such as chatbots, should always be designed from a human-centric perspective, with a focus on ethical personalization, prioritizing user well-being and autonomy over commercial or political gains. The collaboration between AI researchers and cognitive scientists is crucial in this context. It can lead to the development of AI systems that are not only technically advanced but, more importantly, aligned with and respectful of human values, unlocking the potential of AI to support and augment, not replace, humans.

§ CONCLUSION

This study explores ChatGPT's ability to assess personality traits from short texts in the Czech language. It offers valuable insights into the performance of general-purpose LLM-based chatbots for psychological profiling. ChatGPT demonstrates a promising capability to automatically infer several personality dimensions from certain types of text, particularly extraversion, agreeableness, and conscientiousness. However, we have also identified limitations in ChatGPT's performance, such as a positivity bias, a dependency on the formulation of the prompt, and varying accuracy levels across different personality traits and text types. Our results highlight the potential of AI in supporting psychological assessments and interventions, emphasizing the importance of incorporating multi-disciplinary perspectives from AI and cognitive science.

We underscore the need for cautious and responsible use of AI in personal and psychological assessments. The ethical implications related to privacy, consent, autonomy, and the potential for biases in automated personality analysis require further exploration and regulation. Ensuring transparency, safeguarding user data, preserving and honoring human autonomy, and mitigating biases are critical considerations as we integrate AI more deeply into personal and psychological domains.
At the same time, ChatGPT's capabilities in inferring personality from text open the path for personalized interactions and enhanced user experience. Such personalization could help chatbots adapt to the preferred communication style of their users and increase user trust. These aspects could play a key role in the application of AI systems in psychological counseling and delivering psychological care.

In sum, while ChatGPT represents a significant step forward in AI's ability to analyze and interpret human language, its application in psychology-related domains requires careful consideration and further research. The interconnection of machine learning and cognitive science presents a promising direction that needs to be explored with caution and a commitment to ethical principles.

§ ACKNOWLEDGEMENTS

E.D. and N.O. are supported by the Valencian Government (Convenio Singular signed with Generalitat Valenciana, Conselleria de Innovacion, Industria, Comercio y Turismo, Direccion General de Innovacion) and by Intel Corporation.

§ APPENDIX A: DESCRIPTIVES OF THE SCORE SPECTRA

Descriptives of the evaluation scores in the three spectra in relation to the self-assessment reference score S are shown in Table <ref> for extraversion, in Table <ref> for agreeableness, in Table <ref> for conscientiousness, in Table <ref> for neuroticism, and in Table <ref> for the openness dimension.
http://arxiv.org/abs/2312.16070v1
{ "authors": [ "Erik Derner", "Dalibor Kučera", "Nuria Oliver", "Jan Zahálka" ], "categories": [ "cs.CY", "cs.CL", "cs.HC" ], "primary_category": "cs.CY", "published": "20231226144304", "title": "Can ChatGPT Read Who You Are?" }
Physics, Faculty of Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
School of Mathematics, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Centre for Photonics and Quantum Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Physics, Faculty of Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Centre for Photonics and Quantum Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
School of Mathematics, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Centre for Photonics and Quantum Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Physics, Faculty of Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK
Centre for Photonics and Quantum Science, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, UK

Symmetry-breaking quantum phase transitions lead to the production of topological defects or domain walls in a wide range of physical systems. In second-order transitions, these exhibit universal scaling laws described by the Kibble-Zurek mechanism, but for first-order transitions a similarly universal approach is still lacking. Here we propose a spinor Bose-Einstein condensate as a testbed system where critical scaling behavior in a first-order quantum phase transition can be understood from generic properties. We generalize the Kibble-Zurek mechanism to determine the critical exponents for: (1) the onset of the decay of the metastable state on short time scales, and (2) the number of resulting phase-separated ferromagnetic domains at longer times, as a one-dimensional spin-1 condensate is ramped across a first-order quantum phase transition. The predictions are in excellent agreement with mean-field numerical simulations and provide a paradigm for studying the decay of metastable states in experimentally accessible systems.

Dynamics of a Nonequilibrium Discontinuous Quantum Phase Transition in a Spinor Bose-Einstein Condensate

Magnus O. Borgh
==========================================================================================================

§ INTRODUCTION

Classical and quantum nonequilibrium phase transitions arise in many areas of physics, ranging from cosmology <cit.> to condensed matter <cit.> and ultracold atomic gases <cit.>. For a second-order (continuous) phase transition, a correlation length and a time scale can be identified that characterise the coherence and dynamical response of the system. As the critical point is approached, both of these exhibit power-law divergences described by critical exponents <cit.>. In nonequilibrium phase transitions, this implies that close to the critical point the system is no longer able to adiabatically follow the ground state <cit.>. Causally disconnected regions then choose the new broken-symmetry state independently. This results in the formation of topological defects or domain boundaries at a density related to the quench rate of the control parameter. The Kibble-Zurek mechanism (KZM) provides a theoretical framework that can predict the density of these defects or domain walls for a finite quench rate from universal properties of the continuous phase transition. First introduced by Kibble in the context of cosmology as a mechanism for the formation of cosmic strings in the early universe <cit.>, it was subsequently extended by Zurek to condensed matter systems <cit.>.
It has since been successfully verified in many settings, including thermally driven transitions <cit.> and quantum phase transitions (QPTs) <cit.>, and has been demonstrated to apply to quantum-annealing implementations of quantum computation <cit.>. Recently, there has been increasing interest in studying first-order QPTs <cit.>, where metastability plays a crucial role, including in cold-atom systems <cit.>. A classical example of such metastability is supercooled water, which remains liquid below the freezing point. For first-order quantum phase transitions, such `supercooling'-like behaviour can lead to a zero-temperature false vacuum. This state plays an important role in particle physics and cosmology <cit.>, but an understanding of how the metastable state decays is hampered by the lack of a general theoretical approach dealing with first-order quantum phase transitions. Here we propose an experimentally accessible nonequilibrium first-order QPT where the decay of such metastable states can be understood through the KZM. In particular, we study the persistence of the metastable state following a finite quench of the quadratic Zeeman shift in a spin-1 Bose-Einstein condensate (BEC) with ferromagnetic (FM) interactions. We demonstrate that the onset of decay of the metastable state representing the false vacuum agrees with the critical scaling law predicted by a generalisation of the KZM to our first-order QPT. A key feature of this phase transition in a ferromagnetic spinor BEC is the formation of phase-separated domains at long times past the transition point. Therefore, aside from the short-time behaviour characterising the decay of the metastable state, we also apply the KZM to determine the scaling of the number of phase-separated domains at later times. In contrast to the short-time behaviour, the KZM accurately predicts the scaling of the number of domains for intermediate quench rates, whereas for very slow quenches a scaling in agreement with Landau-Zener <cit.> predictions emerges. Focusing on an ultracold atomic system has the advantage that the QPT is easy to control in an isolated system. Atomic BECs in particular are pristine systems and offer a highly controllable platform where the strength of inter-atomic interactions and the confining trapping potentials can be tuned. Consequently, they are already popular example systems for phase-transition experiments <cit.>, as well as for nonequilibrium physics, even in low dimensions, ranging from relaxation dynamics <cit.> to quantum quenches <cit.>. Unlike in scalar BECs, the spin degrees of freedom are not frozen out in spinor BECs. These additional degrees of freedom give rise to a non-trivial phase diagram even at zero temperature <cit.> and a correspondingly rich array of topological defects and textures <cit.>. For these reasons, studying nonequilibrium QPTs with spinor BECs has attracted much attention <cit.>. In contrast to previous studies on QPTs, the first-order QPT between the broken-axisymmetry (BA) phase and a ferromagnetic (FM) phase in a spin-1 BEC with FM interactions corresponds to a discontinuous quantum critical point (DQCP) <cit.>, the quantum analogue of the classical discontinuous critical point <cit.>. As it does not meet the general criteria of applicability of the standard KZ theory, the KZM has not hitherto been studied in this context. We consider, in particular, a one-dimensional (1D) spin-1 BEC with FM interactions in a ring-trap geometry.
By quenching the quadratic Zeeman shift, the system can transition from the BA phase into a phase-separated FM phase where domains of atoms with opposite condensate-spin projection form <cit.>. Unlike the single-mode scenario in an antiferromagnetic condensate <cit.>, where there is no domain formation, here phase separation is a consequence of the FM interactions under conservation of longitudinal magnetisation in a spatially extended BEC. Its 1D nature, however, has the advantage that once the domains form, they are frozen in and cannot undergo any coarsening dynamics <cit.>.

§ RESULTS AND DISCUSSION

As our example system, we consider a spin-1 BEC described by the mean-field condensate spinor wave function Ψ = (ψ_1, ψ_0, ψ_-1)^T. The Hamiltonian density then reads <cit.>

H = H_0 + c_0/2 n^2 + c_1/2 n^2 |⟨F̂⟩|^2 - pn⟨F̂_z⟩ + qn⟨F̂_z^2⟩,

where H_0=(ħ^2/2M)|∇Ψ|^2 + nV(z) for atomic mass M and external trapping potential V(z). Here, n=∑_m ψ_m^*ψ_m is the total atomic density. The condensate spin operator F̂≡ (F̂_x, F̂_y, F̂_z) is the vector of spin-1 Pauli-type matrices, such that ⟨F̂_μ⟩=(1/n)∑_mm' ψ_m^*(F̂_μ)_mm' ψ_m' for μ = x,y,z. Conservation of angular momentum in s-wave scattering means that the longitudinal magnetisation M_z = ∫⟨F̂_z⟩ dz is conserved on experimental time scales. The spin-independent and -dependent interaction strengths arise from the s-wave scattering lengths a_ℱ in the spin-ℱ channels of colliding spin-1 atoms as c_0=4πħ^2(a_0+2a_2)/3M and c_1=4πħ^2(a_2-a_0)/3M, respectively. Linear and quadratic Zeeman shifts of strengths p and q, respectively, may arise from an applied magnetic field along the z-direction, or, in the latter case, be induced by an AC Stark shift <cit.>, which gives precise control over both strength and sign. Due to the conservation of M_z, a uniform linear Zeeman shift only causes precession of the spin, under which the Hamiltonian is invariant. We therefore only consider effects from the quadratic Zeeman shift.

We consider atoms with c_1<0, such as ^87Rb <cit.>, which provides an interesting phase diagram arising from the competition between the third and last terms in Eq. (<ref>). A three-component BA phase with zero longitudinal magnetisation occurs for 0 < Q = q/(|c_1|n_0) < 2 <cit.>, where n_0 is the background density in a uniform system:

ψ_± 1 = √(2n_0)/4 √(2-Q), ψ_0 = √(n_0)/2 √(2 + Q).

In addition, a FM state occurs for Q<0: Ψ = (√(n_0), 0, 0)^T or Ψ = (0, 0, √(n_0))^T. Since M_z is conserved, a BA initial condition with M_z=0 implies that the FM phase results in the formation of phase-separated domains with opposite spin projection as Q is ramped across the phase transition. The associated instability that leads to the emergence of phase-separated domains when c_1<0 is not captured in the single-mode approximation <cit.>.

Moreover, the transition point between the BA and FM phases at (q_c,p_c)=(0,0) is a DQCP. Such a discontinuous phase transition satisfies five conditions <cit.>, which permit scaling arguments to be applied. A critical point at q=q_c and p=p_c, where p functions as a symmetry-breaking field, is a DQCP of this kind if (1) the energy density ϵ(q,p) across the transition is continuous, ϵ(q_c^+, p_c) - ϵ(q_c^-, p_c) = 0, but (2) its derivative is discontinuous, ∂ϵ(q_c^+, p_c)/∂ q - ∂ϵ(q_c^-, p_c)/∂ q ≠ 0. Here, q_c^+ and q_c^- correspond to approaching q_c from above or below, respectively.
Further, (3) the order parameter m=-∂ϵ(q, p)/∂ p must exhibit a discontinuous jump with respect to q as the critical point is crossed, |m(q_c^- , p_c)| > |m(q_c^+, p_c)| = 0, and additionally, (4) the order parameter is also discontinuous with respect to p: |m(q_c,p_c^±)| > 0. Finally,we require (5) that the derivative of the energy be bounded as the critical point is approached: |∂ϵ(q_c^±,p_c)/∂ q| < ∞.The energy densities per particle for the BA and FM states are, respectively <cit.>, ϵ_BA = (-p^2+q^2+2qc_1 n_0)^2/8c_1 n_0 q^2 + 1/2c_0 n_0,ϵ_FM_± = ∓ p + q + 1/2 n_0(c_0 + c_1), where the subscript +(-) denotes the FM phase with spin pointing up (down). These energies are continuous at the critical point (q_c,p_c)=(0,0). The derivatives, however, are discontinuous, but remain bounded. Meanwhile, the relevant order parameter for the BA and FM states is m_BA=p(p^2-q^2-2qc_1n_0)/8c_1 n_0 q and m_FM_±=± 1, respectively, which is precisely the local magnetisation F_z=|ψ_1|^2-|ψ_-1|^2 in both phases <cit.>. For p_c=0 this order parameter is zero in the BA phase and becomes non-zero in the FM phase. A non-zero p, however, causes the order parameter to be non-zero in both phases. This means that the BA to FM transition satisfies all the conditions for a DQCP.Next, we recall the key arguments in the KZM.A continuous second-order phase transition can be characterised by the divergence of an instantaneous correlation length ξ∼ |q(t)-q_c|^-ν and an instantaneous relaxation time τ∼ [ξ(t)]^z, where ν and z are the correlation-length and dynamical critical exponents, respectively. In the standard QPT scenario, the relaxation time is set by the inverse of the energy gap Δ between the ground state and the first excited state of a gapped mode <cit.>: τ≃Δ^-1. A system initially prepared in the ground state follows this state adiabatically as long as the relaxation time remains small. However, as the critical point is approached,theenergy gap vanishes and the relaxation time diverges, which leads to the breakdown of the adiabatic regime. This divides the dynamics into three stages: adiabatic, frozen, and adiabatic again as the system crosses the critical point. Assuming a quench of the form |q(t)-q_c| ∼ |t/τ_Q|,where τ_Q^-1 is the quench rate, the freezing time is estimated to be | t | = τ(t ). This leads to a scaling for the freezing time given by| t | ∼τ_Q^zν/(zν + 1).It follows from this and the scaling of the correlation length that at the freezing time,ξ∼ξ( t ) ∼τ_Q^ν/(zν + 1). This provides an estimate for the density of defects or domains: N_d ∼ξ^-d∼τ_Q^-dν/(zν + 1).Of the three Bogoliubov modes, E_k, f_z and E_k, ±, for the BA phase <cit.>(see also Appendix), only E_k, + is gapped in the long-wavelength limit. This mode determines the scaling associated with the KZM for the transition from the polar to the BA phase <cit.>. By contrast, the relevant mode for the BA to FM transition isE_k, f_z(k) = √(ϵ_k(ϵ_k + q)),where ϵ_k = ħ^2k^2/2M is the kinetic energy. This spectrum is gapless in the long-wavelength limit. The imaginary part of E_k, f_z together with that of E_k, + is shown in Fig. <ref> and clearly illustrates that an instability can occur at Q=0 only for modes with k0. These unstable modes are responsible for the formation of phase separated domains in the FM phase.To derive a KZ scaling when the relevant mode is gapless, consider the more general spectrum, E_k^2 ∼ |q(t)-q_c|^αϵ_k^η+ϵ_k^2z, of which Eq. (<ref>) is a special case. 
To find scaling solutions consistent with the KZM where E_k ∼ |q(t)-q_c|^z, we make the ansatz k ∼ |q(t)-q_c|^ν (corresponding to k∼ξ^-1) and equate the two terms in the dispersion relation to derive the condition α = ν(2z - η). For our gapless mode, the adiabatic-impulse approximation states that the impulse regime begins when E_k^2 = Ė_k. This yields| t | ∼τ_Q^α/(2+α)k^-η/(2+α)∼τ_Q^ν z/(1+ν z) ,for the freezing time upon using the above scaling assumed for k. This immediately implies the characteristic momentum scale k∼τ_Q^-ν/(zν + 1) from which we extract the defect density where d is the dimensionality of the system (see also Appendix). Hence,the scaling relations of the KZM for a gapped mode is recovered for a gapless spectrum.Specifically for our 1D system (d=1 and q_c = 0), Eq. (<ref>) implies α=1, η=2 and z=2. This is equivalent to setting z=2 and ν=1/2corresponding to a defect-density scalingN_d ∼τ_Q^-1/4.Therefore, despite originating in the same model Hamiltonian,this scaling is clearly different from that reported for the KZM in continuous phase transitions through a QCP in spinor BECs <cit.>. Our results indicate a new scaling regime that is associated with the DQCP of this system.To check our prediction,we numerically evolve the time-dependent spin-1 Gross-Pitaevskii equations (GPEs) obtained from Eq. (<ref>) using a symplectic algorithm <cit.> (see Appendix).Typical results for τ_Q=1000 are shown in Fig. <ref>. We see the clear formation of FM domains after crossing the critical point at t=0.Figure <ref> shows the normalised atom number for the ψ_0 component as the system is quenched for various τ_Q. Initially, the system tracks the BA-phase ground state <cit.>, Eq. (<ref>). After passing the critical point, the system is no longer able to adiabatically track the true ground state. Rather, it evolves ina metastable BA state, even for t/τ_Q>0, until it emerges from the impulse regime at a time clearly dependent on the quench rate. At this point, the metastable state decays with an associated abrupt drop in the density of the ψ_0 component, signalling a discontinuous phase transition to the FM phase.This coincides with an increase in the ψ_± 1 components, where the FM domains start to form (Fig. <ref> inset).To determine the freezing time as well as the short time scaling behaviour, we introduce â_k, ± 1, defined as the Fourier transforms of the ψ_± 1 components, respectively. Since the transition to the FM phase results in phase-separated domains, driven by an instability associated with a Bogoliubov mode related to â_k, f_z = (â_k, 1 - â_k, -1)/√(2) (see Appendix), we extract the critical time t such that |â_k, f_z(t )| = 0.01 ×max{|â_k, f_z(t)| : t}. The critical Zeeman valueis defined as Q_a = |Q(t )|. The inset in Fig. <ref> shows the typical growth of |â_k, f_z|, which demonstrates that it remains zero until the system passes the critical point. Thereafter, it undergoes growth with strong oscillations. The choice of 0.01 when extracting t is arbitrary, but we find qualitatively similar results in tests with values up to 0.1.Figure <ref> reveals a clear Q_a∼τ_Q^-1/2 power law for a large range of τ_Q. Despite the discontinuous nature of the phase transition,scaling behaviour is still observed <cit.>. However, this scaling is different from that observed in numerical and experimental results concerning the continuous phase transitions in spin-1 BECs <cit.>.The observed scaling is consistent with the Kibble-Zurek scaling presented above for our system. 
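In practice, the extraction of the critical time and of Q_a from the simulation output reduces to a threshold search on the recorded fluctuation amplitude, following the 1% criterion stated above. A minimal sketch of this procedure is given below; the synthetic time series and all variable names are illustrative only and do not reproduce our actual simulation data.

```python
import numpy as np

def extract_Qa(t, a_fz, tau_Q, threshold=0.01):
    """Return the critical time t* at which |a_{k,f_z}(t)| first reaches
    `threshold` times its global maximum, and Q_a = |Q(t*)| = |t*/tau_Q|."""
    amp = np.abs(a_fz)
    idx = np.argmax(amp >= threshold * amp.max())  # index of first crossing
    t_star = t[idx]
    return t_star, abs(t_star) / tau_Q

# Illustrative usage: delayed exponential growth mimicking the onset of the
# instability after the critical point is crossed at t = 0.
tau_Q = 1000.0
t = np.linspace(-100.0, 500.0, 6000)
a_fz = np.where(t > 40.0, 1e-6 * np.exp(0.05 * (t - 40.0)), 0.0)
t_star, Q_a = extract_Qa(t, a_fz, tau_Q)
```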
Taking ν = 1/2 and z=2,and using Eq. (<ref>), we obtain t∼τ_Q^1/2.Combining with the relation Q_a=|t/τ_Q|, we also recover the Q_a ∼τ_Q^-1/2 scaling seen in our simulations. To reinforce our conclusions, we recover the scaling laws by directly linearising the GP equations around the critical point. In this case, the temporal dependence of the quadratic Zeeman terms is treated directly.Following <cit.>, we begin with a wave function close to the BA ground state, Ψ^T = (ψ_1 + δψ_1(t), ψ_0 + δψ_0(t), ψ_-1 + δψ_-1(t))exp(-iμ t) where ψ_± 1, ψ_0 are defined in Eq. (<ref>) with Q=Q_0. Here, μ=c_0+c_1(2 - Q_0)/2 is the chemical potential, 0 ≤ Q_0 ≤ 2 is a constant, and |δψ_m(t)| ≪ 1. The noise terms have to satisfy ∫∑_m δψ_m + δψ_m^* z = 0 to ensure the proper normalisation of the wave function and ∫ (δψ_1 + δψ_1^* + δψ_-1 + δψ_-1^*)z = 0 to enforce conservation of magnetisation.Linearizing the spin-1 GPEs about the state corresponding to Q=Q_0 (see Appendix),we obtaini ħtG_y = [-ħ^2/2M[2]z - c_1n_0 (Q - Q_0/2)]G_y - c_1n_0 Q/2G_y^*,where G_y=δψ_1-δψ_-1. Next, we transform to momentum space and split G_y into real and imaginary parts,where a_y=∫Re(G_y)e^ikz dz and b_y=∫Im(G_y)e^ikz dz. We then solve for Q=-t/τ_Q, by deriving the equation for d^2a_y/dt^2 across a DQCP.Rescaling time as t→ tλ with λ = √(τ_sτ_Q), we arrive at[2]a_yt = -1/( 2κ^2 - t)a_yt-1/4( κ^4 - 2κ^2 t + 3 t^2/4) a_y,where κ^2 = ξ_s^2 k^2 √(τ_Q/τ_s). This scaling ensures that the last term is independent of τ_Q. The remaining dependence on τ_Q is eliminated if we require that κ is constant, which implies k ∼τ_Q^-1/4.Only under these conditions can we expect scaling solutions, and they are also consistent with the scaling of the correlation length and the dynamical exponents derived earlier based on the KZM. The number of domains formed in the transition is a measurable quantity,which has been investigated in other works on the KZM <cit.>. We numerically count the number of density peaks at the end of the simulation (Fig. <ref>).Fig. <ref> shows the total number of FM domains for a broad range of τ_Q. For sufficiently fast quenches (τ_Q < 1000),a clear power-law scaling N_d∼τ_Q^-1/4 emerges, which agreeswell with Eq. (<ref>) as well as the scaling obtained with Eq. (<ref>).As with t, the scaling for the DQCP is again different from that in analogous transitions across a continuous critical point <cit.>.Unlike Q_a, for slow quenches (τ_Q > 1000), there is a clear deviation from the predicted KZ scaling. Such differences in the scaling of observables measured at muchlater times from the critical point have also been observed in <cit.>. In our context, this deviation can be attributed to a cross-over into a different regime that follows predictions of a Landau-Zener model <cit.>.Finally, we tested the robustness of the scaling by considering a quench that crosses two phase transitions. Starting from the polar state with Q=2.5, we simulate a quench through the BA and then the FM states. Results are qualitatively similar (Fig. <ref> inset), with a τ_Q^-1/4 scaling again emerging for the same range of τ_Q, and with similar deviation for slow quenches.In conclusion, we have shown that the KZM can be generalised to this discontinuous phase transition, leading to scaling laws that differ from those observed for phase transitions across continuous quantum critical points for the same spin-1 BEC model. 
We find excellent agreement with numerical simulation for both the short-time growth of the unstable excitations and the subsequent number of domains formed on longer time scales. Our results hold for experimentally accessible parameter regimes allowing these extensions of the KZM to be realized in current experiments on spinor BECs, which therefore emerge as prime candidates for testbed systems for investigating critical scaling in first-order quantum phase transitions, including as laboratory emulators for understanding false-vacuum decay <cit.>. The numerical simulations were carried out on the High Performance Computing Cluster supported by the Research and Specialist Computing Support service at the University of East Anglia.MOB acknowledges support from EPSRC under grant number EP/V03832X/1. *§ §.§ Numerical Simulation.We measure length and time in units of the spin healing length ξ_s=ħ/√(2M|c_1|n_0) and the spin time τ_s = ħ/2|c_1|n_0, respectively.Our simulations are performed on a 1D grid of N_x = 16384 points with a spacing of Δ_x=0.125ξ_s, considering a ring-shaped geometry by assuming periodic boundary conditions and V(z) = 0. We start from Eq. (<ref>), adding small noise terms, δψ_m, to each component, where |δψ_m| ≪ 1. The real and imaginary parts of δψ_m are drawn from the probability distribution p(z) = exp(-z^2/2σ^2)/(√(2π)σ),with σ=10^-4 to remain close to the BA ground state. We vary the quadratic Zeeman shift as Q(t) = -t/τ_Q for a range of quench times τ_Q, starting at Q=1 and ending the simulation at Q=-2.5. §.§ Bogoliubov modes for the broken-axisymmetry phase.The broken-axisymmetry (BA) phase of a spin-1 BEC exhibits three Bogoliubov modes <cit.>. Here we rederive each mode explicitly from the relevant Bogoliubov transformations and explain why E_k, f_z is the relevant mode for the BA-to-ferromagnetic (FM) transition.The broken-axisymmetry phase can be parameterised asζ^BA = (sinθ/√(2), cosθ, sinθ/√(2)),where sinθ = √(1/2+q/(4nc_1)). The fluctuation operators for this state are then defined as <cit.>:â_k, d = sinθ/√(2)(â_k, 1 + â_k, -1) + cosθâ_k, 0,â_k, f_z = 1/√(2)(â_k, 1 - â_k, -1),â_k, θ = cosθ/√(2)(â_k, 1 + â_k, -1) - sinθâ_k, 0,where on the right-hand side â_k, m is the annihilation operator for a spin-1 boson in magnetic level m (for m=-1,0,+1), determined by expanding the wave-function field operator asψ̂_m(x) = 1/√(V)∑_kâ_k, me^ik·x,where V is the volume of the system.The sub-Hamiltonian for the spin fluctuation mode â_k, f_z can be diagonalized using the transformationb̂_k, f_z = √(ϵ_k + q/2 + E_k, f_z/2E_k, f_z)â_k, f_z + √(ϵ_k + q/2 - E_k, f_z/2E_k, f_z)â_-k, f_z^†,where ϵ_k = ħ^2|k|^2/2M is the kinetic energy and the Bogoliubov spectrum is given byE_k, f_z = √(ϵ_k(ϵ_k + q)).The sub-Hamiltonians for the density fluctuation mode â_k, d and the θ mode â_k, θ can be similarly diagonalized using operators b̂_k, + and b̂_k, +,which yields the remaining two Bogoliubov modes <cit.>: E_k, ±= √(ϵ_k^2 + n(c_0-c_1)ϵ_k + 2(nc_1)^2(1 - q̃^2) ± E_1(k)),where q̃ = -q/2c_1n andE_1(k) = √([n(c_0 + 3c_1)ϵ_k + 2(c_1n)^2(1-q̃^2)]^2 - 4c_1(c_0+2c_1)(nq̃ϵ_k)^2). The final, diagonalized Hamiltonian then readsĤ^BA =E_0^BA + ∑_k≠ 0[E_k, f_zb̂^†_k, f_zb̂_k, f_z+ E_k, -b̂^†_k, -b̂_k, - + E_k, +b̂^†_k, +b̂_k, +],where E_0^BA is the ground state energy for the BA phase, which is explicitly derived in Ref. <cit.>.We now consider our 1D system. 
In the long-wavelength limit, k → 0, the only non-zero (gapped) mode is E_k, + = √(4(c_1n)^2(1-q̃^2)) which has the form E_k, +∼√(q_c^'^2 - q^2) with q_c^'=2c_1n. The relevant mode of the BA to FM transition is found from the imaginary parts of the Bogoliubov energies (Fig <ref>). For |Q| > 2, where Q ≡ q / (|c_1|n), E_k, + has a positive imaginary part, indicating instability. The critical point Q = 2 (q=q_c^') corresponds to the second-order transition between the polar and BA phases, for which E_k, + therefore is the corresponding Bogoliubov energy.However,for Q < 0 the imaginary part of E_k,f_z mode becomes non-zero and positive, and thus unstable. This corresponds precisely tothe transition from the BA to the FM phase at Q=0 that we are interested in here, and therefore E_k, f_z is the relevant mode to study. Note that the E_k, f_z mode does not give rise to instability at k = 0. Therefore, studies focusing on this mode at k=0 only do not capture the phase transition that occurs at Q=0 <cit.>. In contrast, the k=0 mode corresponds to the most unstable mode for E_k, +, and thus it suffices to choose this Bogoliubov energy to capture the phase transition that occurs at Q=2. In practice, the Q=-2 transition is not realized since the instability of E_k,f_z at any k0 will typically arise at Q=0 when Q is quenched from positive to negative values.§.§ Scaling of Density of Defects from the Kibble-Zurek Mechanism.Given the form of our dispersion relation together with the scaling relation for the relaxation time provided by the KZM, which assumes τ∼ξ^z, we will consider a dispersion relation of the general form <cit.>ω^2 ∼ |q-q_c|^αk^η + k^2zfor a critical point at q=q_c. For a scaling solution to arise that is consistent with the scaling of the relaxation time, we require ω∼ k^z with k∼ξ^-1∼ |q-q_c|^ν. Combining these relations, Eq. (<ref>) implies thatα = ν (2z-η) .In our system q_c = 0 and q = -t/τ_Q. Therefore, Eq. (<ref>) simplifies toω^2 ∼|t|^α/τ_Q^αk^η . Now let us consider the adiabatic-impulse approximation for a gapped spectrum, with an energy gap given by Δ(t). In this case, τ∼ 1/Δ and so far from the critical point where Δ is large, the relaxation time is small and the evolution is adiabatic, meaning that the system is able to adjust quickly enough and can therefore track the true ground state of the system. As the critical point is approached, Δ vanishes and the relaxation time diverges. At some instant t, the reaction time becomes comparable to the transition time, Δ/Δ̇ and the system is no longer able to adiabatically track the evolution. This point is the onset of the impulse regime, where the state becomes frozen. The freezing time is therefore often evaluated from the condition1/Δ(t)∼Δ(t)/Δ̇(t) . For a gapless spectrum, which is the case for the BA-to-FM transition,we work with the form of the dispersion relation as given by Eq. (<ref>) and determine the onset of the impulse regime with the condition ω^2 = ω̇.This yieldsα/2τ_Q^α/2|t|^(α/2 - 1)k^η/2∼|t|^α/τ_Q^αk^η,which leads us directly to the scaling relation for the freezing time|t| ∼τ_Q^α/(2+α)k^-η/(2+α).To obtain the characteristic momentum scale, we substitute Eq. (<ref>) back into Eq. (<ref>), which yieldsk∼τ_Q^α/(η-(2+α)z).§.§ Extracting the freezing timeIn order to extract the freezing time t̃ of the system, an appropriate quantity must be chosen. 
Since the transition to the FM phase causes the formation of phase-separated domains, a natural choice is to measure fluctuations in the difference of the populations of ψ_± 1 <cit.>. To do this, we first construct the Fourier transforms of the ψ_± 1 components as â_± 1(k) = ∫ψ_± 1 e^-2π ik ·xdx. After passing through the critical point into the FM phase, the difference â_k, f_z(k) = â_1(k) - â_-1(k) / √(2) generates measurable fluctuations as FM domains with opposite spin start to form [see inset of Fig. <ref>(a)], while before the transition it remains zero (due to the absence of domains). To measure the freezing time, we extract the time at which |â_k, f_z(k)| exceeds some appropriately chosen value. In our numerical simulations, we take this value to be 1% of the maximum value of |â_k, f_z(k)| over the entire simulation.An alternative choice is the population of the ψ_0 component. Here, instead of measuring the growth of a quantity, we now extract the freezing time as the time required for the ψ_0 component to deviate from its analytically calculated value in the (metastable) BA phase ψ_0 = √(2+Q) / 2 (Fig. <ref>). In particular, we choose the freezing time to be the time at which the deviation reaches 1% of the analytically predicted value, yielding Fig. <ref>(b). We see that, despite using an entirely different quantity to measure the freezing time, the resulting scaling of Q_a is the same as when calculated from |â_k,f_z(k)|. §.§ Deriving scaling near the critical point.The spin-1 Gross-Pitaevskii equations (GPEs) are given as <cit.>:iħΨt = [ -ħ^2∇^2/2M -pF̂_̂ẑ + qF̂_z^2 +c_0n +c_1n⟨F̂⟩·F̂]Ψ.Recall that we start from a BA phase of the form Ψ^T = (ψ_1 + δψ_1(t), ψ_0 + δψ_0(t), ψ_-1 + δψ_-1(t))exp(-iμ t). Substituting this expression into the GPEs and keeping leading order terms in δψ_m yields the following equations for δψ_± 1 (p=0) iħδψ_1t =[-ħ^2/2M[2]z+q-μ + n_0(10c_0+6c_1-(c_0-c_2)Q)/8]δψ_1 + n_0√(2(4-Q^2))/8[ (c_0 + 3c_1)δψ_0 + (c_0+c_1)δψ_0^* ] + n_0(2-Q)/8[(c_0-c_1)δψ_-1 + (c_0+c_1)δψ_1^*] + n_0/8[(2-Q)c_0 + (2+3Q)c_1]δψ_-1^*, iħδψ_-1t =[-ħ^2/2M[2]z+q-μ + n_0(10c_0+6c_1-(c_0-c_2)Q)/8]δψ_-1 + n_0√(2(4-Q^2))/8[ (c_0 + 3c_1)δψ_0 + (c_0+c_1)δψ_0^* ] + n_0(2-Q)/8[(c_0-c_1)δψ_1 + (c_0+c_1)δψ_-1^*] + n_0/8[(2-Q)c_0 + (2+3Q)c_1]δψ_1^*. Subtracting Eq. (<ref>) from Eq. (<ref>) results in the differential equation for G_y = δψ_1 - δψ_-1:iħG_yt= [ -ħ^2/2M[2]z+q-μ+n_0(c_0+c_1)]G_y -c_1n_0Q/2G_y^*.Additionally, to calculate the chemical potential, we take the ψ_0 component of Eq. (<ref>) keeping lead order terms and assuming a stationary state, which leads to μ = c_0n_0 + c_1n_0(2-Q_0)/2 where Q_0 is a constant. Substituting this expression and q(t)=-c_1n_0Q(t) into Eq. (<ref>) yieldsiħG_yt = [ -ħ^2/2M[2]z -c_1n_0(Q-Q_0/2)]G_y - c_1n_0Q/2G_y^*. To progress, we split G_y into real and imaginary parts and take the Fourier transform: a_y=∫Re(G_y)e^ikz dz and b_y=∫Im(G_y)e^ikz dz. Substituting into Eq. (<ref>) yields thematrix equationt[a_y b_y] = (0ħ k^2/2M - c_1n_0/2ħ(Q - Q_0)c_1n_0/2ħ( 3Q-Q_0 )-ħ k^2/2M0)[a_y b_y].To solve the above equation, we construct the equation for [2]a_yt and take Q_0=0, which yields[2]a_yt = c_1n_0/2ħτ_Qb_y + (ħ^2k^2/2M - c_1n_0Q/2ħ)b_yt.Expressions for b_y and db_y/dt are found from Eq. (<ref>). 
Substituting these in yields the following equation for d^2a_y/dt^2:

d^2a_y/dt^2 = 1/[τ_Q (ħ^2k^2/(Mc_1n_0) - Q)] da_y/dt - (ħ^2k^4/(4M^2) - k^2c_1n_0Q/M + 3c_1^2n_0^2Q^2/(4ħ^2)) a_y.

To simplify the above expression, we use the spin healing length ξ_s = ħ/√(2M|c_1|n_0) and the spin time τ_s = ħ/(2|c_1|n_0):

d^2a_y/dt^2 = -1/(2ξ_s^2k^2τ_Q - t) da_y/dt - (ξ_s^4k^4/(4τ_s^2) - ξ_s^2k^2 t/(2τ_s^2τ_Q) + 3t^2/(16τ_s^2τ_Q^2)) a_y.

Rescaling time as t → tλ with λ = √(τ_sτ_Q) leads to the differential equation

d^2a_y/dt^2 = -1/(2κ^2 - t) da_y/dt - (1/4)(κ^4 - 2κ^2 t + 3t^2/4) a_y,

where κ^2 = ξ_s^2 k^2 √(τ_Q/τ_s).
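The rescaled equation is also convenient for a direct numerical check: integrating it at fixed κ (stopping before the coefficient singularity at t = 2κ^2) produces solutions that are manifestly independent of τ_Q, confirming the collapse implied by k ∼ τ_Q^{-1/4}. A minimal sketch, with illustrative initial conditions, is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, kappa2):
    # y = (a_y, da_y/dt); rescaled linearised equation derived above
    a, ap = y
    app = (-ap / (2.0 * kappa2 - t)
           - 0.25 * (kappa2**2 - 2.0 * kappa2 * t + 0.75 * t**2) * a)
    return [ap, app]

kappa2 = 1.0   # kappa^2 = xi_s^2 k^2 sqrt(tau_Q/tau_s), held fixed
sol = solve_ivp(rhs, (-10.0, 1.5), [1e-4, 0.0], args=(kappa2,),
                rtol=1e-9, atol=1e-12, dense_output=True)
# sol.sol(t) evaluates a_y at any rescaled time t < 2*kappa2
```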
[email protected]
Universidade Federal do ABC, Centro de Ciências Naturais e Humanas, Avenida dos Estados 5001 - Bangú, CEP 09210-580, Santo André, SP, Brazil

[email protected]
CONICET, Godoy Cruz 2290, Buenos Aires, Argentina
Departamento de Física, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, (1429) Buenos Aires, Argentina

We present a significant extension of the quark mass density-dependent model (QMDDM), initially revised in our prior study <cit.>, where thermodynamic inconsistencies were addressed. Our current work enriches the QMDDM by incorporating excluded volume effects, as a step towards a more realistic representation of the quark matter equation of state (EOS) at zero temperature. We introduce the concept of "available volume" in the Helmholtz free energy formulation, accounting for the space excluded by each quasiparticle due to its finite size or repulsive interactions. We present a methodology to modify the EOS for point-like particles, allowing for a simple and direct incorporation of excluded volume effects. This is first addressed in a simple one-flavor model and then extended to a more realistic three-flavor system, incorporating both mass and volume dependencies on the baryon number density. We examine various ansatzes for the excluded volume, ultimately adopting one that aligns with the asymptotic freedom behavior of Quantum Chromodynamics (QCD). The EOS for electrically neutral systems in chemical equilibrium is computed, focusing on self-bound and hybrid matter scenarios. We show that the incorporation of excluded volume effects renders the EOS stiffer and that excluded volume effects are essential to align the mass-radius relation of self-bound and hybrid stars with modern astrophysical constraints.

Excluded volume effects in the quark-mass density-dependent model: implications for the equation of state and compact star structure

A. G. Grunfeld

January 14, 2024
======================================================================================================================================

§ INTRODUCTION

In recent years, the study of quark matter at extreme densities has attracted significant attention within the field of nuclear physics and astrophysics, primarily due to its implications in understanding the properties of neutron stars and heavy ion collisions. At the core of these investigations is the EOS, which provides a crucial link between microscopic theories of strong interactions and macroscopic observables. Unfortunately, first-principle calculations for deconfined quark matter are unavailable at present in the high-density, low-temperature conditions expected within the interiors of neutron stars (NS). Consequently, while perturbative QCD imposes certain indirect constraints on the EOS at NS densities <cit.>, most insights into quark matter in the NS regime rely on phenomenological models. These models, drawing inspiration from QCD, incorporate key properties of quarks, such as color confinement, asymptotic freedom, and chiral symmetry breaking/restoration, into the EOS. Within this context, the QMDDM has been a topic of discussion in the literature for decades (see e.g. <cit.> and references therein). In the baseline version of the model, the system is conceptualized as a noninteracting gas of quasiparticles, each characterized by a mass that varies depending on the baryon number density, n_B.
This approach hypothesizes that key features of the non-perturbative regime of QCD can be effectively encapsulated through these density-dependent variations in quark masses. One aspect of the model that has caused considerable debate in the literature concerns its thermodynamic consistency. Over the years, the model has undergone several reformulations, yet its thermodynamic consistency has never been satisfactorily achieved. However, in a recent article <cit.>, we have demonstrated that the QMDDM can be constructed with thermodynamic consistency when formulated within the canonical ensemble instead of the grand canonical ensemble.The model presented in Ref. <cit.> is a baseline formulation focusing solely on the variation of quark masses with the density. However, it is necessary to incorporate several other aspects for the model to realistically represent known features of cold and dense matter, such as its remarkably stiffness, as suggested by the observation of very high-mass compact stars<cit.>. This feature is related to the potential existence of repulsive interactions in quark matter that would render the EOS stiffer.One way to account for this is by considering the excluded volume of particles, akin to the Van der Waals equation of state <cit.>. The concept of excluded volume arises when particles in close proximity effectively “exclude" a certain volume around them due to their finite size or repulsive interactions with each other. This reduction in available space for other particles leads to deviations from the ideal behavior predicted by models assuming point-like particles.In this study, we introduce the effect of excluded volume into the QMDDM. To ensure thermodynamic consistency, we incorporate the excluded volume within the framework of the canonical ensemble. As our focus is on cold compact stellar objects, we proceed with our analysis at zero temperature (T=0). This paper is structured as follows: In Sec. <ref>, we develop a methodology for incorporating the effects of excluded volume into a zero-temperature EOS, initially formulated for point-like particles. We focus on a system consisting of a single particle species, with the key assumption that the excluded volume around each particle, is quantified by a function b(n), where n is the particle number density. We show how the standard equations for energy density, pressure, and chemical potential, applicable to point-like particles, can be adapted to include volume exclusion effects. For a quick reference, the summary of our findings in this context can be found in Sec. <ref>.In Sec. <ref>, we apply the insights from the previous section to reformulate the 1-flavor QMDDM within the canonical ensemble, incorporating excluded volume effects using various ansatzes for b = b(n). This section is primarily pedagogical, aimed at identifying which prescriptions for b(n) are physically viable.Sec. <ref> extends the formalism from Sec. <ref> to a generic EOS involving multiple particle species. The method for incorporating excluded volume effects into an EOS for a mixture of point-like particles is summarized in Sec. <ref>.In Sec. <ref>, we expand the 3-flavor QMDDM, as presented in Ref. <cit.>, to take into consideration excluded volume effects, assuming a generic formula for the excluded volume b = b(n_B).Finally, in Sec. <ref>, we present our numerical results using an ansatz that is consistentwith the asymptotic freedom behavior of QCD. 
We calculate the EOS under the assumption that matter is electrically charge neutral and in equilibrium under weak interactions, for two different choices of the EOS' parameters: one representing self-bound matter and the other hybrid matter. The structures of self-bound and hybrid stars are studied using these EOS.§ INCORPORATING EXCLUDED VOLUME EFFECTS IN A GENERIC 1-FLAVOR EOS In this section, we develop a methodology for incorporating the effects of excluded volume into a zero-temperature EOS, originally formulated for point-like particles. For simplicity, we consider a system comprising a single particle species. Our key assumption is that the excluded volume surrounding each particle, attributed to its finite size or repulsive interactions, is quantified by a function b(n), where n is the particle number density. We will show that the equations for energy density, pressure, and chemical potential, applicable to point-like particles, can be straightforwardly adapted to encompass the effect of volume exclusion. For those seeking a concise overview, we direct attention to the summary of our findings in Sec. <ref>. §.§ EOS for point-like particlesBelow, we provide a summary of some standard definitions and results of thermodynamics that will be useful throughout the text. We will examine a system of N particles within a volume V at absolute zero temperature.Let us assume that without excluded volume effects, the system can be described by a Helmholtz free energy function denoted as F_pl(V, N), where the subindex “pl” refers to point-like particles. As we are considering the case of T = 0, the internal energy is equivalent to F, and thus, the energy density is identicalto the Helmholtz free energy per unit volume:ϵ_pl = F_pl(V, N)/V .All other thermodynamic quantities of the system of point-like particles are derived directly from F_pl(V, N). The pressure is given by: p_pl (V, N) =-∂ F_pl(N,V)/∂ V|_N , and the chemical potential by: μ_pl(V, N) = ∂ F_pl(N, V)/∂ N|_V.Taking advantage of the homogeneity properties of thermodynamic functions, the aforementioned quantities can be expressed solely as a function of the particle density n ≡ N/V. On one hand, the Helmholtz free energy is a first-order homogeneous function of the extensive parameters. This means that if all extensive parameters of a system are scaled by a factor α, the free energy of the resulting system will be scaled by the same factor, according to <cit.>: α F(V, N)=F(α V, α N). By substituting α = V^-1 into Eq. (<ref>), the energy density can be rewritten as: ϵ_pl(n) = F_pl(V, N)/V=F_pl(V/V, N/V) = F_pl(1, n ). On the other hand, pressure and chemical potential are intensive quantities, meaning they are zero-order homogeneous functions, as expressed by the following relations <cit.>: p(α V, α N)=p(V, N),μ(α V, α N)= μ(V, N). Introducing α = V^-1 in Eqs. (<ref>) and (<ref>), we obtain: p_pl(n) = p_pl(1, NV) , μ_pl(n) = μ_pl(1, NV) . §.§ EOS with excluded volume effects To incorporate the effects of excluded volume, one substitutes the system's volume V with the “available volume” Ṽ, defined as: Ṽ=V-b(n) N, in the Helmholtz free energy F_pl(V, N) of point-like particles. In the above equation, b represents the volume that would be excluded by each particle if it were treated as a rigid sphere. 
To maintain generality, we allowed b to depend on the particle's number density n.Once we replace V with Ṽ, the resulting function F_pl(Ṽ, N) becomes the starting point for the thermodynamic description that incorporates excluded volume effects. §.§.§ Excluded volume effects in the energy densityAt T = 0, the energy density is equivalent to the Helmholtz free energy per unit volume: ϵ = F_pl(Ṽ, N)/V .By applying the homogeneity property from Eq. (<ref>) with α = Ṽ^-1 to Eq. (<ref>), the energy density can be reformulated as: ϵ = Ṽ/ṼF_pl(Ṽ, N)/V = Ṽ/V F_pl( Ṽ/Ṽ, N/Ṽ).We now introduce the available volume fraction q, defined as: q(n) ≡Ṽ/V = 1 - n b(n), which depends solely on the postulated ansatz for the excluded volume per particle, b(n). Note that q satisfies 0 < q < 1 since 0 < Ṽ < V. The ratio N/Ṽ in Eq. (<ref>) can be rewritten as: N/Ṽ = N/VV/Ṽ = n/q(n) . Thus, Eq. (<ref>) can be reformulated as: ϵ(n) = q(n) F_pl(1, n/q(n)), which is exclusively a function of n. In Eq. (<ref>), we recognize the energy density for point-like particles, as described by Eq. (<ref>), as a function of n/q. Consequently, we can express the energy density accounting for excluded volume effects in terms of ϵ_pl: ϵ(n) = q(n) ϵ_pl(n/q(n)).This equation provides a clear method to obtain the energy density with excluded volume corrections. The procedure involves initially expressing ϵ_pl for point-like particles, which is assumed to be known, as a function of the adjusted variable n/q(n). Subsequently, by multiplying this expression by the correction factor q(n), defined in Eq. (<ref>), one can straightforwardly derive the energy density with excluded volume effects. It is important to note that q(n) is dependent solely on the selected ansatz for the excluded volume per particle, denoted by b(n). §.§.§ Pressure As noted earlier, we account for excluded volume effects by substituting Ṽ = V - b(n) N into the Helmholtz free energy formula for point-like particles. With this substitution,F continues to be a function of V and N, but the dependency on volume is now exclusively represented through the auxiliary function Ṽ = Ṽ(V, N). To determine the pressure of the system, we must compute the following derivative: p = - ∂ F_pl(Ṽ, N)/∂ V|_N. This derivative can be expressed as: p = - ∂ F_pl/∂Ṽ|_N ∂Ṽ/∂ V|_N - ∂ F_pl/∂ N|_Ṽ, N∂ N/∂ V|_N. Using the fact that ∂ N/∂ V |_N = 0 and defining: δ≡∂Ṽ/∂ V|_N = ∂ (V - b N)/∂ V|_N = 1 + n^2db/dn, Eq. (<ref>) takes the following form: p = - δ(n) ∂ F_pl(Ṽ, N)/∂Ṽ|_N. The derivative in the previous equation has the same functional form as the derivative in Eq. (<ref>), with the only difference being that it is now evaluated at Ṽ instead of at V. Consequently, we can identify such derivative as being p_pl(Ṽ, N), i.e.: p_pl(Ṽ, N) = - ∂ F_pl(Ṽ, N)/∂Ṽ|_N. Replacing Eq. (<ref>) into Eq. (<ref>) we find that the pressure for the system with excluded volume effects can be obtained from the pressure of point-like particles as: p= δ(n)p_pl(Ṽ,N). Pressure is an intensive quantity, which means it is a homogeneous function of degree zero, that is: p(α V, α N) = p(V, N), for arbitrary α <cit.>. By setting α = Ṽ^-1, we get: p_pl(Ṽ, N) = p_pl(Ṽ/Ṽ, N/Ṽ) = p_pl(1, n/q) , where we have used the fact that N/Ṽ = (N/V) × (V/Ṽ) = n/q. Therefore, the pressure for the system with excluded volume effects can be obtained from the pressure of point-like particles as: p(n) = δ(n) p_pl(nq) .Once more, we have established a simple approach to compute the pressure with corrections accounting for excluded volume. 
This method begins by representing the known expression p_pl for point-like particles in terms of the variable n/ q(n). Subsequently, by multiplying it with the correction factor δ(n), we obtain the pressure incorporating the effect of excluded volume. The correction factors q(n) and δ(n) are exclusively determined by the chosen ansatz b(n) for the excluded volume per particle. §.§.§ Chemical potentialTo obtain the chemical potential we start from: μ =∂ F_pl(Ṽ, N)/∂ N|_V. This derivative can be expressed as: μ = ∂ F_pl (Ṽ, N) /∂Ṽ|_N ∂Ṽ/∂ N|_V + ∂ F_pl (Ṽ, N)/∂ N|_Ṽ .As already shown, the derivative of F appearing in the first term of the previous equation is - p_pl(Ṽ, N). On the other hand, the derivative of the second term has the same functional form as the derivative in Eq. (<ref>), with the only difference being that it is now evaluated at Ṽ instead of at V. Consequently,we can identify it as being μ_pl(Ṽ, N), i.e.: μ_pl(Ṽ, N) =∂ F_pl (Ṽ, N)/∂ N|_Ṽ. Additionally, we define - λ≡∂Ṽ/∂ N|_V = ∂ (V - b N)/∂ N|_V= - ( b+ n db/dn) . Replacing p_pl, μ_pl and λin Eq. (<ref>) we obtain: μ = λ(n) p_pl(Ṽ, N)+ μ_pl(Ṽ, N) .The chemical potential is an intensive quantity; thus, using the same reasoning that led to Eq. (<ref>) we get: μ_pl(Ṽ, N)= μ_pl(1, n/q) . Therefore, Eq. (<ref>) reads: μ(n) = λ(n) p_pl(nq)+ μ_pl(nq) . Similar to the case of Eq. (<ref>), the above expression takes advantage of the already known expressions μ_pl and p_pl for point-like particles as functions of the variable n/ q(n), along with the correction factorλ(n).§.§.§ Assessing thermodynamic consistency via Euler's relation verification To confirm the thermodynamic consistency of the model incorporating excluded volume effects, we will check if the Euler relation is verified. We begin by assuming that the system of point-like particles is thermodynamicallyconsistent and satisfies the Euler relation: ϵ_pl(V,N) = - p_pl(V,N) + μ_pl(V,N) N/V.This relation holds for any system volume, including the volume Ṽ. Consequently, we obtain: ϵ_pl(Ṽ,N) = - p_pl(Ṽ,N) + μ_pl(Ṽ,N) N/Ṽ, which, upon reformulation, yields: ϵ_pl(Ṽ,N) = -p_pl(Ṽ,N) + μ_pl(Ṽ,N) n/q . The preceding equation can be reformulated as follows. First, each term is multiplied by q = 1 - bn. Then, we add and subtract the term p_pl n^2 db/dn. Finally, the terms are rearranged, leading to the equation below: q ϵ_pl = -p_pl - p_pl n^2 db/dn + n μ_pl + p_pl(n db/dn + b) n,which simplifies to: q ϵ_pl = - (1 + n^2 db/dn) p_pl + n [ μ_pl + p_pl(n db/dn + b ) ].This expression incorporates the previously defined factors δ and λ, along with the expressions presented in Eqs. (<ref>), (<ref>), and (<ref>), culminating in the result: ϵ(Ṽ,N) = - p(Ṽ,N) + μ(Ṽ,N) n.In summary, if the Euler relation is valid for a system of point-like particles, it is also valid for the system with excluded volume effects, thus confirming the thermodynamic consistency of the formalism. §.§ Summary of Sec. <ref>This section presented a methodology for taking into account the effects of excluded volume into a pre-existing zero-temperature EOS, initially designed for point-like particles, within the framework of the Helmholtz representation.Beginning with the established formulas for energy density (ϵ_pl), pressure (p_pl), and chemical potential (μ_pl) of point-like particles, which are functions of the particle number density (n), the process unfolds as follows. One first reformulates ϵ_pl, p_pl, and μ_pl by replacing n with the modified variable n/q, where q is defined by the expression given in Eq. 
(<ref>): q(n) = 1 - n b(n). In this framework, b(n) symbolizes the per-particle excluded volume, introduced into our model asa phenomenological ansatz. Then, we derive the correction factors δ and λ, which stem from the specific function assigned to b(n), and were defined in Eqs. (<ref>) and (<ref>), respectively: δ(n)=1 + n^2 db/dn,λ(n)=b + n db/dn. Finally,the new expressions for energy density, pressure, and chemical potential, now accounting for the effects of excluded volume, are: ϵ(n)=q(n) ×ϵ_pl(n/q), p(n)= δ(n) × p_pl(n/q), μ(n)= λ(n) × p_pl(n/q) + μ_pl(n/q). as shown in Eqs. (<ref>), (<ref>) and (<ref>). § 1-FLAVOR QMDDM: THE IMPACT OF EXCLUDED VOLUME EFFECTSWe will now present a reformulation of the QMDDM at T=0 focusing on a system with a single particle species. We will account for excluded volume effects by employing various ansatzes for b = b(n). As discussed in the previous section, we will utilize the thermodynamic expressions from the standard QMDDM <cit.>, evaluate them as a function of n/q, and incorporate the correction factors q, δ, and λ. For reference, Appendix <ref> summarizes the expressions for the one-flavor QMDDM for point-like particles as derived in Ref. <cit.>. In order to calculate the energy density, we start by usingEq. (<ref>) in terms of the variable n/q(n). To simplify the notation we introduce an auxiliary variable ñ defined as: ñ≡n/q. In terms of this new variable we have ϵ_pl(ñ)= gM̃^4 χ(x̃), where the function χ is given in Eq. (<ref>), g is the particle's degeneracy,and M̃ ≡ M(ñ) = m + C/ñ^a/3 ,x̃ ≡ x(ñ)= 1/M̃( 6 π^2 ñ/g)^1/3 . In these equations, m represents the current mass, while the constants a and C are treated as free parameters (see Ref. <cit.> for details).Replacing ϵ_pl(ñ) in Eq. (<ref>) we obtain the final expression for the energy density with excluded volume effects: ϵ(n)= q(n) gM̃^4 χ(x̃) . To determine the pressure,we employ Eq. (<ref>) as a function of ñ: p_pl(ñ)=p^FG(ñ) - B(ñ) , being: p^FG(ñ)≡g M̃^4ϕ(x̃) ,B(ñ)≡ -g M̃^3ñ∂M̃/∂ñβ(x̃)>0,∂M̃/∂ñ=-C/3a/ñ^a / 3+1 . The functions ϕ and β are provided in Eqs. (<ref>) and(<ref>) respectively. Substituting p_pl(ñ) into Eq. (<ref>), we obtain the expression for the pressure that accounts for excluded volume effects: p(n) =δ(n)[p^FG(ñ)-B(ñ) ].Finally,for the calculation of the chemical potential, we begin with Eq. (<ref>) expressed in terms of ñ: μ_pl(ñ)= μ^FG(ñ)-B(ñ)/ñ, where μ^FG(ñ) ≡M̃√(x̃^2+1). The chemical potential with excluded volume effects is obtained by substituting μ_pl(ñ) in Eq. (<ref>), and reads: μ(n) = λ(n) [ p^FG(ñ) - B(ñ)]+ [ μ^FG(ñ)-B(ñ)/ñ] . Inthe following subsections we focus on different models for b = b(n). §.§ Constant excluded volume The simplest expression for b = b(n) is to take it as a constant. Although this choice is in contradiction with the asymptotic freedom behavior of QCD, because volume exclusion does not vanish as n →∞, we will consider it in the following as a starting point.From Eqs. (<ref>), (<ref>) and (<ref>),the functions q(n), δ(n) and λ(n) read: q=1 - n b ,δ =1,λ = b . From Eqs. (<ref>),(<ref>)and (<ref>)we find:ϵ(n)= (1- n b) gM̃^4 χ(x̃) ,p(n) = p^FG(ñ)-B(ñ), μ(n) =b[ p^FG(ñ) - B(ñ)]+ [ μ^FG(ñ)-B(ñ)/ñ],withñ(n) = n/1 - n b. These new expressions contain excluded volume corrections through ñ. The chemical potential is additionally corrected by an extra term. In Fig. <ref> we show the effective quasiparticle mass M for different values of b. 
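These curves follow directly from the expressions above and are simple to reproduce numerically. A minimal sketch of the constant-b case is given below; the explicit forms of χ, ϕ and β are written out under the assumption that they carry the standard zero-temperature Fermi-gas normalisations of the appendix definitions, and all parameter values and units (consistent natural units) are illustrative.

```python
import numpy as np

# Zero-temperature Fermi-gas integrals (assumed normalisations):
chi  = lambda x: (x*(2*x**2 + 1)*np.sqrt(x**2 + 1) - np.arcsinh(x)) / (16*np.pi**2)
phi  = lambda x: (x*(2*x**2/3 - 1)*np.sqrt(x**2 + 1) + np.arcsinh(x)) / (16*np.pi**2)
beta = lambda x: (x*np.sqrt(x**2 + 1) - np.arcsinh(x)) / (4*np.pi**2)

def eos_constant_b(n, b, m, C, a, g=6.0):
    """1-flavour QMDDM with a constant excluded volume b per particle."""
    nt = n / (1.0 - n * b)                     # n~ = n/q with q = 1 - n b
    M = m + C / nt**(a / 3.0)                  # density-dependent quasiparticle mass
    x = (6.0 * np.pi**2 * nt / g)**(1.0/3.0) / M
    dM = -(a / 3.0) * C / nt**(a / 3.0 + 1.0)  # dM~/dn~
    B = -g * M**3 * nt * dM * beta(x)          # bag-like term, B > 0
    pFG = g * M**4 * phi(x)
    eps = (1.0 - n * b) * g * M**4 * chi(x)    # energy density with q factor
    p = pFG - B                                # delta = 1 for constant b
    mu = b * (pFG - B) + M * np.sqrt(x**2 + 1.0) - B / nt
    return eps, p, mu
```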
Note that while the mass formula predicts M to diverge as n → 0, in practice M has a maximum finite value at zero pressure and approaches the current quark mass m at large pressures and/or densities. Excluded volume effects decrease the effective mass at a constant density, but at a specific pressure, M remains constant for all values of b. This overlapping is due to the fact that both M and p depend directly on ñ without any explicit dependence on the constant b (cf. Eqs. (<ref>) and (<ref>)). In Fig. <ref> we show the EOS of a one-component gas for different values of b. Increasing values of the parameter b result in a stiffer EOS. Note that the curves tend to overlap at zero pressure and clearly separate as the particle density or energy density increases. This behavior arises from the fact that the excluded volume is constant; this means that the role of the excluded volume becomes proportionally less significant when the particles are widely separated (low density). The minimum of ϵ/n occurs at p=0, as required by thermodynamic consistency. Notably, the curve for ϵ/n as a function of pressure is independent of b. The above-mentioned overlapping results from both ϵ/n and p having a direct dependence on ñ, without any explicit dependency on b (note that combining Eqs. (<ref>) and (<ref>) one finds ϵ/n = gM̃^4 χ(x̃)/ñ). We also show the speed of sound c_s, defined by: c_s^2 = ∂p/∂ϵ. At densities above a certain threshold, the speed of sound exceeds the speed of light for b ≠ 0. Finally, notice that Eq. (<ref>) can be rewritten in terms of the specific volumes v ≡ 1/n and ṽ ≡ 1/ñ as: ṽ = v - b. The available volume per particle, ṽ, cannot be negative, as that would mean that rigid spheres of volume b occupy a volume larger than the system's volume. Therefore, excluded volume effects are no longer physically meaningful for v < b. As a consequence, the EOS is no longer valid at particle number densities exceeding 1/b. For b = 0.1 fm^3, this threshold density is around 62 n_0, while for b = 0.3 fm^3 it is approximately 20 n_0 (n_0 being the nuclear saturation density). Moreover, it must be noticed that the causality condition c_s < c is violated at much smaller densities, as seen in Fig. <ref>(d). Additionally, c_s does not approach the conformal limit of 1/√(3) at asymptotically high densities. The above discussion shows that the ansatz with constant b is not satisfactory. In the following subsection we will explore other formulas for b(n) that fulfill causality and the asymptotic freedom condition of QCD.

§.§ Density-dependent excluded volume

Now, we adopt an ansatz for b in agreement with the asymptotic freedom behavior of QCD, i.e. a formula that allows b to vanish as n → ∞. To this end, b will be expressed as: b = κ n^-ℓ, where κ and ℓ are positive constants. Note that while this formula predicts b to diverge as n → 0, in practice b has a maximum finite value at zero pressure. The values of the parameters κ and ℓ are in fact interconnected. At n ∼ n_0, we expect that repulsive interactions will have a range smaller than about L ∼ 1 fm, leading to an excluded volume not larger than approximately (4/3)π L^3 ∼ 4 fm^3. Consequently, if we significantly increase the parameter ℓ, the constant κ cannot be excessively large, as that would result in an overly extensive excluded region. Taking ℓ=1 as a reference, Eq. (<ref>) gives κ = b n, yielding a maximum value of κ ≈ 0.64 for n = 0.16 fm^-3.
This upper limit is more stringent than the absolute limiting value for κ coming from the condition that the available volume cannot be larger than the total volume; i.e., in Eq. (<ref>) ṽ/v must be < 1, meaning that for ℓ = 1, κ cannot be larger than 1. In the remainder of this section, we will adopt the following parameter values: κ = 0, 0.16, 0.64, together with ℓ = 1. The excluded volume per particle reads: b(n) = κ/n, and, using Eqs. (<ref>), (<ref>) and (<ref>), the functions q(n), δ(n) and λ(n) are: q = 1 - κ, δ = 1 - κ, λ = 0. From Eqs. (<ref>), (<ref>) and (<ref>) we find:

ϵ(n) = (1 - κ) gM̃^4 χ(x̃),
p(n) = (1 - κ)[p^FG(ñ) - B(ñ)],
μ(n) = μ^FG(ñ) - B(ñ)/ñ,

with ñ(n) = n/(1 - κ). Notice that the functional forms of the pressure and energy density resemble those for point-like particles. The main differences are the introduction of an overall correction factor 1-κ and the evaluation of the functions at ñ rather than n. The expression for the chemical potential remains the same as for point-like particles, with the sole distinction being that it is now evaluated at ñ. In Fig. <ref>, we display the effective quasiparticle mass M as a function of density, chemical potential, and pressure. We considered different κ values with a constant exponent ℓ = 1. The mass M exhibits the same qualitative behavior as that of point-like particles; that is, it has a finite maximum value at zero pressure and gradually approaches the current mass m at extremely high pressures or densities. For specific density or pressure values, M is always lower for higher κ. Interestingly, the mass curves for different κ values match when plotted against the chemical potential. This is because both M and μ depend directly on ñ without any explicit dependence on κ, as shown in Eqs. (<ref>) and (<ref>). In Fig. <ref>, we present the EOS for various values of κ. The pressure behavior displayed in Fig. <ref> is qualitatively different from that in Fig. <ref>. As seen in panels (a) and (c), the curves for different κ values are separated at all densities. The curves do not overlap at low n because the excluded volume is inversely dependent on the density. As a result, the volume exclusion effect becomes more significant at lower densities. As density increases, volume exclusion vanishes, and the separation between the curves tends to remain constant, as observed in panel (c) of Fig. <ref>. For the same reason, all the speed-of-sound curves converge to the conformal limit of c_s = 1/√(3) for asymptotically large densities. Another relevant aspect of the EOS is its increasing stiffness as κ rises, as seen in panels (a) and (c) of Fig. <ref>. As seen in panel (b) of Fig. <ref>, the minimum of ϵ/n occurs at p=0, as required for thermodynamic consistency. Contrary to the case of Sec. <ref>, the curves no longer overlap. In summary, in contrast to the constant excluded-volume approach discussed in the previous subsection, the density-dependent ansatz provided by Eq. (<ref>) yields a viable quark matter EOS, in agreement with qualitative aspects of QCD.

§ EXCLUDED VOLUME EFFECTS IN A GENERIC 3-FLAVOR EOS

For applications in astrophysics, the most relevant state of quark matter involves an electrically neutral mixture of up (u), down (d), and strange (s) quarks together with a minor proportion of electrons (e), with all components in a state of chemical equilibrium due to the influence of weak interactions. Below, we will expand upon the equations presented in Sec.
<ref>to accommodate the scenario involving all three quark flavors.§.§ EOS for point-like particlesAssuming the absence of excluded volume effects, let the Helmholtz free energy function be denoted as F_pl(V, {N_j}), where “pl” represents point-like particles. Here, {N_j} represents the set {N_u, N_d, N_s}, with N_u, N_d, and N_s being the total number of particles for the u, d, and s quarks, respectively. At T = 0, the energy density is identicalto the Helmholtz free energy per unit volume:ϵ_pl = F_pl(V, {N_j})/V .All other thermodynamic quantities of the system of point-like particles are derived directly from F_pl(V, {N_j}). The pressure is given by: p_pl (V, {N_j}) =-∂ F_pl(V, {N_j})/∂ V|_{N_j} , and the chemical potential by: μ_pl, k(V, {N_j}) = ∂ F_pl(V, {N_j})/∂ N_k|_V, {N_j ≠ k}.Using the extensivity of F and the intensivity of the pressure and chemical potential<cit.>, i.e. F_pl(α V,{α N_j}) =α F_pl(V, {N_j}),p_pl(α V,{α N_j})=p_pl(V, {N_j}),μ_pl, k(α V,{α N_j})= μ_pl, k(V, {N_j}), we can express all thermodynamic quantities in terms of the particle number densities n_i ≡ N_i/V. By setting α = V^-1 the equations transform into: ϵ_pl({n_j}) =F_pl(V, {N_j})/V=F_pl(V/V,{N_j/V}) =F_pl(1, {n_j} ), p_pl( {n_j} )= p_pl(1, {N_jV}) , μ_pl, k( {n_j} )= μ_pl, k(1, {N_jV}) .§.§ EOS with excluded volume effectsTo account for excluded volume effects we replace the system's volume V by the available volume Ṽ. Since volume exclusion is the result of repulsion associated with strong interactions, the natural generalization of the one flavor expression given in Eq. (<ref>) is the followingflavor independent formula: Ṽ=V -b(n_B) N_B, where N_B is the total baryon number of the system and b represents the volume that would be excluded by each particle if it were treated as a rigid sphere. To maintain generality, we allow b to depend on the baryon number density n_B, which is given by: n_B=13(n_u+n_d+n_s) .Once we replace V with Ṽ in the Helmholtz free energy F_pl(V, {N_i}) of point-like particles, the resulting function F_pl(Ṽ, {N_i}) becomes the fundamental starting point for the thermodynamic description that incorporates excluded volume effects. §.§.§ Energy densityAt T = 0, the energy density is equivalent to the Helmholtz free energy per unit volume:ϵ = F_pl(Ṽ, {N_j})/V .By applying the homogeneity property from Eq. (<ref>) with α = Ṽ^-1 to Eq. (<ref>), the energy density can be reformulated as: ϵ = Ṽ/ṼF_pl(Ṽ, {N_j})/V = Ṽ/V F_pl( Ṽ/Ṽ,{N_j/Ṽ}). We now introduce the available volume fraction q, defined as:q(n_B) ≡Ṽ/V = 1- n_B b(n_B) ,which depends solely on the postulated ansatz for the excluded volume per particle, b(n_B). Note that q satisfies 0 < q < 1 since 0 < Ṽ < V. The ratio N_j/Ṽ in Eq. (<ref>) can be rewritten as: N_j/Ṽ = N_j/VV/Ṽ = n_j/q(n_B) Thus, Eq. (<ref>) can be reformulated as: ϵ({n_j}) = q(n_B) F_pl(1, {n_j/q(n_B)}), which is exclusively a function of {n_j}. In Eq. (<ref>), we identify the energy density for point-like particles, as depicted by Eq. (<ref>), expressed as a function of {n_j/q}. This allows us to obtain the energy density that incorporates excluded volume effects in terms of ϵ_pl: ϵ({n_j}) = q(n_B) ϵ_pl({n_j/q(n_B)}).This equation provides a straightforward procedure to calculate the energy density with excluded volume corrections. The process begins by expressing ϵ_pl for point-like particles, which we assume as a known quantity, as a function of the modified variables {n_j/q(n_B)} andthen multiplying the result by the correction factor q(n_B) defined in Eq. 
(<ref>). As emphasized earlier, q(n_B) depends solely on the chosen ansatz for the excluded volume per particle b(n_B). §.§.§ PressureAs discussed before, excluded volume effects are taken into account by substituting Ṽ = V - b(n_B) N_B into the Helmholtz free energy for point-like particles. With such replacement, F will still be a function of V and {N_j}, but the dependence on volume occurs only through the auxiliary function Ṽ = Ṽ(V, {N_j}). To determine the pressure of the system we follow the same procedure as in Sec. <ref>. We start with the pressure definition: p(V, {N_j}) =-∂ F_pl(Ṽ, {N_j})/∂ V|_{N_j} , which can be expressed as: ∂ F_pl/∂ V|_{N_j} =∂ F_pl/∂Ṽ|_{N_j}∂Ṽ/∂ V|_{N_j} + ∑_i∂ F_pl/∂ N_i|_Ṽ, { N_j≠ i }∂ N_i/∂ V|_{N_j} . Using the fact that ∂ N_i / ∂ V|_{N_j} = 0 and defining δ as: δ≡∂Ṽ/∂ V|_{N_j} = ∂ (V - b N_B)/∂ V|_{N_j} = 1 + n_B^2db/dn_B, we can reformulate Eq. (<ref>) in the following manner: p = - δ(n_B) ∂ F_pl(Ṽ, {N_j} )/∂Ṽ|_{N_j} . The derivative in the previous equation has the same functional form as the derivative in Eq. (<ref>), but it is now evaluated at Ṽ instead of at V. Therefore, it can be identified as the point-like pressure -p_pl(Ṽ, {N_j}). Consequently, Eq. (<ref>) reads: p(V, {N_j}) = δ(n_B) p_pl(Ṽ, {N_j}). Given that pressure is an intensive quantity, it satisfies the condition p(α V, α N) = p(V, N), for arbitrary α<cit.>. Setting α = Ṽ^-1, we obtain: p_pl(Ṽ, {N_j} ) = p_pl(Ṽ/Ṽ, {N_j/Ṽ}) = p_pl(1, {n_j/q}) , where we have used the fact that N_j/Ṽ = (N_j/V) × (V/Ṽ) = n_j/q. Therefore, the pressure for the system with excluded volume effects can be obtained from the pressure of point-like particles as: p( {n_j} ) = δ(n_B) p_pl({n_jq}) . Once again, we arrive at a straightforward procedure for calculating the pressure with excluded volume corrections. We begin with the known expression for the pressure p_pl for point-like particles, rewrite it in terms of the variable set {n_j/q}, and multiply by the factor δ. Both q and δ are uniquely determined by the chosen ansatz b(n_B) for the excluded volume per particle.§.§.§ Chemical potential To obtain the chemical potential weproceed as inSec.<ref>.We start from: μ_k(V, {N_j}) =∂ F_pl(Ṽ, {N_j})/∂ N_k|_V,{N_j ≠ k} , which can be expressed as: μ_k =∂ F_pl/∂Ṽ|_{N_ j }∂Ṽ/∂ N_k|_V,{ N_j ≠ k}+∑_i ∂ F_pl/∂ N_i|_Ṽ, N_j≠ i∂ N_i /∂ N_k|_V,{ N_j ≠ k}. As already shown, the derivative of F appearing in the first term of the previous equation is -p_pl(Ṽ, {N_j}). On the other hand, the derivative of F in the second term has the same functional form as the derivative in Eq. (<ref>), with the only difference being that it is now evaluated at Ṽ instead of at V. Consequently,μ_pl, k(Ṽ, {N_j}) = ∂ F_pl(Ṽ, {N_j})/∂ N_k|_Ṽ, {N_j ≠ k}. Additionally, we define - λ≡ ∂Ṽ/∂ N_k|_V,{ N_j ≠ k}= ∂ (V - b N_B)/∂ N_k|_V,{ N_j ≠ k}= -13( b+ n_B db/dn_B) . Replacing the prior expressions in Eq. (<ref>) we obtain: μ_k = λ(n_B) p_pl(Ṽ, {N_j}) + μ_pl, k(Ṽ, {N_j}) .The chemical potential is an intensive quantity; thus, using the same reasoning that led to Eq. (<ref>) we get: μ_pl, k(Ṽ, {N_j})= μ_pl, k(1, {n_j/q}). Therefore, Eq. (<ref>) reads: μ_k( {n_j} )= λ(n_B) p_pl({n_jq})+ μ_pl, k({n_jq}) . Similar to the case of Eq. 
(<ref>), the above expression takes advantage of the already known expressions μ_pl and p_pl for point-like particles as functions of the variable set {n_j/q}, along with the correction factorλ.§.§.§ Thermodynamic consistency: a look at Euler's relationTo verify the thermodynamic consistency of the formulas incorporating excluded volume effects, we begin by postulating that the system of point-like particles is thermodynamically consistent and satisfies the Euler relation (cf. Sec. <ref>): ϵ_pl(V, {N_j}) = - p_pl(V, {N_j}) + ∑_k μ_pl, k(V, {N_j}) N_k/V. This relation holds for any system volume, including the volume Ṽ. Consequently, we obtain: ϵ_pl(Ṽ, {N_j})=- p_pl(Ṽ, {N_j}) + ∑_k μ_pl, k(Ṽ, {N_j}) N_k/Ṽ . Using N_k / Ṽ =n_k / q and reorganizing the previous expression one obtains:q ϵ_pl(Ṽ, {N_j}) = -q p_pl(Ṽ, {N_j}) + ∑_k μ_pl, k(Ṽ, {N_j}) n_k. We first substitute q with 1 - bn_B. Next, we add and subtract the term p_pl n_B^2 db/dn_Bto the equation. After rearranging the terms, the final form of the equation is: q ϵ_pl=-p_pl - p_pl n_B^2 db/dn_B + ∑_k n_k μ_pl, k + p_pl(n_B db/dn_B + b ) n_B,which simplifies to: q ϵ_pl = - (1 + n_B^2 db/dn_B) p_pl + ∑_k n_k [ μ_pl, k+ p_pl 13(n_B db/dn_B + b ) ].This expression incorporates the previously defined factors δ and λ, along with the expressions presented in Eqs. (<ref>), (<ref>), and (<ref>), culminating in the result: ϵ(Ṽ, {N_j} ) = - p(Ṽ, {N_j} ) + ∑_k μ_k(Ṽ, {N_j} ) n_k.In summary, if the Euler relation is valid for a system of point-like particles, it is equally valid for the system with excluded volume effects, thus confirming the thermodynamic consistency of the formalism.§.§ Summary of Sec. <ref> In this section, we showed how to straightforwardly incorporate excluded volume effects into any zero-temperature EOS already formulated for point-like particles in the Helmholtz representation.We begin by considering the established expressions for the energy density ϵ_pl, the pressure p_pl, and the chemical potentials μ_pl, i of point-like particles, which depend on the set of particle number densities {n_j}. Initially, we rewrite ϵ_pl, p_pl, and μ_pl, i using the modified variable set {n_j/q}, where q is defined as:q(n_B) = 1- n_B b(n_B) , in accordance with Eq. (<ref>). In this context, b(n_B) represents the excluded volume per particle and is incorporated into the model as a phenomenological ansatz. Next, we determine the correction factors δ and λ, which are defined in Eqs. (<ref>) and (<ref>), respectively: δ = 1 + n_B^2db/dn_B,λ = 13( b+ n_B db/dn_B) . These factors are derived directly from the specified function for b(n_B).Finally, as shown in Eqs. (<ref>), (<ref>) and (<ref>), the energy density, pressure, and chemical potentials,incorporating excluded volume effects, are expressed as follows: ϵ({n_j }) = q ∑_iϵ_pl,i( { n_j /q} ), p( { n_j } ) =δ∑_i p_pl,i ( { n_j /q} ) ,μ_k( {n_j} ) =λ(n_B) p_pl({n_jq})+ μ_pl, k({n_jq}) .§ 3-FLAVOR QMDDM WITH EXCLUDED VOLUME EFFECTS: FORMALISMIn this section, we use the formalism presented in Sec. <ref> to incorporate excluded volume effects in the 3-flavor QMDDM EOS developed in Ref. <cit.>. For completeness, we provide in Appendix <ref> a summary of the relevant equations of the EOS for point-like particles.The mass of the quark quasiparticle of flavor i is given by: M_i = m_i+C/n_B^a/3 ,(i = u, d, s), where C and a are positive flavor independent free parameters. 
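The recipe summarised above is straightforward to implement numerically for an arbitrary ansatz b(n_B). The sketch below computes the correction factors q, δ and λ and verifies, for the asymptotically free ansatz b = κ/n_B used below, the analytic result q = δ = 1 - κ and λ = 0; the numerical values are purely illustrative.

```python
import numpy as np

def correction_factors(n_B, b, db_dnB):
    """Correction factors q, delta, lambda for a given ansatz b(n_B)."""
    q = 1.0 - n_B * b(n_B)                    # available volume fraction
    delta = 1.0 + n_B**2 * db_dnB(n_B)
    lam = (b(n_B) + n_B * db_dnB(n_B)) / 3.0
    return q, delta, lam

kappa = 0.5
b = lambda nB: kappa / nB                     # asymptotically free ansatz
db = lambda nB: -kappa / nB**2

for nB in (0.16, 0.48, 1.6):                  # fm^-3, illustrative values
    q, delta, lam = correction_factors(nB, b, db)
    assert np.isclose(q, 1.0 - kappa)
    assert np.isclose(delta, 1.0 - kappa)
    assert np.isclose(lam, 0.0)
```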
In order to calculate the energy density, we start from the expression for point-like particles given in Appendix <ref>: ϵ_pl = ∑_i g M_i^4 χ(x_i), where the variable x_i is defined in Eq. (<ref>) and the function χ is given in Eq. (<ref>). Excluded volume effects are incorporated by writing the previous expression in terms of the set of variables {n_j/q} and including the correction factor q, as indicated in Eq. (<ref>). To simplify the notation, from now on we will use the auxiliary variable ñ_i = n_i/q(n_B). After this procedure, the energy density reads: ϵ = ∑_i q(n_B) g M̃_i^4 χ(x̃_i), where g=6 is the quark degeneracy and M̃_i ≡ M_i(ñ_B) = m_i + C/ñ_B^a/3, x̃_i ≡ x(ñ_i) = 1/M_i(ñ_B) (6π^2 ñ_i/g)^1/3, ñ_B ≡ N_B/Ṽ = (1/3)(ñ_u + ñ_d + ñ_s). Adding the contribution of electrons to Eq. (<ref>) we obtain the complete energy density with excluded volume corrections: ϵ = ∑_i=u, d, s q(n_B) g M̃_i^4 χ(x̃_i) + ϵ_e, where electrons are described as point-like particles, i.e. ϵ_e = g_e m_e^4 χ(x_e), g_e=2, x_e = m_e^-1 (6π^2 n_e/g_e)^1/3, where m_e is the electron's mass. To determine the pressure, we start from the expression for point-like particles given in Appendix <ref>: p_pl = ∑_i [g M_i^4 ϕ(x_i) - B_i], where ϕ is defined in Eq. (<ref>) and the “bag constant" B_i is given by Eq. (<ref>). To take into account excluded volume effects, we use Eq. (<ref>), i.e. we rewrite Eq. (<ref>) in terms of ñ_i and add the correction factor δ. The complete expression for the pressure is obtained by adding the contribution of electrons. The result is: p = δ(n_B) ∑_i=u, d, s [g M̃_i^4 ϕ(x̃_i) - B̃_i] + p_e, where p_e = g_e m_e^4 ϕ(x_e) and B̃_i ≡ B(ñ_i). Finally, for determining the chemical potential we start from the expression for point-like particles given in Eq. (<ref>): μ_pl,i = M_i √(x_i^2 + 1) - 1/(3 n_B) ∑_j B_j. To take into account excluded volume effects, we use Eq. (<ref>), i.e. we rewrite Eqs. (<ref>) and (<ref>) in terms of ñ_i and include the correction factor λ. The result is: μ_i = λ(n_B) ∑_j=u, d, s [g M̃_j^4 ϕ(x̃_j) - B̃_j] + M̃_i √(x̃_i^2+1) - 1/(3 ñ_B) ∑_j=u, d, s B̃_j.§ 3-FLAVOR QMDDM WITH EXCLUDED VOLUME EFFECTS: NUMERICAL RESULTS§.§ The EOS Now, we adopt an ansatz for b consistent with the asymptotic-freedom behavior of QCD, i.e., we adopt a formula that allows b to approach zero at asymptotically large densities. Based on the prescription of Eq. (<ref>) with ℓ=1, the excluded volume will be expressed as: b = κ/n_B, where κ is a positive constant. Using Eqs. (<ref>), (<ref>) and (<ref>), the functions q, δ and λ are: q(n_B) = 1 - κ, δ(n_B) = 1 - κ, λ(n_B) = 0. Replacing these parameters in Eqs. (<ref>), (<ref>) and (<ref>) we obtain: ϵ = ∑_i=u, d, s (1 - κ) g M̃_i^4 χ(x̃_i) + ϵ_e, p = ∑_i=u, d, s (1 - κ)(g M̃_i^4 ϕ(x̃_i) - B̃_i) + p_e, μ_i = M̃_i √(x̃_i^2+1) - 1/(3 ñ_B) ∑_j B̃_j. In the context of cold dense quark matter found in typical neutron star conditions, the preceding expressions need to be supplemented with the requirements of charge neutrality and chemical equilibrium under weak interactions. Assuming that neutrinos leave the system freely (μ_ν_e=0), the chemical equilibrium conditions read: μ_d = μ_u + μ_e, μ_s = μ_d, while charge neutrality is given by: (2/3) n_u - (1/3) n_d - (1/3) n_s - n_e = 0. In Figs. <ref>, <ref>, and <ref>, we present results for the EOS calculated with two different sets of parameters a and C, along with three values of κ (0, 0.5, 0.6). In Fig. <ref>, we depict the Gibbs free energy per baryon, G/n_B = (ϵ+p)/n_B, against pressure.
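To make concrete how compact the κ-ansatz EOS is in practice, the following sketch (ours, not from the paper) evaluates ϵ, p and G/n_B using the appendix functions χ, ϕ and β. The parameter values, the flavor-symmetric simplification n_u = n_d = n_s = n_B, the omission of electrons and β-equilibrium, and the arbitrary units are all assumptions made for brevity; the numbers are purely illustrative:

```python
import numpy as np

def chi(x):   # energy-density integral of a cold Fermi gas (Appendix)
    return (x*np.sqrt(x**2 + 1)*(2*x**2 + 1) - np.arcsinh(x)) / (16*np.pi**2)

def phi(x):   # pressure integral of a cold Fermi gas (Appendix)
    return (x*np.sqrt(x**2 + 1)*(2*x**2 - 3) + 3*np.arcsinh(x)) / (48*np.pi**2)

def beta(x):  # function entering the bag term B (Appendix)
    return (x*np.sqrt(x**2 + 1) - np.arcsinh(x)) / (4*np.pi**2)

def eos_kappa(n_B, kappa, C=75.0, a=3.0, m=(5.0, 7.0, 150.0), g=6):
    """epsilon, p and G/n_B for flavor-symmetric uds matter with b = kappa/n_B."""
    q = 1.0 - kappa                  # = delta for this ansatz; lambda = 0
    nt = n_B / q                     # n_i = n_B per flavor here, so nt_i = nt_B
    eps = p = 0.0
    for m_i in m:
        M = m_i + C / nt**(a/3)                  # quasiparticle mass at nt_B
        x = (6*np.pi**2 * nt / g)**(1/3) / M     # dimensionless Fermi momentum
        dM = -(C*a/3) * nt**(-a/3 - 1)           # dM/dn_B evaluated at nt_B
        B = -g * nt * M**3 * dM * beta(x)        # bag function (positive)
        eps += q * g * M**4 * chi(x)
        p += q * (g * M**4 * phi(x) - B)
    return eps, p, (eps + p) / n_B               # Gibbs free energy per baryon

for kap in (0.0, 0.5, 0.6):
    print(kap, eos_kappa(n_B=0.5, kappa=kap))
```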
Depending on the selected parameters a, C, and κ, the value of G / n_B at p=0 can either exceed or remain below the energy per nucleon of the most tightly bound atomic nucleus, ^62Ni, approximately 930 MeV. Consequently, two distinct scenarios are presented in panels (a) and (b) of Fig. <ref>.For the parameter set yielding G / n_B < 930 MeV at vanishing pressure and temperature, as illustrated in panel (a), we are in the domain of self-bound quark matter, i.e.bulk quark matter invacuum remains stable and doesn't transition into hadronic matter. When this self-bound matter encompasses three flavors at p=T=0, it is designated as strange quark matter (SQM). Under these conditions, Nature would allow the existence of compact stars completely constituted of quark matter, known as self-bound quark stars[Note that, as shown in the analysis of Fig. 4 from Ref. <cit.>, it is possible that two-flavor matter may be more stable than three-flavor matter for a specific choice of parameters. However, this specific case will not be analyzed in this study.].Conversely, panel (b) shows the situation where G / n_B > 930 MeV. This is referred to as hybrid matter due to its transition from a hadronic state at lower pressures to a deconfined state at higher pressures. In such scenarios, stars with quark matter manifest as hybrid stars with a quark core surrounded by hadronic matter.For the parameter selection illustrated in Fig. <ref>(b), the uds curves consistently appear below their ud counterparts, indicating that quark matter encompasses all three flavors. The Gibbs free energy curves exhibit significant sensitivity to variations in the parameter κ. Nonetheless, we always observe SQM in panel (a) and hybrid matter in panel (b) for all our choices of κ. In Fig. <ref>we show the total pressure p and the bag constant B as a function of the energy density for the same parameter choices ofFig. <ref>. In all cases the pressure becomes negative at finite energy density due to the effect of B. The bag constant depends on density, always being a decreasing function of ϵ. At asymptotically large densities B tends to zero and the system behaves as a free Fermi gas of electrons and quarks with M_i=m_i. The EOS is notably sensitive to changes in the parameter κ, invariably leading to an increase in the stiffness of the EOS as κ increases. As κ rises, the bag constant decreases, which in turn shifts the energy density at which the matter pressure becomes zero to lower energy density values. Since the excluded volume is inversely dependent on the baryon number density, the excluded volume effect vanishes at high densities, causing the curves with different κ values to convergewith each other. The speed of sound c_s is shown in Fig. <ref> for the same parameter choices of previous figures. In all cases c_s is a decreasing function of the baryon number density and tends asymptotically to the conformal limit c_s=1 / √(3).Since the pressure tends to zero at a finite density, the curves are truncated at the point where p = 0. As the value of the parameter κ increases, the speed of sound decreases for a given density. This difference vanishes at asymptotically high densities, at which point all curves converge with each other.§.§ Astrophysical applicationsFinally, we analyze stellar configurations based on the two main parameter sets discussed in the previous section, representing strange quark matter and hybrid matter. 
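As an aside on the speed-of-sound behavior discussed above, c_s² = dp/dϵ can be evaluated from any tabulated EOS by finite differences along the density, c_s² = (dp/dn_B)/(dϵ/dn_B). A minimal sketch (ours), using a free massless gas as a stand-in EOS for which the conformal limit holds exactly and which therefore doubles as a unit test:

```python
import numpy as np

def sound_speed_sq(eps_fn, p_fn, n, h=1e-5):
    """c_s^2 = dp/deps = (dp/dn)/(deps/dn) along the baryon density."""
    dp = p_fn(n * (1 + h)) - p_fn(n * (1 - h))
    de = eps_fn(n * (1 + h)) - eps_fn(n * (1 - h))
    return dp / de

# Stand-in EOS: free massless quark gas, eps = K n^{4/3}, p = eps/3,
# for which c_s = 1/sqrt(3) exactly (the conformal limit quoted above).
K = 1.0
eps = lambda n: K * n**(4/3)
p = lambda n: eps(n) / 3.0

n_grid = np.array([0.2, 0.5, 1.0, 2.0])
cs = np.sqrt([sound_speed_sq(eps, p, n) for n in n_grid])
print(cs)                                  # -> 0.57735... at every density
assert np.allclose(cs, 1/np.sqrt(3), atol=1e-6)
```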
This paper does not aim for a comprehensive investigation of stellar structure; a more detailed exploration is planned for a future publication. Our primary goal here is to show that the presented model can produce stellar configurations, both strange and hybrid, consistent with current astrophysical constraints. Let us consider first strange star configurations shown in Fig. <ref>. For point-like quarks (κ = 0), the maximum mass does not reach the required2 M_⊙ constraint and the mass-radius curve does not fulfill the astrophysical constraints set by NICER <cit.> observations. Upon accounting for the excluded volume of quarks, the EOS stiffens as described in the previous section. Consequently, the maximum mass rises, and the stellar radii increase for a given mass. The maximum mass increases significantly, approaching close to 2.3 M_⊙ for κ = 0.6.In Fig. <ref>we analyzethe properties of the hybrid matter EOS and its implications on the mass-radius relationship of compact objects. The hybrid EOS shown in the upper panel was constructed using the relativistic Brueckner-Hartree-Fock hadronicEOSknown as MPA1 <cit.> (red curve) together with quarkmatter characterized by parameters a = 3 and C = 75, with different levels of volume exclusion (black curves).The plateaus represent sharp first-order phase transitions between hadrons and quarks. Increasing κ diminishes the transition pressure as well as the energy density jump between both phases. The quark EOS becomes stiffer as κ increases.In panel (b) we show the mass-radius relationship, resulting from the aforementioned EOS. It should be noted that for quark matter composed of point-like particles (κ =0), all dynamically stable configurations (i.e. to the right of the maximum mass point) are hadronic.Hybrid stars appear only beyond the maximum mass object and are dynamically unstable for κ =0. As the transition pressure decreases with an increasing value of κ, it becomes possible for dynamically stable hybrid stellar configurations to emerge, satisfying modern astrophysical constraints.Notice that the maximum mass decreases asκ increases. Specifically, for the value κ = 0.5, the maximum mass is around 2.1 M_⊙, and for κ = 0.6 it is around 2 M_⊙. It is important to emphasize that finding parameterizations of the EOS that yield hybrid stars with M_max > 2 M_⊙ is challenging when considering point-like particles. In this regard, the inclusion of excluded volume effects is essential for the emergence of dynamically stable models of hybrid stars consistent with astrophysical requirements. The main conclusion from the preceding analysis is that the QMDDM with excluded volume effects is not onlyviable from an astrophysical standpoint, but that excluded volume effects play a crucial role to make the model consistent with astrophysical constraints. A more in-depth examination is reserved for a future study. § SUMMARY AND CONCLUSIONSIn our previous study <cit.>, we revisited the QMDDM model, adopting the canonical ensemble framework in place of the usual grand canonical ensemble. This methodological change was essential for resolving the thermodynamic inconsistencies that had persisted in the model for decades.The QMDDM is based on the idea that some relevant aspects of the strong interaction between quarks, particularly in the high-density, low-temperature regime, can be effectively modeled by treating the system as a gas of quasiparticles, wherein the effective masses of these quarks depend on the local density. 
As a natural consequence of this density-dependent quark mass, a “bag” term naturally arises in the pressure, resulting in quark confinement. Clearly, not all qualitative features of strong interactions can be faithfully replicated through the simple introduction of density dependent masses. Improved versions of the model are necessary to better capture some phenomenological aspects of non-perturbative QCD. In this present study, we focused our efforts in that direction, incorporating the influence of repulsive interactions. We approached this by departing from the assumption of point-like particles and instead considering particles as entities endowed with an excluded volume in their vicinity. The present approach aims to bring a more realistic representation of the quark matter EOS of dense matter at vanishing temperature.To account for the influence of excluded volume, we introduced the “available volume", Ṽ=V-b(n) N, in the Helmholtz free energy F_pl(V, N) for point-like particles, where V is the system's volume. The parameter b, which can be a function of the particle's number density n, represents the volume that each particle would exclude if it were considered a rigid sphere.Once V is replaced with Ṽ, the resulting function F_pl(Ṽ, N) becomes the starting point for describing the system with excluded volume effects. In Secs. <ref> and <ref> we focused in a simple one-flavor model and showed that it ispossible to incorporateexcluded volume corrections into the existing expressions of point-likeparticles in a straightforward manner, achieving a quite practical approach. The procedure involves substituting the particle density n that appears in the expressions for point-like particles with a density n/q where the correction factor q is defined in Eq. (<ref>) and takes into account the available volume in the system. Additionally, multiplicative correction factors arise in the energy density (Eq. (<ref>)) pressure (Eq. (<ref>)), and the chemical potential (Eq. (<ref>)).Using the aforementioned equations, we analyzed two different ansatzes for the excluded volume. Initially, we considered a scenario where particles always exclude the same volume regardless of the matter density. This assumption, as depicted Fig. <ref>(d) is not satisfactory as the speed of sound becomes acausal with increasing density. A more adequate prescription must account for the asymptotic freedom of particles, requiring them to behave as free point-like particles as density increases. To incorporate this behavior, we adopted an ansatz for the excluded volume that is inversely proportional to density, causing it to decrease and tend toward zero as density approaches infinity. This assumption cures the unphysical behavior of the speed of sound, causing it to asymptotically approach the conformal limit c_s=1 / √(3), as illustrated in Fig. <ref>(d). Moreover, the EOS becomes significantly stiffer, as shown inFig. <ref>(c). In Secs. <ref> and <ref>,we extended the EOS to a three-flavor system. In this context, we assumed that both the mass and the excluded volume depend on the baryon number density. Similar to the single-flavor case, we can directly incorporate the excluded volume corrections into the existing EOS for point-like particles, as summarized in Sec. <ref>.By applying these formulas to the model outlined inRef. <cit.>, we derived explicit expressions for the energy density (Eq. (<ref>)), pressure (Eq. (<ref>)), and chemical potential (Eq. 
(<ref>)) of a system composed by quarks u, d, s, and electrons.In Sec. <ref> we adopted an ansatz for the excluded volume that mimics the asymptotic freedom behavior of QCD. Similarly to the single-species case we assumed that the excluded volume per baryon is inversely proportional to the baryon number density (Eq. (<ref>)). For this specific choice, the EOS takes the form given in Eqs. (<ref>), (<ref>), and (<ref>).Given our focus on astrophysical applications, our numerical computations of the EOS are centered on an electrically neutral system in chemical equilibrium under weak interactions. To represent the two most relevant astrophysical scenarios, we adoptsets of parameters a and C, to characterize self-bound matter and hybrid matter.For the parameter κ, we adopt values that correspond to different excluded volume sizes. In both the self-bound and hybrid cases, based on the chosen parameter set, the uds composition consistently exhibits the lower G / n_B, being energetically preferred. It's worth noting that the Gibbs free energy curves display a remarkable sensitivity to changes in the parameter κ (see Fig. <ref>). The EOS is substantially affected by changes in the parameter κ,leading to an increase in the stiffness as κ increases. At high densities the excluded volume effect vanishes and the curves with different κ values converge with each other (see Fig. <ref>). The speed of sound (see Fig. <ref>) exhibits a decreasing trend with increasing baryon number density, ultimately approaching the conformal limit asymptotically. At a specific density, the excluded volume effect tends to diminish the speed of sound. However, this distinction becomes negligible at very high densities, where all curves converge with each other.Finally, we studied stellar configurations for the two parameter sets representing self-bound and hybrid matter. In the first case, shown in Fig. <ref>, we found that stellar configurations are compatible with all current astrophysical constraints only for a significant amount of volume exclusion. Since the EOS becomes remarkably stiffer as excluded volume increases, the maximum mass of self-bound stars significantly rises for large κ approaching close to 2.3 M_⊙ for κ = 0.6.Recently, evidence has been found that suggests that the extremely tiny and light compact object, HESS J1731-347, may in fact be a self-bound star with a mass around 0.77 M_⊙ and a radius of ∼ 10km <cit.>. As shown in Fig. <ref> the mass-radius curves of self-bound objects naturally cross the credibility contours of HESS J1731-347 if a substantial amount of excluded volume is incorporated in the EOS. If confirmed, this object could be a strange star candidate.We also explored hybrid star configurations using a representative hadronic EOS and various parametrizations of the QMDDM. From our calculations we learn thatthe constraint of 2 M_⊙ is fulfilled only for very stiff hadronic EOS together with a QMDDM EOS incorporating a substantial degree of volume exclusion. In Fig. <ref> it can be seen that the hybrid curves composed by the MPA1 hadronic EOS and the QMDDM with κ = 0.5 and 0.6 are in agreement with all the modern multimessenger constraints. In conclusion, we have extended the QMDDM model, which was previously modified in an earlier work to ensure thermodynamic consistency, to incorporate the effects of excluded volume. These effects phenomenologically represent repulsive interactions between quasiparticles. 
The incorporation of these effects leads to stiffer EOS increasing the masses of compact objects, aligning them more closely with recent astrophysical observations. A comprehensive analysis of stellar structure using this model is planned as the focus of our future research. § ACKNOWLEDGEMENTS G.L. acknowledges the financial support from the Brazilian agencies CNPq (grant 316844/2021-7) and FAPESP (grant 2022/02341-9). A. G. G. would like to acknowledge the financial support from CONICET under Grant No. PIP 22-24 11220210100150CO,ANPCyT (Argentina) under Grant PICT20-01847, and the National University of La Plata (Argentina), Project No. X824. § SUMMARY OF THE 1-FLAVOR QMDDM EOS FOR POINT-LIKE PARTICLESFor the sake of completeness, we summarize below all the expressions derived in Ref. <cit.> for the QMDDM in the one-flavor case. As discussed in the main text, these expressions describe a system of point-like particles. The Helmholtz free energy is given by: F =g V M^4χ(x) , where M(n) = m + C/n^a/3, x(n) =1/M(6 π^2 n/g)^1 / 3 ,χ(x)=x √(x^2 + 1) (2 x^2 + 1) - arcsinh(x)/16 π^2, being g the particle's degeneracy, M the density dependent mass, and x the dimensionless Fermi momentum.The energy density at T=0 is given by ϵ=F / V and reads: ϵ(n)=g M^4χ(x) . The pressure is given by p(n) = n^2∂(ϵ / n)/∂ n, which results in: p(n) =p^FG(n) - B(n) , being: p^FG(n) = g M^4ϕ(x) ,B(n) =-g M^3 n ∂ M /∂ n β(x)>0 , ∂ M /∂ n =-C/3a/n^a / 3+1,ϕ(x)=x √(x^2+1)(2 x^2 -3) +3 arcsinh(x)/48 π^2 ,β(x) =1/4 π^2[x √(x^2+1) -arcsinh(x)]. The quantity B serves as a bag constant, inducing negative pressure at sufficiently low densities, mimicking the effects of quark confinement. Finally, the chemical potential is: μ(n) =∂ϵ(n) /∂ n =g ∂[M^4(n) χ(x)] /∂ n . As shown in Eq. (40) of Ref. <cit.> the latter derivative results in: μ(n) = μ^FG(n)-B(n)/n, with μ^FG(n)=M √(x^2+1).§ SUMMARY OF THE 3-FLAVOR QMDDM EOS FOR POINT-LIKE PARTICLES WITH A FLAVOR-BLIND MASS FORMULA In this appendix, we present a synopsis of the QMDDM EOS for point-like quarks, as developed in Ref. <cit.>. We concentrate on the parametrization of the QMDDM with a flavor-independent mass formula. Specifically,we assume that the mass of the quark quasiparticle of flavor i depends on the baryon number density n_B as follows: M_i = m_i+C/n_B^a/3 ,(i = u, d, s), where C and a are free parameters, and n_B = 13 (n_u + n_d + n_s).The flavor dependence of M_i comes only from the different values of the current masses m_i.We will describe the system as a mixture of non-interacting quarks with effective masses M_iand free electrons. The total Helmholtz free energy is simply the sum of the contribution of each species: F(V,n_B) = ∑_i=u, d, s, eF_i . where,F_i=g V M_i^4 χ(x_i) (i = u, d, s) , g_e V m_e^4 χ(x_e) (electrons) , with g=6 and g_e =2.The function χ(x) is defined in Eq. (<ref>) and:x_i= 1/M_i( 6 π^2 n_i/g)^1/3(i = u, d, s) , x_e=1/m_e( 6 π^2 n_e/g_e)^1/3 (electrons) . The energy density is ϵ = ∑_i=u, d, s, eϵ_i, where: ϵ_i=g M_i^4 χ(x_i) (i = u, d, s) , g_e m_e^4 χ(x_e)(electrons) . The totalpressure is p = ∑_i=u, d, s, ep_i, being p_i=g M_i^4 ϕ(x_i) - B_i(i = u, d, s) ,g_e m_e^4 ϕ(x_e)(electrons) , where ϕ(x) is defined in Eq. (<ref>) andthe “bag constant” B_i is given by:- B_i = g n_iM_i^3∂ M_i/∂ n_iβ(x_i)=g n_BM_i^3∂ M_i/∂ n_Bβ(x_i), with β(x) defined in Eq. (<ref>). In our previous paper, only the first line of Eq. (<ref>) was presented. 
The second line is simpler to compute in the flavor-blind model, and it is straightforward to demonstrate the equivalence of both lines. Finally, the chemical potential of u, d and s quasiparticles is:μ_i=M_i √(x_i^2+1) - 1/3 n_B∑_jB_j, while for electrons it reads: μ_e = m_e √(x_e^2+1) . It is important to note that the second term ofEq. (<ref>) differs from the expression presented in Eq. (88) of Ref. <cit.> due to an error in that earlier work.
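Because the appendices fix every ingredient of the point-like EOS in closed form, its thermodynamic consistency can be spot-checked numerically: in the one-flavor case the Euler relation ϵ + p = μn must hold identically. A small sketch (ours; the values of g, m, C and a are illustrative, in arbitrary units):

```python
import numpy as np

def qmddm_1flavor(n, m=5.0, C=75.0, a=1.0, g=6):
    """Point-like one-flavor QMDDM at T=0: returns (eps, p, mu)."""
    M = m + C / n**(a/3)                       # density-dependent mass
    x = (6*np.pi**2 * n / g)**(1/3) / M        # dimensionless Fermi momentum
    chi = (x*np.sqrt(x**2+1)*(2*x**2+1) - np.arcsinh(x)) / (16*np.pi**2)
    phi = (x*np.sqrt(x**2+1)*(2*x**2-3) + 3*np.arcsinh(x)) / (48*np.pi**2)
    beta = (x*np.sqrt(x**2+1) - np.arcsinh(x)) / (4*np.pi**2)
    dM = -(C*a/3) * n**(-a/3 - 1)              # dM/dn
    B = -g * M**3 * n * dM * beta              # bag function, positive
    eps = g * M**4 * chi
    p = g * M**4 * phi - B
    mu = M * np.sqrt(x**2 + 1) - B / n         # mu = mu^FG - B/n
    return eps, p, mu

for n in (0.1, 0.5, 2.0):
    eps, p, mu = qmddm_1flavor(n)
    assert np.isclose(eps + p, mu * n, rtol=1e-10)   # Euler relation holds
print("Euler relation eps + p = mu * n verified.")
```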
http://arxiv.org/abs/2312.16095v1
{ "authors": [ "G. Lugones", "A. G. Grunfeld" ], "categories": [ "nucl-th", "astro-ph.HE", "astro-ph.SR", "hep-ph" ], "primary_category": "nucl-th", "published": "20231226154343", "title": "Excluded volume effects in the quark-mass density-dependent model: implications for the equation of state and compact star structure" }
Effects of fluctuations in higher-dimensional AdS black holes
=============================================================
Hyewon Han[[email protected]], Bogeun Gwak[[email protected]] Division of Physics and Semiconductor Science, Dongguk University, Seoul 04620, Republic of Korea We explored the impact of mass fluctuations on anti-de Sitter black holes in higher dimensions, focusing in particular on their effects on the thermodynamic properties and null trajectories of the black holes. Our findings indicate that mass oscillations lead to perturbations in thermodynamic variables and geodesics. These result in second-order fluctuations in the location of the horizon, thereby altering the Hawking temperature and Bekenstein–Hawking entropy. Furthermore, we derived equations for perturbed null rays near the horizon in arbitrary dimensions and for complete null rays in the large D limit. § INTRODUCTION A black hole, in classical gravity theory, is an entity that completely absorbs any matter falling onto its surface. However, accounting for quantum mechanical effects in curved spacetime reveals that a black hole emits thermal radiation, a mechanism demonstrated by Hawking <cit.>. Quantum fluctuations governed by the uncertainty principle spontaneously generate energy quanta near the event horizon of a black hole, and a distant observer measures a thermal spectrum. The black hole can be in thermal equilibrium with its radiation and is characterized by the Hawking temperature, which is proportional to its surface gravity. It was also proposed that a black hole has Bekenstein–Hawking entropy proportional to the area of its horizon <cit.>. Furthermore, these black hole features, which result from considering quantum effects, resemble the laws of thermodynamics <cit.>. The thermodynamic nature of black holes has been a focus of recent extensive research. Using a semi-classical treatment of quantum fluctuations, York <cit.> presented a dynamic description of the Hawking effect. This model involves a spherical, asymptotically flat black hole oscillating in its gravitational quasi-normal mode. It calculates the locations of the black hole's horizons for both spherical and non-spherical oscillations, and the results suggest the formation of a `quantum ergosphere' through which matter can tunnel out of the black hole. Building upon this work, Barrabès et al. <cit.> modified the Hawking radiation caused by fluctuations in the black hole geometry. They also examined the propagation of a massless field by considering a stochastic ensemble of fluctuations <cit.>. These stochastic fluctuations, arising from quantum matter fields, can be described within the framework of stochastic semi-classical gravity <cit.>. Furthermore, the fluctuating black hole model was extended to higher-dimensional gravity, and the dimensional dependencies of the corrections due to spherical oscillations were explored in <cit.>. Various aspects of fluctuating black hole geometries are also addressed in different contexts in <cit.>. In this context, we consider a gravity theory with a negative cosmological constant. The anti-de Sitter (AdS) spacetime, a maximally symmetric solution of the Einstein field equations, has a constant negative curvature and a boundary at spatial infinity.
In particular, AdS spacetime is crucial in conformal field theory (AdS/CFT) correspondence <cit.>, which postulates the relationship between gravity theory on AdS spacetime and CFT defined on the AdS boundary at zero temperature. Namely, (D-1)-dimensional quantum field theory can be described using D-dimensional gravity theory. CFT at finite temperatures is dual to the physics of black holes in AdS spacetime <cit.> because black holes behave as thermodynamic systems. In this context, AdS/CFT correspondence has prompted active research into AdS black holes in arbitrary dimensions <cit.>.Given that gravity theories in higher dimensions typically entail complex and nonlinear dynamics, it is crucial to consider scenarios where the number of dimensions, D, approaches infinity <cit.>. In the context of high-dimensional spacetime (D), the radial gradient of the gravitational potential near the event horizon of a black hole becomes exceedingly steep. Beyond the event horizon, in the exterior region of a narrow radial span of approximately 1/D, gravitational effects become negligible. This corresponding length scale allows us to effectively separate the vicinity of the horizon from distant regions. Near the horizon, the gravitational influence of the black hole is predominant, while in the outer region, the black hole behaves much like a non-interacting particle within a flat (in our case, AdS) spacetime. This limit fundamentally transforms the dynamics, leading to an effective theory on the surface of the black hole’s horizon. The 1/D expansion simplifies complex calculations significantly and facilitates an analytical approach in various recent studies. It has been applied to investigate a wide range of topics, including the spectra of quasi-normal modes and the stability of static black holes<cit.>, rotating black holes <cit.>, (A)dS black holes <cit.>, and black string and brane solutions <cit.>. Furthermore, the large D limit has also been employed to explore the dynamics of solutions in the Einstein–Gauss–Bonnet theory <cit.>. In this work, we investigated mass fluctuations in higher-dimensional AdS black holes. Incorporating a negative cosmological constant, our model represents a direct generalization of the fluctuating black hole discussed in <cit.>. Given the pivotal role that mass plays in the context of black hole spacetime, its fluctuations have a profound impact, not only on the geometry but also on the thermodynamics and the trajectories of geodesics. Mass fluctuation was introduced as a small oscillation in mass, which manifests as its effects prominently in the metric. Solving the equation of motion reveals an oscillation in the black hole's horizon, which we have determined up to the second order. In particular, the oscillatory horizon leads to a change in key thermodynamic variables such as the Hawking temperature and Bekenstein–Hawking entropy. This allows us to gain insights into the preferred state of the black hole within the thermodynamic analysis. Furthermore, the oscillations in mass perturb geodesics within the spacetime, and we have derived a general solution for the propagation of null rays in close proximity to the horizon. Determining the complete solution for an arbitrary location in the geodesic equation becomes considerably intricate in AdS spacetime due to integrals in general dimensions. However, focusing on the gravitational influence's strength enables us to render the geodesic equation solvable by employing a higher-dimensional limit called the large D limit. 
Our comprehensive solution for the large D limit demonstrates the perturbed trajectory of the null ray.The remainder of this paper is organized as follows: In Section 2, we review the fluctuating black hole model and generalize it to AdS spacetime. In Section 3, we present the perturbed position of an event horizon and calculate the thermodynamic variables. In Section 4, we derive a solution describing the trajectory of a null ray propagating in a fluctuating black hole geometry. In Section 5, we examine the solution to a large D limit. The results are summarized in Section 6. We employed a metric signature (-,+,+,+, ⋯) and its units, where G_D=c=1.§ MODEL OF FLUCTUATING BLACK HOLESThe influence of metric fluctuations has been investigated for four <cit.> and higher dimensions <cit.>. To classically treat quantum fluctuations near the event horizon of a black hole, they employed the fluctuating black hole model proposed in <cit.>. This is described by an ingoing Vaidya-type solution with an oscillating mass. For D≥4, the metric is given by <cit.>ds^2 = - ( 1- m/r^D-3) d v^2 + 2 dv dr + r^2 d Ω_D-2^2,where d Ω_D-2^2 are the line elements on the unit (D-2) spheres. The mass function is given bym = m(v,θ) = M [ 1 + ∑_l (2l+1) μ_l sin (ω_l v) Y_l(θ) ] ϑ (v),where M is a mass parameter related to the black hole mass.M_BH=(D-2)Ω_D-2/16 π M,and Ω_D-2=2 π^(D-1)/2/Γ(D-1/2) denotes the area of the unit (D-2) sphere. The second term in the brackets of (<ref>) represents the fluctuating part given by the sum of the resonant modes for each angular index l. The dimensionless amplitude parameter μ_l=α_l ħ^1/2 M_BH^-1, where α_l is a pure number, is extremely small for massive black holes. ω_l indicates the resonant frequencies, and Y_l(θ) are the normalized spherical harmonics on unit (D-2) spheres with a zero azimuthal index. The angular coordinates, except for the azimuthal angle, are collectively denoted as θ=(θ_1, ⋯, θ_D-3). The Heaviside step function ϑ(v) is included assuming that a black hole is formed by the gravitational collapse of a spherical null shell with mass M. Therefore, metric (<ref>) describes the flat spacetime inside the null shell when v<0 and the fluctuating black hole spacetime when v>0.If we consider the lowest mode l=0 as a simple case, we have Y_0(θ)=1 <cit.> andm(v)=M[1+μ_0 sin (ω v) ] ϑ(v),where the subscript 0 of the frequency is omitted. It describes spherical oscillations with a small amplitude μ_0 and a period 2π/ω. The study focused on examining the effects of fluctuations on the event horizon and outgoing rays in black holes. This was achieved by solving a null geodesic equation specific to this geometry. The research aimed to deepen understanding of black hole dynamics under the influence of perturbations.We generalize this model to a theory with the cosmological constant Λ. The Einstein field equations are obtained from the following actions in D dimensions.I = 1/16π∫ d^D x √(-g)(R-2Λ) + I_matter.The cosmological constant is given byΛ = ±(D-1)(D-2)/2 L^2,where L is the curvature radius of the de Sitter (dS) or anti-de Sitter (AdS) spacetime; the positive sign corresponds to dS, and the negative sign corresponds to AdS. The action principle yields the following field equations.G_μν + Λ g_μν = 8 πT_μν,where T_μν is the stress-energy tensor. We consider a spherically symmetric solution with negative Λ. A fluctuating black hole geometry can be approximated using a higher-dimensional Vaidya-AdS metric. 
This is written in the ingoing Eddington–Finkelstein coordinates (v,r) as <cit.> ds^2 = - f(v,r) dv^2 + 2 dv dr + r^2 dΩ^2_D-2, where f(v,r) = 1 - m(v)/r^D-3 + r^2/L^2. To simplify the model, we consider only the spherically oscillating mass (<ref>). Subsequently, we obtain the stress-energy tensor in the form T_μν = (D-2)/(16π r^D-2) [M(1+μ_0 sin(ω v)) δ(v) + M μ_0 ω cos(ω v) ϑ(v)] l_μ l_ν, where l_μ = -∂_μ v is a null vector field tangent to the incoming null geodesics. The first term, containing the delta function, corresponds to a null shell with mass M and additional fluctuating, ingoing null dust. The second term indicates that the energy density in the black hole spacetime fluctuates around a mean of zero. The causal structure of this spacetime without fluctuations, that is, μ_0=0, is shown in Figure <ref>. The vertical line on the right side of the diagram represents the AdS boundary r=∞ and the line on the left side represents a regular origin r=0. The doubled line is a null shell propagating along v=0, and the dashed line after collapse represents the future event horizon r=r_H of the black hole. For rays to propagate outward in the black hole spacetime without entering the trapped region, incoming rays leaving the boundary must depart earlier than the trajectory represented by the dashed line. We focus on outgoing rays reaching the boundary, disregarding those reflected at the boundary and traveling towards the singularity. Metric fluctuations modify the trajectories of rays in the black hole geometry, as we show below. [Figure: Conformal diagram of the asymptotically AdS black hole formed by the gravitational collapse of a null shell without fluctuations. The collapsing null shell propagates along the double line v=0. The zigzag line indicates the singularity of the black hole, and the thick solid line represents the trajectory of a radial null ray propagating near an event horizon.] To examine the influence of fluctuations on the event horizon and propagating rays, we solve the null geodesic equation for radially outgoing rays in the region v>0: 1 - M[1+μ_0 sin(ω v)]/r^D-3 + r^2/L^2 = 2 dr/dv. As the amplitude parameter μ_0 of the fluctuations is assumed to be minuscule, we can employ a perturbation method. We set the radial coordinate to r = r(v) = R(v) + ρ(v) + σ(v) + ⋯, where R(v) is a solution without fluctuations and ρ(v) and σ(v) correspond to the first- and second-order perturbations in μ_0, respectively. The higher-order terms are denoted by dots, and we work up to the second order.
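The outgoing-ray equation above is a single nonlinear ODE in v, so the perturbative solutions developed below can always be cross-checked by direct integration. A minimal sketch (ours; the parameter values are arbitrary, and the stopping radius r = 50 is a numerical stand-in for the AdS boundary, which the ray reaches in finite coordinate time):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

D, L, M = 6, 1.0, 1.0            # dimensions, AdS radius, mass parameter
mu0, omega = 1e-3, 5.0           # fluctuation amplitude and frequency

def rhs(v, r):
    """Radial null geodesic for v > 0: 2 dr/dv = 1 - m(v)/r^(D-3) + r^2/L^2."""
    m = M * (1.0 + mu0 * np.sin(omega * v))
    return 0.5 * (1.0 - m / r[0]**(D - 3) + r[0]**2 / L**2)

# Unperturbed horizon: R_H solves R^(D-3) * (1 + R^2/L^2) = M.
R_H = brentq(lambda R: R**(D - 3) * (1 + R**2 / L**2) - M, 1e-6, 10.0)

# Launch a ray slightly outside the horizon and stop once it is far away.
boundary = lambda v, r: r[0] - 50.0
boundary.terminal = True
sol = solve_ivp(rhs, (0.0, 20.0), [1.01 * R_H], events=boundary,
                rtol=1e-10, atol=1e-12)
print(f"R_H = {R_H:.6f};  ray reaches r = 50 at v = {sol.t_events[0][0]:.3f}")
```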
In the following calculations, we fix the value of the retarded time coordinates as u=v-2R_*, whereR_*= L arctan(R/L), v<0∫(1 - M/R^D-3+ R^2/L^2)^-1 d R,v>0which is the radial tortoise coordinate without fluctuations. This requirement corresponds to a condition in which the rays propagating outward in the perturbed black hole geometry reach the boundary at the same retarded time as the rays propagating in the unperturbed geometry. Subsequently, u is a fixed parameter specifying the unperturbed trajectory r=R(v;u) of the ray.By substituting (<ref>) into the geodesic equation (<ref>) for v>0 and linearizing, we obtain 2 d R /d v = 1 - M/R^D-3 +R^2/L^2, 2 d ρ/d v - [(D-3) M/R^D-2 +2 R/L^2] ρ = - M/R^D-3μ ,2 d σ/d v - [(D-3) M/R^D-2+2 R/L^2] σ = (D-3) M/R^D-3[ ρ/Rμ - D-2/2ρ^2/R^2]+ρ^2/L^2,where μ=μ_0 sin (ω v). It is convenient to write the perturbation equations (<ref>) and (<ref>) in common their form as d k/d v - [D-3/2M/R^D-2 +R/L^2] k = K,where the first-order perturbation corresponds tok = ρ,K = -M/2R^D-3μ.The second-order perturbation corresponds tok = σ,K = D-3/2M/R^D-3[ ρ/Rμ - D-2/2ρ^2/R^2] + ρ^2/2 L^2.In the following chapters, we solve these perturbation equations to seek solutions describing the perturbed position of an event horizon and the perturbed trajectories of the rays propagating in black hole geometry.§ PERTURBED EVENT HORIZONS AND THERMODYNAMICSWe begin by considering an unperturbed solution R=R_H and determine a perturbed solution r=r_H=R_H+ρ_H+σ_H+⋯ for v>0, describing the position of the event horizon of a fluctuating black hole. The unperturbed position R_H satisfiesR_H^D-3( 1+ R_H^2/L^2) - M = 0.The unperturbed surface gravity isκ = D-3/2M/R_H^D-2 + R_H/L^2.Using these variables, the perturbation equations are expressed asd ρ_H/d v - κρ_H =- μ/2( 1+ R_H^2/L^2),d σ_H/d v - κσ_H = D-3/2( 1+ R_H^2/L^2) [ ρ_H/R_Hμ - D-2/2ρ_H^2/R_H^2] + ρ_H^2/2 L^2.By solving these first-order differential equations, we obtained the perturbed position of the event horizon.ρ_H =μ_0/2 κ( 1+ R_H^2/L^2) Ωcos (ω v) + sin (ω v)/1+ Ω^2 , σ_H = μ_0^2/4 κ^2( 1+ R_H^2/L^2)^2 [ (D-3)/2 R_H 2Ω^2 (2 - Ω^2) cos (2ω v) + Ω(1-5Ω^2) sin (2 ω v)/(1+ Ω^2)^2 (1+4Ω^2). . - 1/4 κ{(D-3)(D-4)/ R_H^2 +(D-1)(D-2)/L^2}. . × (1-5Ω^2) sin^2 (ω v) + Ω (2-Ω^2) sin (2 ω v) + Ω^2(5+2Ω^2) /(1+ Ω^2)^2 (1+4Ω^2)],where Ω=ω/κ denotes a dimensionless frequency. Here, the integration constants are chosen to eliminate terms that cause fluctuations ρ_H(v) and σ_H(v) to increase exponentially over time v. These results indicate that the event horizon fluctuates with the amplitudes and frequencies determined by the metric fluctuation parameters.Small and periodic changes in the position of the event horizon affect the thermodynamic variables. We calculated their mean values by averaging over time v to investigate the overall changes and denote them with an overbar. The mean values of the surface area and surface gravity of the perturbed black hole geometry were calculated as follows.𝒜 ≡Ω_D-2(r_H^D-2(v)) = Ω_D-2R_H^D-2[1+μ_0^2/4 (1+Ω^2)( 1+ R_H^2/L^2)^2 (D-2){(D-3)-(D-1) R_H^2/L^2 }/{(D-3)+(D-1) R_H^2/L^2 }^3], κ ≡D-3/2M ( 1+μ_0 sin (ω v)/r_H^D-2(v)) + (r_H(v))/L^2= κ[1+μ_0^2/4 (1+Ω^2)( 1+ R_H^2/L^2)^2 . . ×(D-2)(D-3)^2+(D-1) R_H^2/L^2{4(D-3)-(D-1)(D-2)R_H^2/L^2 }/{(D-3)+(D-1) R_H^2/L^2 }^4].The mean value of the Hawking temperature can be obtained from the relation T_H = ħ κ /(2 π k_B), where k_B is the Boltzmann constant. 
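The first-order horizon motion is fully explicit, so κ, Ω and the oscillation ρ_H(v) can be evaluated directly and compared against the analytic amplitude. A short sketch (ours; parameter values arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

D, L, M = 6, 1.0, 1.0
mu0, omega = 1e-3, 5.0

# Unperturbed horizon and surface gravity.
R_H = brentq(lambda R: R**(D - 3) * (1 + R**2 / L**2) - M, 1e-6, 10.0)
kappa = (D - 3) / 2 * M / R_H**(D - 2) + R_H / L**2
Omega = omega / kappa                         # dimensionless frequency

def rho_H(v):
    """First-order horizon displacement driven by the mass oscillation."""
    pref = mu0 / (2 * kappa) * (1 + R_H**2 / L**2)
    return pref * (Omega * np.cos(omega * v) + np.sin(omega * v)) / (1 + Omega**2)

v = np.linspace(0.0, 4 * np.pi / omega, 400)  # two oscillation periods
amp = mu0 / (2 * kappa) * (1 + R_H**2 / L**2) / np.sqrt(1 + Omega**2)
print(f"R_H = {R_H:.5f}, kappa = {kappa:.4f}, Omega = {Omega:.3f}")
print(f"max |rho_H| = {np.abs(rho_H(v)).max():.3e} (analytic amplitude {amp:.3e})")
```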
Subsequently, the changes in the surface area δ𝒜 = 𝒜 - 𝒜 and Hawking temperature δ T_H = T_H - T_H caused by fluctuations are related byδ𝒜/𝒜 = δ T_H/T_H - μ_0^2/1+Ω^2( 1+ R_H^2/L^2)^2 (D-1)(D-3)R_H^2/L^2/{(D-3)+(D-1)R_H^2/L^2}^4,where 𝒜 and T_H are values without fluctuations. Unlike the asymptotically flat case in <cit.>, the fluctuating AdS black hole features a relation that includes an additional term that is dependent on the dimension number and curvature radius. The entropy of the black hole also fluctuates. By identifying the energy with the mean mass of the fluctuating black hole E=m(v)=M and using the first law dE = T_H dS, the mean value of the entropy is calculated as follows.S≃ k_B 𝒜/4 ħ[1- μ_0^2/2 (1+Ω^2)( 1+ R_H^2/L^2)^2 . . ×(D-2)(D-3)^2 + (D-1)R_H^2/L^2 {2(D-3)-(D-1)(D-2)R_H^2/L^2}/{(D-3)+(D-1)R_H^2/L^2}^4].This indicates that the standard relationship between the entropy and surface area was modified. The value is slightly less than the entropy of the stationary spacetime without fluctuations, μ_0=0. Because a system with higher entropy is preferred, this implies that after an extended period, the black hole absorbs a fluctuating null flux, and the system can evolve into a stationary spacetime according to the second law δ S ≥ 0. Moreover, if the other parameters L and R_H are fixed, the correction term decreases as the number D of spacetime dimensions increase. We recover the results <cit.> for the asymptotically flat case when 1/L^2=0. § PROPAGATING NULL RAYS IN FLUCTUATING GEOMETRY We will now calculate the general solutions for R > R_H and v>0. This pertains to the trajectory of a perturbed ray propagating outward beyond an event horizon. To achieve this, we employ the zeroth-order equation (<ref>) and substitute the advanced time parameter v in the perturbation equation (<ref>) with R(v;u), where u is the fixed retarded time parameter: Subsequently, we have(1-M/R^D-3+R^2/L^2) d k/d R-[ (D-3)M/R^D-2+2R/L^2] k = 2K.By solving this equation, we obtaink = (1-M/R^D-3+R^2/L^2) [-∫^∞_R2 K/(1-M/R'^D-3+R'^2/L^2)^2d R' + k_∞],where k_∞ = lim_R →∞k(R)/(1-M/R^D-3+R^2/L^2),which denotes the integration constant. In order for the perturbed rays to reach the boundary in the same retarded time as the unperturbed rays, we make the assumption that k_∞=0 <cit.>. Therefore, we haveρ =( 1-M/R^D-3+R^2/L^2) ∫^∞_R(1-M/R'^D-3+R'^2/L^2)^-2 M/R'^D-3μd R',σ =-( 1-M/R^D-3+R^2/L^2) ∫^∞_R(1-M/R'^D-3+R'^2/L^2)^-2×{(D-3) M/R'^D-3(ρ/R'μ- D-2/2ρ^2/R'^2) +ρ^2/L^2}d R'.The fluctuating part of the mass function is written asμ=μ(R')=μ_0 sin[ Ω(ũ+2κ R_*(R')) ],where R_* denotes the tortoise coordinates in (<ref>) for v>0, and ũ=κ u.We consider a ray reaching the boundary at later time u. Because such a ray propagates near the event horizon in black hole geometry, its effective frequency becomes extremely high. Thus, we can track it backward in time using the geometric optics approximation. The trajectory of the ray without fluctuations, that is, μ_0=0, is indicated by the thick solid line in Figure. <ref>. In the region v>0, the ray propagates close to the event horizon and crosses the null shell at v=0. It travels in AdS spacetime inside the shell v<0, bounces off at the regular origin r=R=0, and diverges toward the boundary. One can establish a relation V_0(u) between the value of the advanced time v when the ray leaves the boundary and the value of the retarded time u when it arrives at the boundary. 
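Before constructing the V_0(u) map, note that the first-order solution ρ above reduces to a single convergent quadrature and is easy to evaluate numerically. A sketch (ours; the reference point fixing the integration constant of R_*, the truncation radius R_max, and the parameter values are all assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

D, L, M = 6, 1.0, 1.0
mu0, Omega, u_t = 1e-3, 1.2, 0.0   # u_t = kappa*u; phase reference is ours

f = lambda R: 1.0 - M / R**(D - 3) + R**2 / L**2
R_H = brentq(lambda R: R**(D - 3) * (1 + R**2 / L**2) - M, 1e-6, 10.0)
kappa = (D - 3) / 2 * M / R_H**(D - 2) + R_H / L**2

R_ref = 2.0 * R_H                  # fixes the integration constant of R_*
R_star = lambda R: quad(lambda s: 1.0 / f(s), R_ref, R)[0]

def mu(Rp):                        # fluctuation sampled along the ray
    return mu0 * np.sin(Omega * (u_t + 2.0 * kappa * R_star(Rp)))

def rho(R, R_max=60.0):
    """First-order ray perturbation rho(R) as a single quadrature."""
    integrand = lambda Rp: f(Rp)**(-2) * (M / Rp**(D - 3)) * mu(Rp)
    val, _ = quad(integrand, R, R_max, limit=200)  # tail above R_max ~ R^-7
    return f(R) * val

for R in (1.1 * R_H, 1.5 * R_H, 3.0 * R_H):
    print(f"R/R_H = {R / R_H:.1f}:  rho = {rho(R):+.3e}")
```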
When v<0, we obtain the following relationship.V_0=-2L arctan( R_0/L),where R_0 denotes the value of the unperturbed radial coordinate R on the null shell v=0. When v>0, we haveu = - 2 ∫(1 - M/R_0^D-3+ R_0^2/L^2)^-1 d R_0.Because the value of the radial coordinate of the ray should be continuous across the shell v=0, we obtain the relation V_0(u) by combining the equations above. Fluctuations affect the trajectory of the ray in the region v>0 and thus modify the V_0(u) function. In the presence of fluctuations μ_00, Equation (<ref>) remains unchanged, whereas Equation (<ref>) is modified toV=-2L arctan[ R_0 + ρ(R_0) + σ(R_0)/L].By calculating Equations (<ref>), (<ref>), and (<ref>) for v=0 and large u, we obtain the V(u) function from relation (<ref>), which illustrates the effect of metric fluctuations in the black hole geometry. § THE RAY PROPAGATION IN THE LARGE D LIMITThe complex nature of the fluctuating component (<ref>) of the mass function, which includes the tortoise coordinates, poses a challenge for analytical solutions in general dimensions. In this section, we demonstrate how simplifications can be applied by taking advantage of a large D limit <cit.>, enabling the derivation of a comprehensive solution.In the scenario of a significantly high number of spacetime dimensions, where D ≫ 1, the radial gradient of the gravitational potential becomes extremely steep. Consequently, the gravitational field becomes highly localized within a narrow radial range near the event horizon of a black hole. In this context, we can examine the black hole’s geometry by segregating the regions very close to the horizon from those farther away. To investigate a ray reaching the boundary at a later time u in the `near-horizon zone,' we introduce a new coordinate 𝖱̂≡( R/R_H)^D-3,defined by ln𝖱̂≪ D-3. Using this near-horizon coordinate and considering up to the first order in 1/D, the unperturbed tortoise coordinate for v>0 is calculated asd R_* = (1 - M/R^D-3+ R^2/L^2)^-1d R≃ R_H/D (1+R_H^2/L^2)1/𝖱̂-1d 𝖱̂,yieldingR_* = R_H/D(1+R_H^2/L^2)ln (𝖱̂-1).Note that (D-3) becomes D because we take D to be very large. Subsequently, we have the fluctuating part of the mass function in the form μ (𝖱̂ )= μ_0 sin[ Ω{ũ + (1+2R_H^2/D(R^2_H+L^2)) ln (𝖱̂-1) }] = μ_0 I m[ e^i Ωũ (𝖱̂-1)^i Ω(1+ 2R_H^2/D(R^2_H+L^2))].The first-order perturbation is as followsρ (𝖱̂) =R_H/D(1-1/𝖱̂) I(𝖱̂),whereI(𝖱̂)= ∫^∞_𝖱̂μ(τ)/(τ - 1)^2 d τ =μ_0 I m[e^i Ωũ∫^∞_𝖱̂(τ - 1)^-2+iΩ(1+2R_H^2/D(R^2_H+L^2)) d τ] = μ_0 I m[ e^i Ωũ1 + iΩ/1+Ω^2 ( 𝖱̂ -1 )^-1+i Ω(1+2R_H^2/D(R^2_H+L^2))].To calculate the V(u) function, we need a value on the null shell v=0 that satisfiesũ + (1+2R_H^2/D(R^2_H+L^2)) ln (𝖱̂_0-1 ) = 0,where 𝖱̂_0 denotes the value of 𝖱̂ at v=0. 
By imposing this condition, we obtain the following solution on the shell.ρ(𝖱̂_0)≃μ_0 R_H Ω/D(1+Ω^2)𝖱̂_0.Next, the second-order perturbation is rewritten in near-horizon coordinates as follows.σ (𝖱̂) = - (1-1/𝖱̂) ∫^∞_𝖱̂ρ(τ)/(τ - 1)^2{μ(τ) - D/2R_Hρ(τ) } d τ.Using Equation (<ref>), we obtainσ (𝖱̂)= - R_H/D(1-1/𝖱̂) [ ∫^∞_𝖱̂μ(τ)/τ(τ - 1) I(τ) dτ - 1/2∫^∞_𝖱̂I^2(τ)/τ^2 dτ].The integration of the second term by parts yieldsσ (𝖱̂)= R_H/D(1-1/𝖱̂) [ I^2(𝖱̂)/2𝖱̂ - ∫^∞_𝖱̂I(τ)/τ{μ(τ)/τ - 1 - d I(τ)/dτ}dτ].UtilizingdI(τ)/dτ = d/dτ∫^∞_τμ(ξ)/(ξ-1)^2dξ = -μ(τ)/(τ-1)^2,it becomes evident that the second term in Equation (<ref>) can be calculated as∫^∞_𝖱̂I(τ)/τ{μ(τ)/τ - 1 - dI(τ)/dτ}dτ = ∫^∞_𝖱̂μ(τ)/(τ-1)^2 I(τ)dτ = 1/2 I^2(𝖱̂).Therefore, we have σ (𝖱̂)= -R_H/2D(1-1/𝖱̂)^2 I^2(𝖱̂).We obtain the value on the shell using the condition (<ref>) asI^2(𝖱̂_0) ≃μ_0^2 Ω^2/(1+Ω^2)^2 (𝖱̂_0-1)^2.Thus, the value of the second-order perturbation on the shell isσ(𝖱̂_0) ≃ - μ_0^2 R_H Ω^2 /2D(1+Ω^2)^2 𝖱̂_0^2 ,where 𝖱̂_0 can be written as a function of ũ using (<ref>). Finally, we obtain the modified V(ũ) function (<ref>) owing to fluctuations by collecting the solutions (<ref>) and (<ref>).V(ũ)=-2L arctan[ R_H/D L{ D + ln(1+ e^-(1-2R_H^2/D(R^2_H+L^2))ũ) . . . .+μ_0 Ω/1+Ω^2(1+ e^-(1-2R_H^2/D(R^2_H+L^2))ũ)^-1. . . . -μ_0^2 Ω^2/2(1+Ω^2)^2(1+ e^-(1-2R_H^2/D(R^2_H+L^2))ũ)^-2}].This result indicates the influence of metric fluctuations in a large D-dimensional AdS black hole geometry in the near-horizon regime. Interestingly, a large D limit simplifies the calculations and enables us to obtain a complete solution in a compact form. § CONCLUSIONSIn this study, we investigated the effects of small fluctuations near the event horizon of high-dimensional AdS black holes. We generalized the models of <cit.>, where the fluctuations were approximated by the oscillations of the sources of asymptotically flat black holes, for a negative cosmological constant. In our simple model, a fluctuating black hole is described by the following higher-dimensional Vaidya-AdS solution: This results from the gravitational collapse of a massive null shell and is characterized by a mass oscillating in the spherical mode, l=0. To investigate the impact of small-amplitude fluctuations on the horizon and rays within black hole geometry, we employed a perturbation method to solve the null geodesic equation for radially outgoing rays. We found that the event horizon’s position fluctuates, with amplitudes and frequencies determined by metric fluctuation parameters. Consequently, thermodynamic variables at the horizon exhibit periodic changes, with time-averaged values incorporating variables that have correction terms proportional to the square of the amplitude parameter μ_0. In particular, the modified entropy of the black hole has a slightly smaller mean value than that without fluctuations, corresponding to a stationary system. The fluctuating spacetime evolves into classical stationary spacetime following the second law δ S ≥ 0 of thermodynamics. Furthermore, we calculated a solution that describes the perturbed trajectory of an outgoing ray near the horizon. The impact of fluctuations can be investigated by calculating the relation V(u) between the advanced time v=V when the ray starts at the AdS boundary inside the null shell and the retarded time u when it reaches the boundary of the black hole geometry. 
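As a numerical illustration of this boundary map, the closed-form V(ũ) obtained above is trivial to evaluate and compare against the unperturbed μ_0 = 0 baseline. A sketch (ours; the parameter values are arbitrary):

```python
import numpy as np

D, L, R_H = 100, 1.0, 1.0
mu0, Omega = 1e-3, 1.5

def V(u_t, amp=mu0):
    """Large-D boundary map V(u~) with the O(mu0) and O(mu0^2) corrections."""
    g = 1.0 + np.exp(-(1.0 - 2.0 * R_H**2 / (D * (R_H**2 + L**2))) * u_t)
    corr = (amp * Omega / (1.0 + Omega**2)) / g \
         - (amp**2 * Omega**2 / (2.0 * (1.0 + Omega**2)**2)) / g**2
    return -2.0 * L * np.arctan(R_H / (D * L) * (D + np.log(g) + corr))

for u_t in (0.0, 5.0, 20.0):
    shift = V(u_t) - V(u_t, amp=0.0)
    print(f"u~ = {u_t:5.1f}:  V = {V(u_t):+.8f},  fluctuation shift = {shift:+.2e}")
```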
We derived a complete solution by introducing near-horizon coordinates defined in the large D limit, which simplifies complex calculations in higher dimensions. Our results include those of <cit.> for an asymptotically flat spacetime limit, L →∞.We focused on spherical oscillation modes for simplicity, but exploring higher modes l ≥ 2 could yield intriguing results. Future studies can use our results to examine both the outgoing energy flux and the asymptotic spectrum. Moreover, these findings lay the groundwork for investigating how fluctuations affect the geometry of spinning black holes. AcknowledgmentsThis research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2022R1I1A2063176) and the Dongguk University Research Fund of 2023. BG appreciates APCTP for its hospitality during completion of this work. jhep
http://arxiv.org/abs/2312.16027v1
{ "authors": [ "Hyewon Han", "Bogeun Gwak" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20231226123044", "title": "Effects of fluctuations in higher-dimensional AdS black holes" }
Bezier-based Regression Feature Descriptor for Deformable Linear Objects Fangqing Chen ^*University of Toronto Copyright may be transferred without notice, after which this version may no longer be accessible. ^* Corresponding Author. =============================================================================================================================================================================== In this paper, a feature extraction approach for deformable linear objects is presented, which uses a Bezier curve to represent the original geometric shape. The proposed extraction strategy is combined with a parameterization technique; the goal is to compute the regression features from the visual-feedback RGB image and finally obtain an efficient shape feature in a low-dimensional latent space. Existing works in the literature often fail to capture the complex characteristics in a unified framework. They also struggle in scenarios where only local shape descriptors are used to guide the robot to complete the manipulation. To address these challenges, we propose a feature extraction technique using a parameterization approach to generate the regression features, which leverages the power of the Bezier curve and linear regression. The proposed extraction method effectively captures topological features and node characteristics, making it well-suited for the deformable object manipulation task. A large number of simulations are conducted to evaluate the presented method. Our results demonstrate that the proposed method outperforms existing methods in terms of prediction accuracy, robustness, and computational efficiency. Furthermore, our approach enables the extraction of meaningful insights from the predicted links, thereby contributing to a better understanding of the shape of deformable linear objects. Overall, this work represents a significant step forward in the use of Bezier curves for shape representation. Robotics, Shape-servoing, Asymmetric saturation, Sliding Mode Control § INTRODUCTION The manipulation of deformable objects is currently an open (and hot) research problem in robotics <cit.> that has attracted many researchers due to its great applicability in many areas, e.g., manipulating fabrics <cit.>, shaping of food materials <cit.>, assembling soft components <cit.>, manipulating cables <cit.>, interacting with tissues <cit.>, etc. Note that physical interactions between a robot and a deformable object will inevitably alter the object's shape. The feedback control of these additional object degrees of freedom (DOF) is referred to in the literature as shape servoing <cit.>, a frontier problem that presents one main challenge <cit.>: the efficient feedback characterization of the object's shape <cit.>, which has an infinite number of DOF to be controlled by a robot with limited manipulation directions <cit.>. Bezier curves, a class of fitting models designed for shape fitting and approximation, have demonstrated remarkable success in numerous tasks such as shape fitting <cit.>, data extraction <cit.>, and shape reconstruction <cit.>. Their ability to capture both local and global structural information, as well as node features, makes them a promising candidate for the problem of shape representation in the field of deformable object manipulation <cit.>.
In this paper, we introduce a Bezier-based shape representation framework that leverages the power of regression features and linear regression, Our approach extends traditional Bezier curve models by incorporating parameterization curve enhancements tailored specifically to the unique challenges presented by shape representation. To the best of our knowledge, this is the first attempt to design a feature extraction framework for DLO with the consideration of the Bezier curve and parameterization technique, which helps to obtain a low-dimensional shape descriptor in the latent feature space. The rest of this paper is organized as follows:Section 2 provides a review of related work in the field of shape representation and Bezier curve.Section 3 presents the methodology of the regression feature extraction, including the problem formulation and the algorithm architecture.Section 4 describes the simulation setup and presents the results and comparison to existing methods.Finally, Section 6 concludes the paper and discusses potential directions for future work. The key contributions of this paper are two-fold: * Bezier-based Shape Descriptor. This paper forms a Bezier-based extraction framework utilizing the parameterization regression. The core modules are constructed in three parts, the build of the cost function, the build of the augmented function, and the linear regression, The module aims to generate a low-dimensional feature representing the original shape configuration of the deformable linear object. The proposed manipulation runs in a model-free manner and does not need any prior knowledge of the system model.* The Linear Regression Calculation. This paper uses linear regression to solve the fitting cost function, thereby directly obtaining the analytical expression of the shape feature and also providing the conditions for the effective stability of the feature. Moreover, the calculation method has good real-time performance and stronger robustness than the other counterparts.§ RELATED WORKIn this section, we review the existing literature relevant to our study. This discussion is divided into two main subsections: shape detection and feature extraction. §.§ Shape DetectionIn the field of deformable object manipulation, the important module is to obtain a suitable shape description <cit.>. For this issue, the current methods are mainly divided into two aspects: local shape descriptor (LSD) and global shape descriptor (GSD). As for LSD, the typical example is the point form in 2D/3D points, e.g., central point, marker point, feature point, etc. This point form is the simplest LSD using data points as shape features of deformable linear objects.Because no matter from the perspective of geometry or image processing, features based on data points are the simplest features that can be extracted. For example, feature points <cit.>,hole points <cit.>, and other artificial markers <cit.>. However, point-based LSD has an obvious drawback, i.e., high-dimensional considerable, and low robustness to lightness, contrasts, and other human interference <cit.>, which can easily lead to inaccurate DLO shape description and affect system control accuracy <cit.>. To this end, the center-mass point is used to represent the shape change of the ball's middle area in <cit.>, which improves the anti-interference of the point-based feature. Other types of LSD,such as lines, ellipses, etc., and hybrid features are also used in some servo tasks <cit.>. 
However, the above point-based features often face a high-dimensionality problem, which may degrade the real-time performance of the system and even cause the manipulation to fail <cit.>. Given the above discussion, it is crucial to design a feature extraction method that turns the original shape configuration into a low-dimensional feature vector capturing the physical characteristics as much as possible <cit.>. Especially in cases with highly dynamic processes, where shapes change quickly, an efficient shape descriptor is needed in complex environments. Meanwhile, it is also important to enhance the robustness to illumination, contrast, environmental noise, and object surface texture in the real environment.§.§ Feature Extraction For traditional rigid objects, it is easy to use a 6-DOF vector to describe the whole shape configuration <cit.>. However, as flexible objects usually carry infinite-dimensional geometric information, it is not possible to represent them with a simple 6-DOF vector <cit.>. Thus, a simple and effective feature extractor that can characterize these objects in an efficient (i.e., compact) manner should be designed <cit.>. A method based on linearly parameterized (truncated) Fourier series was also proposed to represent the object's contour <cit.>. § BEZIER-BASED FEATURE In this article, the centerline configuration of the object is defined as follows: 𝐜̅ = [𝐜_1^⊤, …, 𝐜_N^⊤]^⊤∈ℝ^2N, 𝐜_i = [c_xi, c_yi]^⊤∈ℝ^2, where N is the number of points comprising the centerline, and 𝐜_i for i=1,…,N are the pixel coordinates of the i-th point represented in the camera frame. Note that the dimension 2N of the observed centerline 𝐜̅ is generally large, thus it is inefficient to use it directly in a shape controller as it contains redundant information. In this work, we outline the feature-generation concept, i.e., shape feature extraction. This module aims to extract efficient features 𝐬 from the original data space and map them into a low-dimensional feature space. We regard the centerline 𝐜̅ of the 2D feedback as a continuous parametric curve of arc length with the following format: 𝐜_i = 𝐟(ρ_i) ∈ℝ^2, i = 1, …, N, where ρ∈ [0,1] is a parametric variable representing the curve's normalized arc length. Geometrically, ρ_i can be seen as the normalized arc length between the start point 𝐜_1 and the point 𝐜_i, e.g., ρ_1=0 and ρ_N=1. The Bezier-based parametric curve fitting is modeled as follows: 𝐟(ρ_i) = ∑_j = 0^n 𝐩_j B_j,n(ρ_i), i = 1, …, N, where n∈ℕ is the fitting order, 𝐩_j ∈ℝ^2 represents the shape parameters of 𝐜̅, and B_j,n(ρ) is the chosen Bezier regression parameterization, which takes the following form: B_j,n(ρ) = n!/(j!(n - j)!) (1 - ρ)^n - jρ^j. A Bezier curve approximates the shape with a polynomial expression built on control points; n+1 control points determine an n-degree Bezier curve. Interestingly, the feature vector 𝐩 generated from the Bezier curve has an obvious physical meaning, i.e., the shape control points of the Bezier curve. The Bezier curve is first-order differentiable, which guarantees that the fit varies smoothly with the control points and without oscillations; thus, it can represent complex shapes. However, there may be a computational burden as the fitting order increases.
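The Bernstein basis above is cheap to evaluate directly. A minimal sketch (ours) that also verifies the partition-of-unity property ∑_j B_{j,n}(ρ) = 1, a useful sanity check on any implementation:

```python
import numpy as np
from math import comb

def bernstein(n, rho):
    """All Bernstein polynomials B_{j,n}(rho), j = 0..n, for rho in [0, 1]."""
    rho = np.asarray(rho, dtype=float)
    return np.stack([comb(n, j) * (1 - rho)**(n - j) * rho**j
                     for j in range(n + 1)], axis=-1)    # shape (..., n+1)

def bezier(P, rho):
    """Evaluate f(rho) = sum_j P[j] * B_{j,n}(rho) for control points P."""
    return bernstein(len(P) - 1, rho) @ np.asarray(P, dtype=float)

rho = np.linspace(0, 1, 5)
assert np.allclose(bernstein(6, rho).sum(axis=-1), 1.0)  # partition of unity
P = np.array([[0, 0], [1, 2], [3, -1], [4, 1]])          # a cubic in 2-D
print(bezier(P, rho))
```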
By using the parameterization (<ref>) and the Bezier fitting model (<ref>), the cost function can be constructed as follows: Q = ∑_i = 1^N ‖𝐜_i - ∑_j = 0^n 𝐩_j B_j,n( ρ_i)‖^2. In augmented matrix form, (<ref>) can then be rewritten as Q = ( 𝐁𝐬 - 𝐜̅)^⊤( 𝐁𝐬 - 𝐜̅) for a “tall” regression matrix 𝐁 satisfying: 𝐁 = [𝐁_1^⊤, …, 𝐁_N^⊤]^⊤∈ℝ^2N × 2(n+1), 𝐁_i= [B_0,n(ρ_i),…,B_n,n(ρ_i)] ⊗𝐈_2 ∈ℝ^2 × 2(n+1), where 𝐈_2 is the 2 × 2 identity matrix and ⊗ is the Kronecker product. We seek to minimize (<ref>) to obtain a feature vector 𝐬 whose reconstruction closely approximates the observed centerline. Thus, the feature vector 𝐬 is computed via the normal equation at every iteration as 𝐬 = ( 𝐁^⊤𝐁)^-1𝐁^⊤𝐜̅, where N ≫ n+1 is assumed to hold to ensure the invertibility of 𝐁^⊤𝐁. Although this paper only presents the Bezier curve form, many other geometric expressions can be used as the regression basis, e.g., B-splines and rational approximations.§ EXPERIMENTAL RESULTS In this section, several experiments are conducted to validate the effectiveness of the proposed feature extraction method.§.§ Centerline Detection For accurate shape detection, we adopt the method in [x] as an example. Four clustering algorithms are used to detect the centerline configuration from the original 2D RGB feedback image: Self-Organizing Map (SOM), K-means (KMS), Gaussian Mixture Model (GMM), and Fuzzy C-Means (FCM). From Fig. <ref>, it can be seen that KMS has the best clustering ability for extracting the centerline. Thus, throughout the paper, the KMS technique is used for centerline extraction. §.§ Feature Extraction In this section, the proposed feature extraction module is tested with various fitting orders, i.e., n=2, n=4, n=6, n=8. The number of shape points for the centerline configuration is set to N=120, much larger than the fitting order, to ensure that the matrix invertibility assumption holds. Fig. <ref> presents the extraction comparison of the proposed Bezier-based feature extraction method, with the fitting order set from n=2 to n=8. The results show that, as the fitting order n increases, the extraction accuracy also increases: the case with n=8 is the best, followed by n=6 and n=4, while n=2 gives the worst performance. § CONCLUSION In this paper, we presented a feature extraction approach that generates a low-dimensional feature representing the whole shape configuration of a deformable linear object. The proposed extraction algorithm successfully addresses the high-dimensionality issue, which helps the robot complete the manipulation without any artificial markers. Our real-world experiments demonstrate that the approach introduced in this work has better accuracy and faster real-time performance than existing methods, showcasing its effectiveness and robustness.
http://arxiv.org/abs/2312.16502v1
{ "authors": [ "Fangqing Chen" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231227101426", "title": "Bezier-based Regression Feature Descriptor for Deformable Linear Objects" }
These authors contributed equally to this work. [email protected] QuTech, Delft University of Technology, Delft, The Netherlands. Kavli Institute of Nanoscience, Delft University of Technology, Delft, The Netherlands. Netherlands Organisation for Applied Scientific Research (TNO), Delft, The Netherlands. The coherent control of interacting spins in semiconductor quantum dots is of strong interest for quantum information processing as well as for studying quantum magnetism from the bottom up. On paper, individual spin-spin couplings can be independently controlled through gate voltages, but nonlinearities and crosstalk introduce significant complexity that has slowed down progress in past years. Here, we present a 2×4 germanium quantum dot array with full and controllable interactions between nearest-neighbor spins. As a demonstration of the level of control, we define four singlet-triplet qubits in this system and show two-axis single-qubit control of all qubits and SWAP-style two-qubit gates between all neighbouring qubit pairs. Combining these operations, we experimentally implement a circuit designed to generate and distribute entanglement across the array. These results highlight the potential of singlet-triplet qubits as a competing platform for quantum computing and indicate that scaling up the control of quantum dot spins in extended bilinear arrays can be feasible. Universal control of four singlet-triplet qubits Lieven M. K. Vandersypen January 14, 2024 ================================================== The coherent control of a large-scale array of spins in the solid state represents a major challenge in the field of quantum-coherent nanoscience <cit.>.
As a quintessential platform for quantum spin control, the lithographically-defined semiconductor quantum dot has shown great promise both for fault-tolerant digital quantum computation <cit.> and for analog quantum simulation of emergent quantum phenomena <cit.>. Nevertheless, the inherent nanoscale dimensions of the devices, the geometrical constraints in integrating all the required components, and the necessity of employing high-frequency electromagnetic fields in cryogenic environments present important challenges for the integration and control of a large number of spins. Already, significant efforts have been undertaken to tackle these challenges. For single-spin qubits, the number of coherently controlled spins has been scaled up to six in a one-dimensional array <cit.> and four in a two-dimensional array <cit.>. A six-dot linear array was also used to achieve universal control of two qubits that are each encoded in a subspace of three electron spins distributed over three dots <cit.>. For singlet-triplet qubits defined in a subspace of two spins across two dots, recent progress includes the individual control of three to four uncoupled qubits <cit.> and the operation of a single qubit in a 3×3 quantum dot array <cit.>. Similar to exchange-only qubits, singlet-triplet qubits <cit.> allow fully electrical qubit control using baseband voltage pulses. The use of baseband-only control signals can avoid commonly encountered problems of single-spin qubits such as microwave heating effects <cit.> and may furthermore alleviate crosstalk effects <cit.>. Singlet-triplet qubits also map naturally to the spin-readout basis in Pauli spin blockade (PSB), which is a common method for spin-to-charge conversion in quantum dots <cit.>. By using pulse optimization, single-qubit control fidelities of singlet-triplet qubits have exceeded 99% <cit.>, whereas two-qubit gate fidelities relying on the relatively weak capacitive (Coulomb) interaction reached 72-90% <cit.>. In theory, the two-qubit gate fidelity can be further improved by replacing the capacitive coupling with the stronger exchange coupling <cit.>, although this has been little investigated in experiments <cit.>. Despite this progress, universal control of more than two interacting singlet-triplet qubits remains yet to be achieved. Recently, a controlled number of charge carriers were loaded in 2×4 arrays, a 4×4 array and a 1×12 array <cit.>. These advances set the stage for exploring the operation of three or more interacting singlet-triplet qubits experimentally. Here we demonstrate coherent control of four interacting singlet-triplet qubits in a 2×4 germanium quantum dot array, which forms a quantum dot ladder. Taking advantage of the strong intrinsic spin-orbit coupling and small in-plane g-factors of holes in strained germanium quantum wells <cit.>, we encode the qubit in the singlet (|S⟩) and the lowest triplet (|T_-⟩) of two exchange-coupled spins, a variant of the originally proposed singlet-triplet qubit <cit.>. By controlling the exchange interaction inside each spin pair along the ladder rungs, we first map out the qubit energy spectrum. Then we show universal control of each qubit by pulsing both the detuning and tunneling barrier of the corresponding double quantum dot (DQD). With proper simultaneous control of detunings and tunneling barriers of neighbouring S-T_- qubits, we achieve a two-qubit SWAP-style gate induced by exchange interactions for each pair of neighbouring qubits in the ladder.
Finally, with the demonstrated single- and two-qubit control, we implement a quantum circuit for quantum state transfer across the ladder.§ GERMANIUM QUANTUM DOT LADDER As shown in Fig. <ref>a and b, the 2×4 quantum dot ladder is fabricated in a germanium quantum-well heterostructure <cit.>. The gate pattern and substrate have the same design as that in ref. <cit.>. The eight quantum dots are labeled with numbers 1 to 8 and the four charge sensors used to measure the charge states in the quantum dots are labeled S_TL, S_TR, S_BL and S_BR, respectively. The quantum dot potentials are controlled by plunger gates P_i, and the interdot or dot-sensor tunnel couplings are controlled by barrier gates B_ij or B_i, with i or j denoting the corresponding quantum dot number. Linear combinations of plunger gate voltages {P_i} allow us to set the overall electrochemical potential of each DQD μ_ij= (vP_i + vP_j)/2 and the interdot detuning ε_ij = (vP_i - vP_j)/2. The prefix “v” indicates that the physical gate voltages are virtualized to compensate the crosstalk on the dot potentials <cit.> (see Supplementary Information section III for the virtual gate matrix). Single-hole occupation of each quantum dot in the array is confirmed by measuring the charge stability diagrams using sensors S_BL and S_BR (see Supplementary Information section II). All plunger and interdot barrier gates are connected to a bias tee to allow both DC voltages and voltage pulses to be applied.§ SINGLET-TRIPLET QUBIT AND ENERGY SPECTROSCOPY We encode the qubit into the two-spin singlet-triplet states, |S⟩ and |T_-⟩, of the DQDs along the rungs of the quantum dot ladder, with the singlet |S⟩=(|↑↓⟩-|↓↑⟩)/√(2) and the lowest-energy triplet |T_-⟩=|↓↓⟩. Thus Q1, Q2, Q3 and Q4 are formed using DQD 1-5, 2-6, 3-7 and 4-8. Qubit readout is achieved by pulsing the corresponding DQD to the PSB regime. This regime converts the singlet and triplet states into distinct charge states, which are then measured through the charge sensor. The single-qubit Hamiltonian can be written as H_ST_- = (E_z-J)/2 σ_z + Δ_ST_-/2 σ_x, where σ_x and σ_z are the Pauli matrices, J = J(ε_ij, vB_ij) is a function of both the detuning ε_ij and the barrier gate voltage vB_ij, and E_z = g_ijμ_B B is the average Zeeman energy of the two hole spins in the DQD, with g_ij the average g-factor, μ_B the Bohr magneton, and B the magnetic field strength. Unless indicated otherwise, an in-plane magnetic field (up to alignment precision) of B = 10 mT is applied to the device. The intrinsic spin-orbit interaction for holes in germanium couples the states |S⟩ and |T_-⟩ with an energy Δ_ST_-. Figs. <ref>c-e show the energy levels of the two-spin |S⟩ and |T_-⟩ states in a DQD with J(ε_ij = 0) < E_z, J(ε_ij = 0) = E_z, J(ε_ij = 0) > E_z, respectively. The other two-spin states are |T_0⟩=(|↑↓⟩+|↓↑⟩)/√(2) and |T_+⟩=|↑↑⟩. In a DQD, we can describe the charge states as (n_L,n_R) to denote the charge number distribution in the left (n_L) and right (n_R) quantum dot. By adjusting the detuning ε_ij of the DQD from negative to positive, we can change the charge state from (2,0) to (1,1) and then to (0,2), as indicated by the labels on top of each diagram, and the energy levels of the two-spin states in the DQD will change accordingly. As shown in Fig. <ref>c, when J(ε_ij = 0) is smaller than E_z, the singlet |S⟩ crosses the triplet |T_-⟩ twice in the (1,1) regime. Due to intrinsic spin-orbit coupling, these are in fact avoided crossings with a gap Δ_ST_-, where the states |S⟩ and |T_-⟩ are admixed.
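As a numerical illustration of this level structure, the following Python sketch (ours, not from the paper) diagonalizes the two-level Hamiltonian above for a toy detuning-dependent exchange J(ε); the functional form of J(ε) and all numerical values are illustrative assumptions rather than measured device parameters.

```python
import numpy as np

# Pauli matrices in the {|S>, |T_->} basis.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

E_z = 1.0          # Zeeman energy (arbitrary units); assumption
delta_st = 0.15    # spin-orbit gap Delta_ST-; assumption

def J(eps, J0=0.4, w=1.5):
    """Toy exchange model: grows towards both charge-transition points."""
    return J0 * np.cosh(eps / w)

def splitting(eps):
    """Energy gap of H = (E_z - J)/2 sz + Delta_ST-/2 sx at detuning eps."""
    H = 0.5 * (E_z - J(eps)) * sz + 0.5 * delta_st * sx
    lo, hi = np.linalg.eigvalsh(H)
    return hi - lo

eps_grid = np.linspace(-3.0, 3.0, 201)
gaps = np.array([splitting(e) for e in eps_grid])
# At the two detunings where J(eps) = E_z, the gap collapses to Delta_ST-,
# i.e. the pair of S-T_- avoided crossings described in the text.
print("minimum splitting:", gaps.min())   # ~0.15 = delta_st
```

Increasing J0 in this toy model (the analogue of making the barrier gate voltage more negative) pushes the two minima together and eventually removes them, mirroring the spin-mixing maps discussed next.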
As J(ε_ij = 0) increases, the two anticrossings approach each other and eventually merge into one, as shown in Fig. <ref>d. When J(ε_ij = 0) increases even further, see Fig. <ref>e, |S⟩ and |T_-⟩ no longer exhibit an avoided crossing. Experimentally, we probe the position of the avoided crossings as follows. First, we initialize one of the qubits to a singlet by diabatically pulsing from (0,2) or (2,0) to the detuning ε_ij in (1,1). After waiting for a certain time, we pulse the qubit back to the PSB regime to record the triplet probability. When the pulse takes the system to an anticrossing, the singlet will evolve into a triplet during the waiting time (a 40 ns duration is chosen to obtain a sizable triplet probability). Performing such measurements as a function of the barrier gate voltage vB_ij that controls J for each qubit results in the parabola-like patterns, also called spin mixing maps <cit.>, in Fig. <ref>f-i. As expected, when vB_ij is tuned from positive to negative, J increases and the positions of the S-T_- anticrossings move inwards before disappearing. The asymmetry visible in panel g in particular can arise from imperfect virtualization of the barrier gates or from a detuning-dependent Zeeman energy <cit.> (see Supplementary Information section IV).§ UNIVERSAL SINGLE-QUBIT CONTROL With the knowledge of the energy spectrum of the four S-T_- qubits, we next implement the two-axis control of each qubit, which is necessary and sufficient for universal single-qubit control. By operating the qubit in the regime where J = E_z, the first term of Eq. <ref> goes to zero and Δ_ST_- rotates the qubit around the x-axis on the Bloch sphere, as shown in Fig. <ref>a. Furthermore, we tune the barrier voltage to obtain J = E_z at zero detuning, which is a symmetry point where the effect of detuning noise is strongly suppressed <cit.>. The pulse scheme for testing x-axis control is shown in Fig. <ref>k: first we initialize the qubit into a singlet by starting in the (2,0) (or (0,2)) regime, then pulse the detuning to the center of the (1,1) regime where J(ε_ij = 0) = E_z, next allow the qubit to evolve for a variable time t_wait, and finally pulse the detuning back to a point in the (2,0) (or (0,2)) regime for spin readout via PSB. The measured rotations of Q1-Q4 as a function of the pulsed detuning position are shown in Fig. <ref>c-f. With proper calibration, the qubits rotate around the x-axis at zero detuning (this condition occurs slightly away from the point ε_ij=0 extracted from charge stability diagrams). To realize z-axis control, we operate the Hamiltonian in the opposite limit where we increase J to such a large value that the effect of spin-orbit coupling becomes negligible. As a result, the qubit is rotated around an axis close to the z-axis, with a frequency proportional to J-E_z. The rotation is never exactly around the z-axis due to the presence of a finite Δ_ST_-, yet sufficiently orthogonal control is possible when (J-E_z) ≫Δ_ST_- (furthermore, by finely tuning the magnetic field direction, Δ_ST_-=0 can be achieved <cit.>). In experiments, we perform a Ramsey-like pulse sequence to demonstrate z-axis control. As illustrated in Fig. <ref>b and l, we first initialize the qubit into a singlet, perform a π/2 rotation around the x-axis of duration t_π/2, and then change J diabatically by pulsing the corresponding barrier gate by an amount δvB_ij to implement a z-axis rotation.
Throughout the barrier gate pulse, we aim to stay at the symmetry point to minimize the sensitivity to charge noise <cit.>. Finally, we perform another π/2 operation around the x-axis and project the qubit into the S-T_- basis for spin readout. The measured z-axis rotations as a function of the change of barrier voltage δvB_ij for the four qubits are shown in Fig. <ref>g-j. The oscillation frequency is given by f_ST_-=√((J-E_z)^2+Δ^2_ST_-)/h, where h is Planck's constant. Fig. <ref>n summarizes how f_ST_- can be tuned via δvB_ij. We note that the outer two barrier gates vB_15 and vB_48 have a stronger effect on the corresponding J_ij than the inner barrier gates vB_26 and vB_37. This may be explained by additional residues below the inner barrier gates, which are fabricated in the last step <cit.>, and by the different fan-out routing for the outer barrier gates (see Fig. <ref>a,b). With the tuning range shown in panel n, the highest ratio (J-E_z)/Δ_ST_- amounts to around 20 for the outer qubits Q1 and Q4 and about 10 for the inner qubits Q2 and Q3. When the external magnetic field strength is varied while keeping the gate voltages fixed, the oscillation speed increases linearly with the field due to the contribution from Zeeman energy in f_ST_- (see Fig. <ref>m). From the slope, we extract g_ij for the four qubits, obtaining values of 0.33(1) - 0.37(1), similar to previous devices <cit.>. The dephasing time under free evolution, which is traditionally termed T_2^*, is an important metric for assessing the qubit quality. Since the qubit rotations around both the x-axis and the z-axis are the result of free evolution, we here introduce T_x^* and T_z^* to describe the corresponding dephasing times. Fig. <ref>o,p show the measured damped oscillations of the qubits under x-axis and z-axis control, which we fit with the function P_T=P_0 + A cos(2π ft+ϕ)exp[-(t/T^*)^β]. P_0, A, β, f, T^* are fitting parameters, where f refers to the oscillation frequency and T^* refers to T_x^* or T_z^*. From the fits, we obtain a T_x^* of 0.5 - 2.1 μs and a T_z^* of 42(5) - 147(31) ns. The measured values of T_x^* are slightly lower than previously reported values measured at B = 3 mT <cit.> under the same condition that B is parallel to the hole movement direction. This can be partly attributed to the larger magnetic field in the present experiment: Δ_ST_- scales with B <cit.> and not only sets f_ST_- but also constitutes a proportionality factor between noise and f_ST_- fluctuations (see also Supplementary Information section VIII). The extracted x-axis rotation frequencies in Fig. <ref>o reflect the values of Δ_ST_- for each qubit, which are around 11.9-15.9 MHz, nearly an order of magnitude larger than the results reported at B = 3 mT <cit.>. This confirms that Δ_ST_- is stronger in the present experiment than in the previous work at 3 mT. The small variation in Δ_ST_- and in the measured average g-factors suggests a homogeneous spin-orbit coupling in this device. In comparison, the variations in T_x^* are quite large, which may be caused by spatially dependent charge noise or inhomogeneous hyperfine-induced dephasing due to dot size differences <cit.>. We also observe that T_z^* is nearly an order of magnitude smaller than T_x^*. Possibly this is due to the fact that fluctuations in the tunnel barrier translate to fluctuations in J, which couple in directly during z-axis evolution but only to second order for x-axis evolution <cit.>.
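As a sketch of how such dephasing times can be extracted, the following Python snippet fits the damped-oscillation model quoted above to synthetic data; the generated trace, noise level, and initial guesses are illustrative assumptions, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, P0, A, f, phi, T, beta):
    """P_T = P0 + A cos(2 pi f t + phi) exp[-(t/T)^beta]."""
    return P0 + A * np.cos(2 * np.pi * f * t + phi) * np.exp(-(t / T) ** beta)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.5, 300)                 # time in microseconds
true_params = (0.5, 0.4, 14.0, 0.0, 0.8, 1.5)  # P0, A, f [MHz], phi, T* [us], beta
data = damped_osc(t, *true_params) + 0.02 * rng.normal(size=t.size)

p0 = (0.5, 0.3, 13.0, 0.1, 1.0, 1.3)           # rough initial guess
popt, pcov = curve_fit(damped_osc, t, data, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(f"f  = {popt[2]:.2f} MHz (Delta_ST-/h for x rotations at the symmetry point)")
print(f"T* = {popt[4]:.3f} +/- {perr[4]:.3f} us")
```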
Additionally, the increased curvature of the singlet branch for larger J implies a higher sensitivity to detuning noise. The variations in J thus may contribute to the spread of the T_z^* values we obtained in Fig. <ref>p.§ TWO-QUBIT GATE In order to realize universal control of the full four-qubit register, we need to complement single-qubit gates with two-qubit entangling gates. Assuming isotropic exchange interactions between adjacent S-T_- qubits, the two-qubit Hamiltonian in the basis of {|SS⟩, |ST_-⟩, |T_-S⟩ and |T_-T_-⟩} can be written as: H_2Q = [(E_z, ij-J_ij)σ_z^ij + Δ_ST_-, ijσ_x^ij]/2 + [(E_z, kl-J_kl)σ_z^kl + Δ_ST_-, klσ_x^kl]/2 + (J_coup/4)[σ_x^ijσ_x^kl + σ_y^ijσ_y^kl + (1/2)(σ_z^ij - I)(σ_z^kl - I)], where ij and kl refer to the respective qubit dot pair, and the interqubit coupling J_coup=(J_ik+J_jl)/2. The coupling term is reminiscent of two well-known interaction Hamiltonians. If the factor 1/2 of the σ_zσ_z coupling term were 1 instead, we would recover the exchange Hamiltonian that generates the SWAP gate and the universal √(SWAP) gate. If that factor were zero, only the flip-flop terms would survive, which generate the iSWAP and √(iSWAP) gate. The coupling Hamiltonian in Eq. <ref> thus generates a SWAP-style gate that is not a standard two-qubit gate but is also universal from the perspective of quantum computing (see Supplementary Information section VIII). For simplicity, we call it a SWAP gate in the remainder of this work. To activate the SWAP gate, we equalize the energy splittings of two qubits and turn on J_ik and J_jl such that the flip-flop terms can exchange the qubit populations (note that if the qubit energies were set very different from each other, a CZ gate would result instead). Our strategy for meeting both requirements at the same time is to use the interdot detuning of both qubits <cit.>. A typical potential landscape for the two qubits in DQD ij and kl is shown in Fig. <ref>a, where we pulse ε_ij to large positive and ε_kl to large negative detuning. The detunings ε_ik and ε_jl, which control the interactions between the qubits, are then automatically increased as well. Therefore, all the exchange interactions involved are enhanced simultaneously and the effect of the single-qubit terms σ_x is made negligible. In practice, we fix the (large) detuning of one qubit and fine-tune that of the other to find the position where two qubits have equal energy splittings. This is illustrated by the energy spectrum in Fig. <ref>b, where we fix the detuning ε_kl to a large negative value and scan the detuning ε_ij. We see that the states |ST_-⟩ and |T_-S⟩ anticross at the two positions where J_ij-E_z, ij is equal to J_kl-E_z, kl. The gap size is given by J_coup. Since ε_ik and ε_jl, which control J_coup via J_ik and J_jl, are also dependent on ε_ij, the sizes of the two gaps are not necessarily the same. Fig. <ref>c shows an example of the pulse scheme we use in the experiment to observe two-qubit SWAP oscillations. Starting from both qubits in (0,2) or (2,0), we initialize one qubit to |T_-⟩ using single-qubit control by pulsing ε_ij to zero and waiting for a π rotation, and we initialize the other qubit to |S⟩ by pulsing ε_kl to a large value in (1,1) (other qubits are either initialized to singlets by pulsing back and forth to (0,2) or (2,0), or remain in the (1,1) regime all the time). Then we pulse the detuning of one qubit such that the energies of the two qubits match, and SWAP oscillations are initiated.
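To illustrate the mechanism, the following Python sketch builds the two-qubit Hamiltonian above in the {|SS⟩, |ST_-⟩, |T_-S⟩, |T_-T_-⟩} product basis and evolves the state |ST_-⟩, taking ħ = 1 and parameter values of our own choosing; when the two single-qubit splittings are equal, the population oscillates fully between |ST_-⟩ and |T_-S⟩, which is the SWAP oscillation probed in the experiment.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_2q(hz1, hx1, hz2, hx2, Jc):
    """Eq. above with hz = E_z - J and hx = Delta_ST- for each qubit (hbar = 1)."""
    H = 0.5 * (hz1 * np.kron(sz, I2) + hx1 * np.kron(sx, I2))
    H += 0.5 * (hz2 * np.kron(I2, sz) + hx2 * np.kron(I2, sx))
    zz = 0.5 * (np.kron(sz, I2) - np.eye(4)) @ (np.kron(I2, sz) - np.eye(4))
    H += (Jc / 4.0) * (np.kron(sx, sx) + np.kron(sy, sy) + zz)
    return H

# Illustrative regime: large, equal splittings; sizable interqubit coupling.
H = H_2q(hz1=5.0, hx1=0.0, hz2=5.0, hx2=0.0, Jc=1.0)
psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0                                   # |S T_->
times = np.linspace(0.0, 2 * np.pi, 100)        # full SWAP at t = pi / Jc
pops = np.array([np.abs(expm(-1j * H * t) @ psi0) ** 2 for t in times])
print("max P(|T_- S>):", pops[:, 2].max())      # ~1 when the splittings match
# Detuning hz2 away from hz1 tilts the rotation axis and speeds up (but
# weakens) the population exchange, producing the chevron patterns below.
```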
Simultaneously, several barrier voltages are pulsed to help set the respective exchange-interaction strengths to appropriate values (details of these pulses vary). Fig. <ref>d-f show the resulting SWAP oscillations for Q1-Q2, Q2-Q3, and Q3-Q4. Chevron-style patterns are observed with the energies of the two qubits aligned in the middle of the patterns. Moving away from the middle, the energy of one qubit is shifted with respect to that of the other. This qubit-qubit energy detuning tilts the rotation axis and accelerates the rotation. Looking closely, the chevron patterns are not symmetrical. This can be understood from the fact that the qubit energy does not vary linearly with detuning. In some panels, single-qubit oscillations around an axis close to the x-axis are also observed, such as the data at ε_15 = -2 mV in Fig. <ref>d and that at ε_48 = 2.5 mV in Fig. <ref>f. These ε_ij values are close to zero interdot detuning, and when J(ε_ij = 0) is not much larger than E_z, such rotations are expected. We note that SWAP oscillations between |ST_-⟩ and |T_-S⟩ were also observed in previous research on simulating the dynamics of an antiferromagnetic spin chain and resonating-valence-bond states based on the Heisenberg model in four-quantum-dot systems <cit.>. The oscillation frequencies in the middle of the chevron patterns are in the range of 22.2(3) - 112(1) MHz, corresponding to a √(SWAP) (entangling) gate with durations of just 2.2 ns to 11.3 ns, more than an order of magnitude faster than the entangling gate based on capacitive coupling <cit.>. To determine the dephasing times of the SWAP oscillations, we collect data in the middle of the chevron patterns, as shown in Fig. <ref>g, and fit them with the same function as used for single-qubit oscillations. The extracted dephasing times are between 63(24) and 154(43) ns. The dephasing times may be further increased by executing the SWAP oscillations at the symmetry points of the detuning of each qubit, which requires a stronger tunability of the exchange interactions using the barrier gates.§ QUANTUM CIRCUIT IMPLEMENTATION Finally, using a combination of the single- and two-qubit gates demonstrated above, we aim to implement a quantum circuit designed to create and distribute an entangled state. As shown in Fig. <ref>a, we first initialize Q1 and Q2 into |ST_-⟩ by applying a π rotation on Q2 and then activate a SWAP interaction between Q1 and Q2 for a duration t_SWAP. This interaction is expected to generate entanglement when t_SWAP corresponds to a quarter period, i.e. for a √(SWAP) gate. Next, we apply consecutive half-period SWAP gates of Q2-Q3 and Q3-Q4 to transfer the state of Q2 to Q4 via Q3. Finally, we perform a single-qubit x-axis rotation of Q4 for a time t_Q4 and measure Q4. The experimental results are shown in Fig. <ref>b, where the single-qubit oscillations of Q4 as a function of t_Q4 are modulated in phase by t_SWAP, resulting in a checkerboard pattern. The underlying mechanism is that the state of Q2 oscillates as a function of t_SWAP, as quantum information is periodically exchanged between Q1 and Q2. Therefore the state of Q4 following the quantum state transfer also oscillates with t_SWAP. Where the evolution of Q4 changes phase, t_SWAP corresponds to the duration of a √(SWAP) operation (modulo an integer number of SWAP operations), at which point maximal entanglement between Q1 and Q2 is expected. When two qubits are maximally entangled, the density matrix of each qubit by itself is fully mixed.
At this point, the measured P_T of Q4 should not oscillate as a function of t_Q4. This is indeed what we observe, see Fig. <ref>d, where we show the linecuts from Fig. <ref>b. A trace without oscillations is observed between two sets of out-of-phase oscillations of Q4, as expected. The same features are seen in Fig. <ref>c, which shows the ideally expected checkerboard pattern obtained from numerical simulations of the protocol of Fig. <ref>a, assuming perfect initialization, operations and readout. We note the checkerboard pattern is quite robust to errors in the SWAP gates. Small errors will merely change the contrast of the pattern; for large SWAP errors, the alternating rows are no longer equal in height. However, when the initialization of Q1 or Q2 leads to superposition states with a y-axis component (and assuming perfect SWAP gates), the pattern acquires a tilt. In this case, the rotation angle of the final x-axis rotation needed to maximize or minimize P_T is no longer exactly 0 or π but depends on the y-axis component of Q4 (and hence also on t_SWAP) after the sequence of SWAP operations. Looking closely, the blue and green oscillations in Fig. <ref>d are not perfectly out of phase with each other, and the data in Fig. <ref>b shows weak diagonal features not seen in the numerical simulations. These point to imperfect initialization of Q1 or Q2.§ CONCLUSION In conclusion, we have experimentally demonstrated initialization, readout, and universal control of four singlet-triplet (S-T_-) qubits in a 2×4 germanium quantum dot array. The intrinsic spin-orbit coupling in germanium, combined with tunnel barrier control of the exchange interactions within each qubit, allows us to realize two-axis control of the four individual S-T_- qubits using baseband signals only. Furthermore, through independent control of the exchange interactions between any pair of neighbouring spins across the device, we are able to demonstrate √(SWAP) and SWAP gates for each neighboring pair of qubits and implement a quantum circuit that spans the entire array. With four universally controlled qubits in a bilinear array, these results put baseband-controlled singlet-triplet spin qubits in germanium firmly on the map as a potential candidate for large-scale quantum computing. In future experiments, the control fidelities and error channels of these qubits can be thoroughly characterized by various metrology methods such as randomized benchmarking and gate set tomography <cit.>. The qubit quality and control fidelity can be potentially improved by tuning the qubits into the operational sweet spot with respect to the electric field or magnetic field <cit.>, optimizing control pulses <cit.>, reducing charge noise <cit.>, further improvements in the Ge/SiGe heterostructure <cit.>, and suppressing nuclear spin noise using substrates with reduced spinful nuclei <cit.>. Moreover, with programmable control of exchange interactions in the array, this spin ladder can also be used for analog simulation of a wealth of rich physical phenomena such as quantum magnetism <cit.>. We thank C. Déprez for insightful discussions and kind help. We also thank other members of the Vandersypen group and Veldhorst group for their stimulating discussions. We acknowledge S. L. de Snoo's help in software development and technical support by O. Benningshof, J. D. Mensingh, R. Schouten, R. Vermeulen, R. Birnholtz, E. Van der Wiel, and N. P. Alberts.
This work was funded by an Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research. M.R.-R. acknowledges support from the Netherlands Organization of Scientific Research (NWO) under Veni grant VI.Veni.212.223. Author contributions X.Z. and E.M. performed the experiment and analyzed the data with help from M.R.-R. and D.J.. X.Z. performed the numerical simulations with help from M.R.-R. and D.J.. M.R.-R. developed the theory model. T.-K.H., P.C.F. and C.-A.W. contributed to the preparation of the experiments. S.D.O. fabricated the device with inputs from T.-K.H., P.C.F. and X.Z.. A.S. grew the Ge/SiGe heterostructure, M.V. supervised the device fabrication and G.S. supervised the heterostructure growth and development. X.Z. and L.M.K.V. conceived the project and L.M.K.V. supervised the project. X.Z., M.R.-R., E.M. and L.M.K.V. wrote the manuscript with inputs from all authors. Data availability The data reported in this paper are archived on a Zenodo data repository at https://doi.org/10.5281/zenodo.10431402
http://arxiv.org/abs/2312.16101v2
{ "authors": [ "Xin Zhang", "Elizaveta Morozova", "Maximilian Rimbach-Russ", "Daniel Jirovec", "Tzu-Kan Hsiao", "Pablo Cova Fariña", "Chien-An Wang", "Stefan D. Oosterhout", "Amir Sammak", "Giordano Scappucci", "Menno Veldhorst", "Lieven M. K. Vandersypen" ], "categories": [ "cond-mat.mes-hall", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20231226160713", "title": "Universal control of four singlet-triplet qubits" }
Denotational semantics for languages for inference: semirings, monads, and tensors Cristina Matache University of Edinburgh [email protected] Sean Moss University of Oxford [email protected] Sam Staton University of Oxford [email protected] Ariadne Si Suo University of Oxford [email protected] ========================================================================================================================================================================================================================================================== Computational effects are commonly modelled by monads, but often a monad can be presented by an algebraic theory of operations and equations. This talk is about monads and algebraic theories for languages for inference, and their connections to semirings and tensors. A basic class of examples of algebraic theories comes from considering the theory of modules for a semiring, e.g. the theory of unnormalized distributions, where the semiring is that of the non-negative real numbers. We propose that an interesting perspective is given by studying theories via semirings, and to this end explore several examples of subtheories of module theories, mostly relating to probability. Our main contribution concerns the commutative combination of effects, as studied by Hyland, Plotkin and Power: we observe that while the semiring tensor does not in general determine the tensor of subtheories of module theories, it still does in several fundamental probabilistic examples. Computational effects are commonly modelled by monads <cit.>, but often a monad can be presented by an algebraic theory of operations and equations <cit.>. A basic class of examples of algebraic theories comes from considering the theory of modules for a semiring, e.g. the theory of unnormalized distributions, where the semiring is that of the non-negative real numbers. We propose that an interesting perspective is given by studying theories via semirings, and to this end explore several examples of subtheories of module theories, mostly relating to probability. Our main contribution concerns the commutative combination of effects <cit.>: we observe that while the semiring tensor does not in general determine the tensor of subtheories of module theories, it still does in several fundamental probabilistic examples.§ OVERVIEW Semirings abound both in program analysis (e.g. <cit.>) and in semantics for probabilistic programs (e.g. <cit.>). Monads abound in functional programming, in inference programming (e.g. <cit.>), and in probabilistic programming for statistical modelling, both in theory (e.g. <cit.>) and in practice (e.g. <cit.>). This talk is about the relationships between semirings and monads, and about new ways of building new semirings and new monads to interpret various different phenomena, and combinations of phenomena, in languages for inference.
We summarize several examples of semirings: * The non-negative reals, ^+, used for unnormalized measures in Bayesian statistics and other optimization scenarios;* The polynomials, [X,X̅], thought of as unnormalized probabilities with an unknown Bernoulli parameter X (see <Ref>);* For input/output messages, we consider a semiring I freely generated by {𝗂𝗇𝗉𝗎𝗍_i | i∈ I}∪{𝗈𝗎𝗍𝗉𝗎𝗍_i | i∈ I}, for some finite set of input/output messages I (see <Ref>).* For state, we consider a semiring I generated by {𝗋𝖾𝖺𝖽_i | i∈ I}∪{𝗐𝗋𝗂𝗍𝖾_i | i∈ I}, for some finite set of storable values I (see Appendix A for details). The key methods we propose are tensors and convexity classes. Tensors and commutativity. We show that the tensor of semirings corresponds to the tensor of monads, as studied in <cit.>. Thus the tensor of semirings is a method for combining monads for programming. Moreover, the tensor of semirings corresponds to the idea that in the combined monad, two adjacent lines of a program can be reordered (commute) when they come from monads for different semirings. This reordering of lines is important as a program optimization, particularly in probabilistic programming <cit.>, and moreover connects intuitively to conditional independence of the two lines. Monads via convexity classes. Many semirings come with a particular subset which we call a `convexity class'. For example, the semirings ^+ and [X,X̅] have convexity class {1}, and the semirings for state and I/O have convexity classes too (see <Ref> and Appendix A). Any convexity class in a semiring induces a monad. We show that the probability, state and I/O monads all arise in this way, giving a new `convex' understanding of state and I/O. The semiring [X,X̅] induces a new monad which is good for modelling programs with an unknown Bernoulli variable. We then combine these two methods, tensors and convexity classes, to give methods for building models of phenomena across languages for inference. Specifically, we show that several tensors of monads can be computed via convexity classes in tensors of semirings. For example: [X,X̅]⊗^+ combines a Bernoulli variable with ordinary probability; [X,X̅]⊗[Y,Y̅] models two unknown Bernoulli variables; ^+⊗ I could model a combination of probability and state (as in e.g. <cit.>); ^+⊗ I models the combination of probability and input-output, for an interactive probabilistic program (e.g. over streaming data <cit.>).
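As a concrete, unofficial illustration of these module theories, the following Python sketch implements the monad T_R of finitely supported R-valued measures, parameterized by an arbitrary semiring R and instantiated with the non-negative reals; the class names and dictionary-based representation are our own expository choices.

```python
from collections import defaultdict

class Semiring:
    """(add, zero) and (mul, one), with mul distributing over add."""
    def __init__(self, add, zero, mul, one):
        self.add, self.zero, self.mul, self.one = add, zero, mul, one

# The non-negative reals: the semiring of unnormalized measures.
Rplus = Semiring(lambda a, b: a + b, 0.0, lambda a, b: a * b, 1.0)

class TR:
    """T_R(X): finitely supported functions X -> R, as a monad."""
    def __init__(self, R, weights):
        self.R, self.weights = R, dict(weights)

    @classmethod
    def unit(cls, R, x):
        return cls(R, {x: R.one})            # Dirac measure at x

    def bind(self, k):
        """Sequence this measure with a kernel k : X -> T_R(Y)."""
        out = defaultdict(lambda: self.R.zero)
        for x, r in self.weights.items():
            for y, s in k(x).weights.items():
                out[y] = self.R.add(out[y], self.R.mul(r, s))
        return TR(self.R, out)

    def total(self):
        """Total mass; requiring it to lie in a convexity class S carves
        out the subtheory T_{R,S} discussed below."""
        t = self.R.zero
        for r in self.weights.values():
            t = self.R.add(t, r)
        return t

# A fair coin is an element of T_{R^+} with total mass 1 -- a term of the
# convexity theory T_{R^+,{1}} of convex sets.
coin = TR(Rplus, {'h': 0.5, 't': 0.5})
two = coin.bind(lambda a: coin.bind(lambda b: TR.unit(Rplus, a + b)))
print(two.weights, two.total())   # four length-2 strings of weight 0.25; total mass 1.0
```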
Switching back to monads briefly, T_R(X) is the set of finitely supported functions X→ R. T_^+ corresponds to the monad of finitely-supported unnormalized measures.§ CONVEXITY CLASSES AND THEORIES A subset S of a semiring R is a convexity class if 1 ∈ S and whenever a_1 + … + a_n, b_1, …, b_n ∈ S we have a_1b_1 + … + a_nb_n ∈ S. For such (R,S), the theory of (R,S)-convex sets is the subtheory T_R,S of T_R where the set T_R,S(n) of n-ary operations consists of those ∑_i r_ix_i ∈ T_R(n) where ∑ r_i ∈ S. We call such theories convexity theories. In general, T_R,S relates to T_R as probability measures relate to unnormalized measures. The usual theory of convex sets <cit.> is T_^+,{1} and here T_^+,{1}(n) is the set of probability measures on {x_1,…,x_n}. Considering T_^+,[0,1] gives subprobability measures instead. For any finite set I, let I be the free theory with unary operations 𝗈𝗎𝗍𝗉𝗎𝗍_i for i ∈ I and one |I|-ary operation 𝗂𝗇𝗉𝗎𝗍 – the I/O monad <cit.>. Then I is actually T_ I,Λ_I for the semiring I from the introduction, for a suitable convexity class Λ_I ⊆ I. Concretely, elements of I are finite multisets of lists in {𝗂𝗇𝗉𝗎𝗍_i, 𝗈𝗎𝗍𝗉𝗎𝗍_i | i ∈ I}. Consider the theory presented by a single binary operation c satisfying the equations c(x,x) = x and c(c(w,x),c(y,z)) = c(c(w,y),c(x,z)), i.e., c is idempotent and commutes with itself. The operation c represents a random binary choice made by a coin with an unknown bias: c(x,y) means “flip the coin and continue as x if heads and as y if tails”. Terms t in n variables of this theory represent experimental procedures reporting one of n outcomes according to a sequence of coin flips. With the aim of equating procedures which induce the same distribution on outcomes, we can discard irrelevant coin flips (idempotence) and swap the order of two consecutive coin flips (commutativity) <cit.>. Elaborating on some notation from the introduction, let [X,X̅] stand for the semiring presented by ⟨ X, X̅| X + X̅ = 1, X X̅ = X̅ X ⟩. This is like adding to ℕ an indeterminate real number 0 ≤ p ≤ 1 together with 1-p (though we prefer notation that does not suggest subtraction). Indeed, one can show that the elements of [X,X̅] are represented by polynomials P in X,X̅ with coefficients in ℕ, where P = P' in [X,X̅] iff P[p/X,(1-p)/X̅] = P'[p/X,(1-p)/X̅] in ^+ for all real numbers p ∈ [0,1]. There is a theory homomorphism from this coin theory to T_[X,X̅] induced by c(x_1,x_2) ↦ X x_1 + X̅ x_2, which in general sends a term t in n variables to ∑_i P_i x_i where P_i ∈[X,X̅] is such that if the bias of the coin were actually p ∈ [0,1] then P_i[p/X,(1-p)/X̅] ∈ [0,1] would be the actual probability of outcome x_i. We also have ∑_i P_i = 1, and in fact we get an isomorphism of this theory with T_[X,X̅],{1}. Thus the semiring perspective confirms that this equational theory equates experiments precisely when they induce the same distribution on outcomes for all possible values of the unknown bias p.§ COMMUTATIVE COMBINATIONS The following standard construction <cit.> lets one combine notions of computation so that they commute with each other. The commutative combination or tensor T_1 ⊗ T_2 of two theories is the universal theory admitting homomorphisms ϕ_1 : T_1 → T_1 ⊗ T_2, ϕ_2 : T_2 → T_1 ⊗ T_2 such that each term in the image of ϕ_1 commutes with each term in the image of ϕ_2. The tensor R_1 ⊗ R_2 of two semirings is the universal semiring with homomorphisms ϕ_1 : R_1 → R_1 ⊗ R_2, ϕ_2 : R_2 → R_1 ⊗ R_2 such that ϕ_1(a) ·ϕ_2(b) = ϕ_2(b) ·ϕ_1(a) for all a ∈ R_1, b ∈ R_2.
T_R_1⊗ T_R_2≅ T_R_1 ⊗ R_2. This lets us compute the theory tensor of two semiring theories in terms of the usually more tractable semiring tensor. The semiring tensor extends to semirings paired with convexity classes: (R_1,S_1) ⊗ (R_2,S_2) = (R_1 ⊗ R_2, S_1 ⊗ S_2), where S_1 ⊗ S_2 is the smallest convexity class containing the images of S_1 and S_2. This time we have a canonical comparison map Φ_(R,S),(R',S') : T_R,S ⊗ T_R',S' → T_R ⊗ R', S ⊗ S'. When Φ is an isomorphism, the tensor of theories can be understood in terms of the semiring tensor, as we now demonstrate. For p ∈ ^+ let [p] be the subsemiring of ^+ generated by p. Then [1/2] ⊗ [1/3] ≅ [1/6]. Taking the convexity classes to be {1}, Φ happens in this case to give an isomorphism T_[1/2],{1} ⊗ T_[1/3],{1} ≅ T_[1/6],{1}. In programming terms, this says that a language with equiprobable binary and ternary choices is equivalent to one with a six-sided die. Terms of the tensor of the theory of convex sets with the unknown-bias coin theory above are simple generative models built from flipping a coin with unknown bias and sampling from known sources of randomness. The semiring ^+[X,X̅] = ^+ ⊗ [X,X̅] extends the polynomials of [X,X̅] with coefficients in ^+, and admits a similar characterization of equality. The composite of this tensor with the comparison map, T_^+,{1} ⊗ T_[X,X̅],{1} → T_^+[X,X̅], {1}, sends a term t in n variables to ∑_i P_i x_i, where P_i ∈ ^+[X,X̅] is a polynomial computing the probability of outcome x_i (and ∑_i P_i = 1). In fact, the map is an isomorphism onto T_^+[X,X̅], {1} (cf. <cit.>). The map Φ is an isomorphism in the case of (^+,{1}) ⊗ ( I,Λ_I). Thus the combination of probability and I/O is a convexity theory T_^+ ⊗ I, {1}⊗Λ_I. Concretely, ^+ ⊗ I consists of finitely-supported functions from lists in {𝗂𝗇𝗉𝗎𝗍_i, 𝗈𝗎𝗍𝗉𝗎𝗍_i | i ∈ I} to ^+. § CONCLUSION As we have shown, semirings give a fundamental perspective on many computational phenomena, and are also a tool to calculate the commutative combination of different effects with a focus on probability and inference. Current work in progress is to find sufficient conditions for Φ to be an isomorphism. In further work we hope to generalize away from finitary monads to cover iteration (e.g. <cit.>) and more general classes of measures. § APPENDIX: SUMMARY OF SEMIRINGS AND CONVEXITY CLASSES A semiring is a set R equipped with two associative binary operations + and ×, with units 0 and 1 respectively, such that + is commutative (r+s=s+r) and × distributes over + on both sides. Examples of semirings considered in this abstract: * The semiring ^+ comprises the non-negative reals with the usual addition and multiplication.* The polynomial semiring [X,X̅] is generated by ⟨ X, X̅| X + X̅ = 1, X X̅ = X̅ X ⟩. This means that it is the least semiring containing formal constants X and X̅ and additionally satisfying the given equations (Ex. <ref>).* The semiring for input/output I is the free semiring generated by {𝗂𝗇𝗉𝗎𝗍_i | i∈ I}∪{𝗈𝗎𝗍𝗉𝗎𝗍_i | i∈ I}. The inhabitants are formal sums of strings built from 𝗂𝗇𝗉𝗎𝗍_i and 𝗈𝗎𝗍𝗉𝗎𝗍_i, where the multiplication is string concatenation (Ex. <ref>).* The semiring for state I is generated by ⟨ 𝗋𝖾𝖺𝖽_i, 𝗐𝗋𝗂𝗍𝖾_i (i∈ I) | 𝗐𝗋𝗂𝗍𝖾_i 𝗋𝖾𝖺𝖽_i = 𝗐𝗋𝗂𝗍𝖾_i, 𝗐𝗋𝗂𝗍𝖾_i 𝗐𝗋𝗂𝗍𝖾_j = 𝗐𝗋𝗂𝗍𝖾_j, 𝗐𝗋𝗂𝗍𝖾_i 𝗋𝖾𝖺𝖽_j = 0 (i≠ j), ∑_i∈ I 𝗋𝖾𝖺𝖽_i 𝗐𝗋𝗂𝗍𝖾_i = 1 ⟩. (See <cit.>, and compare with <cit.>.)* The tensor of semirings R_1 and R_2 is generated by ⟨ ϕ_1(r_1), ϕ_2(r_2) (r_1∈ R_1, r_2∈ R_2) | ϕ_1(r_1)·ϕ_2(r_2) = ϕ_2(r_2)·ϕ_1(r_1), with ϕ_1, ϕ_2 semiring homomorphisms ⟩. Convexity classes considered in this abstract: * We considered ^+ and [X,X̅] with the singleton convexity class {1}.* We also considered ^+ with convexity class [0,1], for subprobability (Ex.
<ref>).* For the input/output semiring I, we built the convexity class as follows. A finite tree t built from I-ary input nodes and unary output nodes can be regarded in terms of the set of paths p through the tree, which are given by strings from {𝗂𝗇𝗉𝗎𝗍_i | i∈ I}∪{𝗈𝗎𝗍𝗉𝗎𝗍_i | i∈ I}. We consider the convexity class given by Λ_I = {∑_p∈ t p | t is a valid tree}⊆ I. In this way we recover the I/O monad <cit.>. We remark that the individual paths, as strings, are reminiscent of traces in probabilistic programming (e.g. <cit.>).* For the state semiring I, we consider the following convexity class. A state transformer is a function f:I→ I. Any state transformer induces an element (∑_i∈ I 𝗋𝖾𝖺𝖽_i 𝗐𝗋𝗂𝗍𝖾_f(i))∈ I. We consider the convexity class {∑_i∈ I 𝗋𝖾𝖺𝖽_i 𝗐𝗋𝗂𝗍𝖾_f(i) | f:I→ I} ⊆ I. In this way we recover the global state monad from <cit.>. We also considered tensors of convexity classes (<ref>).
http://arxiv.org/abs/2312.16694v1
{ "authors": [ "Cristina Matache", "Sean Moss", "Sam Staton", "Ariadne Si Suo" ], "categories": [ "cs.LO", "cs.PL", "math.CT" ], "primary_category": "cs.LO", "published": "20231227192237", "title": "Denotational semantics for languages for inference: semirings, monads, and tensors" }
* Ivan Dimitrijević, Branko Dragovich, Zoran Rakić and Jelena Stanković========================================================================== In recent years the equations of relativistic first-order viscous hydrodynamics, that is, the relativistic version of Navier-Stokes, have been shown to be well posed and causal under appropriate field redefinitions, also known as hydrodynamic frames. We perform real-time evolutions of these equations for a conformal fluid and explore, quantitatively, the consequences of using different causal frames for different sets of initial data. By defining specific criteria, we make precise and provide evidence for the statement that the arbitrarily chosen frame does not affect the physics up to first order, as long as the system is in the effective field theory regime. Motivated by the physics of the quark-gluon plasma created in heavy-ion collisions we also explore systems which are marginally in the effective field theory regime, finding that even under these circumstances the first order physics is robust under field redefinitions.§ INTRODUCTION Relativistic hydrodynamics is an effective field theory that provides the real-time description of microscopic theories when the system is locally in thermal equilibrium and gradient corrections are small. If the characteristic macroscopic scale of the system is much larger than the microscopic scale of the underlying theory, then ideal hydrodynamics should provide a good description. However, if these two scales are not well separated, then viscous effects are expected to become relevant. Relativistic viscous hydrodynamics is well known to play a fundamental role in the experimental description of the quark-gluon plasma (QGP) created in heavy-ion collision experiments at RHIC and LHC <cit.>. The data from the Beam Energy Scan experiment at RHIC <cit.> is currently being analysed, and more experiments are planned for the near future at FAIR and NICA. Relativistic viscous hydrodynamics also plays a relevant role in astrophysical systems. For instance, in neutron star mergers, weak processes during the post-merger dynamics can give rise to an effective bulk viscosity relevant for the scales of the system <cit.>. Moreover, magnetorotational instabilities may act as the onset of turbulence, whose dynamics can be effectively captured by introducing an effective viscosity <cit.>. Evaluating the effect of viscosity in numerical evolutions of neutron star mergers is an active area of research <cit.>. Both the QGP and neutron star mergers provide two experimental windows into the hot and dense properties of strongly interacting matter. They may help provide access to uncharted regions of the QCD phase diagram, and relativistic viscous hydrodynamics is a key element to obtain the relevant real-time dynamical evolutions. Moreover, the transport properties might be useful to distinguish between different phases of matter. The equations of relativistic first-order viscous hydrodynamics have their origins in the works of Eckart <cit.> and Landau and Lifshitz <cit.>. From a more modern perspective in the language of effective field theory, the two different sets of equations proposed by Eckart and Landau and Lifshitz can be obtained by using different field redefinitions of the fundamental variables (temperature, velocity...)
in a gradient expansion of the constitutive relations of the stress tensor up to first order in derivatives.[In the Landau frame the local fluid velocity is chosen such that there is no energy flow in the local rest frame of the fluid, and in the Eckart frame the local fluid velocity is chosen in such a way that there is no charge flow in the local rest frame of the fluid. The fundamental variables are not uniquely defined away from thermodynamic equilibrium and in this case the field redefinitions of the local fluid velocity correspond to different extensions of these quantities away from equilibrium with different physical meanings.] Different choices of field redefinitions are referred to in the literature as `hydrodynamic frames'. Since then, working in a specific hydrodynamic frame such as Eckart's frame or Landau's frame became the usual approach. Written in this form, the equations of first-order viscous hydrodynamics were observed to present causality issues and instabilities <cit.>. That is, the relativistic generalization of the Navier-Stokes equations, a theory that is meant to provide the effective description of any relativistic viscous fluid, would seem to be unphysical. This puzzling theoretical question remained unsolved for many years. Meanwhile the experimental analysis of the QGP created in heavy-ion collisions required some effective viscous relativistic hydrodynamical description. An approach that provides such a description was introduced by Müller, Israel and Stewart (MIS) <cit.>, who proposed a set of equations for which the causality and stability problems are absent. The strategy consists in including second order terms in the gradient expansion and introducing extra variables (and their corresponding evolution equations), in such a way that the global set of equations has good causality and stability properties. There are several formulations based on the same idea that we denote by MIS-type theories, for example: BRSSS <cit.>, DNMR <cit.>, divergence-type theories <cit.>, etc. This approach has been very successful in describing the experimental data in heavy-ion collision experiments. In fact, Geroch famously showed that all these different theories of relativistic dissipative hydrodynamics provide the same physical predictions and should have the same physical content as the Navier-Stokes equations as long as they are in the regime of validity of effective field theory <cit.>. In spite of this success, the question of why the relativistic generalization of Navier-Stokes appeared to be unphysical remained unsolved. A solution was proposed in recent years. Bemfica, Disconzi, Noronha <cit.> and Kovtun <cit.> proposed a formulation of the relativistic Navier-Stokes equations which is well-posed, causal and stable. Such a formulation has become known as the `BDNK equations'. The key insight was to realize that by performing field redefinitions up to first order in the hydrodynamic gradient expansion, one can change the highest derivative terms in the equations of motion (i.e., the principal part) and achieve local well-posedness of the initial value problem (Cauchy problem), relativistic causality and dynamical stability of the homogeneous thermal states.
Traditionally used frames such as the Landau or Eckart's frame do not belong to this set of allowed frames. Having at our disposal this novel theory, we may wonder: why do we need another formulation of relativistic viscous hydrodynamics if the MIS-type theories were successful in describing the experimental data? We provide arguments why first-order viscous hydrodynamics might be a good alternative to MIS-type theories. An important advantage of first-order viscous hydrodynamics theories over MIS-type theories is the well-posedness properties of the initial value problem. Local well posedness of the initial value problem has been well established and studied for many physically relevant sets of equations such as the Maxwell equations, ideal hydrodynamics or the Einstein equations of general relativity. However, there are limited results in the context of relativistic viscous hydrodynamics. It was not until very recently that a study was performed for a set of MIS-like equations <cit.>. The result is that these theories present some limitations, for example, local well posedness may depend on the specific state under consideration. Thus, the conditions must be checked pointwise in spacetime for each solution. These conditions were shown to fail in realistic simulations of the QGP created in heavy-ion collision experiments <cit.>. In contrast, for the relativistic Navier-Stokes equations, once a causal frame has been chosen, local well posedness is satisfied independently of the specific state. Thus, this is a clear advantage of relativistic Navier-Stokes over MIS. Related to the previous point, there is another argument regarding the principles of relativity. For the MIS equations the ability to propagate information (more precisely, the characteristic velocities of the PDEs) might be larger than the speed of light, and again this depends on the specific state and must be checked pointwise in every evolution. In contrast, for the relativistic Navier-Stokes equations the characteristic velocities are fixed once the frame has been chosen, and they do not depend on the specific state. Choosing a frame for which the characteristic velocities are not larger than the speed of light ensures that every evolution performed with those equations respects the principles of relativity. In MIS it cannot be ensured that the solution respects the principles of relativity, and this has to be checked for every solution. Another important advantage of relativistic Navier-Stokes is that it is able to capture arbitrarily strong shockwave solutions, while MIS-type theories may present limitations evolving large shocks; this may be an advantage in astrophysical applications. The ability to evolve large shocks is related to the characteristic velocities, and in MIS-type theories, these depend on the state and might be less than the speed of light. A shock with velocity larger than the characteristic velocities cannot be evolved with the MIS equations and hence their predictive power is lost in these situations. In relativistic Navier-Stokes, once the frame has been fixed, the characteristic velocities are also fixed. Moreover, frames can be chosen such that these characteristic velocities are equal to the speed of light. Mathematical results in this direction were obtained in <cit.> and numerical studies were performed in <cit.>, providing evidence that arbitrarily large shocks can be evolved with the relativistic Navier-Stokes equations.
An advantage of MIS-type formulations is that the numerical infrastructure is already implemented in the context of the QGP. For relativistic Navier-Stokes this infrastructure is still under construction, and this paper aims to contribute towards this goal. The above arguments suggest that first-order viscous hydrodynamics might be a very good alternative to MIS-type theories in aspects which are relevant for an appropriate description of physical systems of interest like neutron star mergers or the QGP created in heavy-ion collisions. However, for a practical implementation of the first-order viscous hydrodynamics equations it is relevant to address the following question: now that we are no longer using traditionally preferred frames like the Landau frame, which frame should we use in actual numerical evolutions? In principle, one can use any frame within the set of causal frames. Actually, if the system is within the effective field theory regime, then it should not matter which specific frame is being used, and using different frames should lead to an equivalent physical description. Equivalently, if the system is within the effective field theory regime, the physics to first order should be independent of the arbitrarily chosen frame. If not, the equations would not have predictive power. Numerical studies of the first-order viscous hydrodynamics equations were performed in <cit.>. These papers constitute a proof of principle that first-order viscous hydrodynamics produces physically sensible real-time evolutions. Moreover, numerical techniques have been studied, with extensions to non-conformal and charged systems. In this paper we go beyond the state of the art by performing a detailed, quantitative study of the effect of field redefinitions in numerical evolutions of the relativistic Navier-Stokes equations for a conformal theory. For this purpose we consider different sets of initial data: a small amplitude perturbation of a homogeneous thermal state, a large amplitude Gaussian perturbation of a homogeneous state, and shockwave solutions. We define a specific set of criteria to assess different conditions on the numerical solutions. By using these criteria, we make precise and provide evidence for the statement that the arbitrarily chosen frame does not affect the physics to first order, as long as the system is in the effective field theory regime. Motivated by the physics of the QGP, we also study the effect of field redefinitions in situations where the system is marginally in the effective field theory regime. Furthermore, we also study the effect of changing the frame in the initial data. In this paper we use interchangeably the terminology `relativistic first-order viscous hydrodynamics equations', `relativistic Navier-Stokes equations' and `BDNK equations'. From the perspective of effective field theory it is natural to use `relativistic Navier-Stokes equations' <cit.>. We define it as a collection of sets of partial differential equations (PDEs), where each set of PDEs is obtained by choosing a hydrodynamic frame, truncating the hydrodynamic gradient expansion of the constitutive relations up to first order in derivatives and plugging it into the conservation equation for the stress tensor.
Different frames lead to different truncations, and hence to a different set of PDEs for each frame. Each set of PDEs can be considered as a different version of relativistic Navier-Stokes, like Eckart's or Landau's versions; it is expected that they all describe equivalent first-order physics as long as the solutions are in the effective field theory regime. The main results of this paper provide a quantitative assessment of this point of view.[See <cit.> for a detailed and rigorous study of the validity of effective field theory in the context of the Abelian Higgs model, where similar issues arise.] The reader interested in a summary and conclusions may jump to Section <ref> Discussion.

§ FIRST-ORDER HYDRODYNAMICS AND EFFECTIVE FIELD THEORY

We consider a conformal fluid in 3+1 dimensions in Minkowski spacetime. Conformal symmetry fixes the equation of state p = ϵ/3, where ϵ is the energy density and p is the pressure. It also imposes that the energy density is proportional to T^4, ϵ=𝒞 T^4, where T is the temperature and 𝒞 is a constant that we choose to be 𝒞=3/4π^4.[We choose this constant motivated by holographic fluids.] The conservation of the stress-energy tensor ∇_μT^μν = 0 provides the evolution equations for the relativistic fluid. Considering the 4-velocity of the fluid u^μ, normalized such that u^2=-1, we define Δ^μν≡ g^μν + u^μ u^ν, ϵ̇≡ u^μ∇_μϵ, ∇_⊥^μ≡Δ^μν∇_ν, ∇·u ≡∇_ρ u^ρ and σ^μν≡ 2∇^⟨μ u^ν⟩, where A^⟨μν⟩≡1/2Δ^μαΔ^νβ(A_αβ+A_βα)-1/3Δ^μνΔ^αβA_αβ is symmetric, traceless and transverse to u^μ. We start by presenting the equations of ideal hydrodynamics. The constitutive relations of ideal hydrodynamics are T^μν = ϵ(u^μ u^ν + 1/3 Δ^μν). The conservation of the stress-energy tensor (<ref>) with (<ref>) provides evolution equations for the dynamical variables, namely ϵ and the independent components of the velocity vector u^μ, 3/4 ϵ̇/ϵ+∇· u=0, u̇^μ+1/4 ∇_⊥^μϵ/ϵ =0. The constitutive relations of relativistic hydrodynamics up to first order in the gradient expansion in the Landau frame[The Landau frame is defined by a local fluid velocity that satisfies u_μT^μν = -ϵ u^ν, where ϵ is the energy density in the local rest frame of the fluid.] can be written as T^μν = ϵ(u^μ u^ν + 1/3 Δ^μν) -η σ^μν + O(∂^2). In the spirit of effective field theory we consider the most general field redefinition of the fundamental variables {ϵ,u^μ} by first order terms compatible with Poincaré and conformal symmetries. There are two Lorentz scalars ϵ̇, ∇· u and two Lorentz vectors u̇^μ, ∇_⊥^μϵ that we can construct with the first derivatives of the dynamical fields. Conformal symmetry then uniquely determines the most general field redefinition <cit.> ϵ →ϵ+ a_1 η(3/4 ϵ̇/ϵ+∇· u ) + O(∂^2), u^μ → u^μ + 3a_2 η/4ϵ(u̇^μ+1/4 ∇_⊥^μϵ/ϵ) + O(∂^2). In these equations, the overall factor multiplying the terms in brackets has dimensions fixed by conformal symmetry, and we choose it to be proportional to the shear viscosity η times a constant, which we denote by a_1 or a_2, respectively. By performing a general field redefinition (<ref>) of the constitutive relations in the Landau frame (<ref>) and neglecting higher order terms, we obtain the stress tensor up to first order in the gradient expansion in a general hydrodynamic frame T^μν=  ϵ(u^μ u^ν + 1/3 Δ^μν)-η σ^μν+ a_1 η( 3/4 ϵ̇/ϵ+∇· u )(u^μ u^ν + 1/3 Δ^μν)+a_2 η[(u̇^μ+1/4 ∇_⊥^μϵ/ϵ)u^ν + (μ↔ν)]+ O(∂^2). The two real numbers {a_1,a_2} in (<ref>) specify the hydrodynamic frame.
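To make the field redefinitions (<ref>) concrete, the following minimal sketch applies them numerically, specializing for illustration to the 1+1 planar dynamics used later in the paper and to small velocities (so that u^t ≈ 1, ∇·u ≈ ∂_x u_x and ϵ̇ ≈ ∂_t ϵ). The function name and the NumPy discretization are our own illustrative choices, not part of any published code:

```python
import numpy as np

def shift_frame(eps, ux, deps_dt, dux_dt, dx, eta, a1, a2):
    """Apply the first order field redefinition with coefficients
    {a1, a2} to the fields {eps, u_x} sampled on a uniform 1D grid,
    in the small-velocity, 1+1 planar approximation. Higher order
    terms are neglected, as in the text."""
    deps_dx = np.gradient(eps, dx)
    dux_dx = np.gradient(ux, dx)
    scalar = 0.75 * deps_dt / eps + dux_dx      # 3/4 eps_dot/eps + div(u)
    vector = dux_dt + 0.25 * deps_dx / eps      # u_dot^x + grad_x(eps)/(4 eps)
    eps_new = eps + a1 * eta * scalar
    ux_new = ux + 3.0 * a2 * eta / (4.0 * eps) * vector
    return eps_new, ux_new
```

As discussed later in the paper, changing from a frame α to a frame β then amounts to calling this helper with coefficients {a_1^α-a_1^β, a_2^α-a_2^β}.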
The Landau frame constitutive relations (<ref>) correspond to {a_1,a_2}={0,0}. The stress tensor (<ref>) can be equivalently obtained by considering the most general stress tensor up to first order in the gradient expansion compatible with Poincaré and conformal symmetries <cit.>. The first order terms in (<ref>) are the shear term -η σ^μν, which is field redefinition independent, and the terms in the second and third lines of (<ref>), which we refer to as the `a_1 term' and `a_2 term' respectively; these depend on the choice of frame. We use the notation T_μν^(0) to refer to the ideal part and T_μν^(1) to the first order part in (<ref>). The equations of first-order viscous hydrodynamics are obtained by plugging (<ref>) into the conservation equation for the stress tensor (<ref>), and are of second order in derivatives. Inspired by the works of Van and Biro <cit.> and Freistuhler and Temple <cit.>, Bemfica, Disconzi and Noronha <cit.> and Kovtun <cit.> showed that in some frames the equations of first-order hydrodynamics present good causality and stability properties. The key observation is that different hydrodynamic frames give rise to different evolution equations at the level of two derivatives. More precisely, the principal part of the equations, that is, the highest derivative terms, depends on the constants a_1, a_2 and hence on the choice of frame. Local well-posedness and the causality properties of the PDEs are determined by the principal part of the equations, and these references found that in some frames the equations have these good properties. More specifically, for the conformal fluid it was proven <cit.> that if a_1 ≥ 4,      a_2 ≥ 3a_1/(a_1-1), then the PDEs (<ref>) with (<ref>) are strongly hyperbolic and the initial value problem is locally well-posed.[By `well-posed' we mean that the solution locally exists, is unique and depends continuously on the initial data, which should be in a suitable Sobolev space.] Also, the characteristic velocities can be chosen to be no larger than the speed of light, thus respecting the principles of relativity. These results provide an answer to the longstanding question of why first-order viscous hydrodynamics had apparently bad causality properties: it is just a matter of the choice of hydrodynamic frame, and it can be solved by an appropriate field redefinition. The Landau frame {a_1,a_2}={0,0} lies outside the region (<ref>), and there is a gap between the causal frames and the Landau frame. Another important aspect of the first-order viscous hydrodynamics equations is that in the Landau frame the equilibrium states are unstable <cit.>. Thus, apart from the local well-posedness and causality properties, it is also important to discuss the stability of equilibrium states under the conditions (<ref>), and this was shown in <cit.>. A relevant result in this direction was recently proven in <cit.>, where it was shown that under the conditions (<ref>), if a thermal equilibrium state is stable in one Lorentz frame, then it is stable in all Lorentz frames. A natural question that arises if one is familiar with working in the Landau frame is the physical meaning of the `extra terms' a_1 and a_2 in (<ref>). We now provide intuition about the significance of these terms. We start by emphasizing that the field redefinition dependent terms in (<ref>), that is, the a_1 and a_2 terms, are proportional to the lower order equations of motion, namely the ideal hydrodynamics equations (<ref>). Thus, upon using the equations of motion, these terms are equivalent to second order terms <cit.>.
Actually, this is a general principle in effective field theory: when a term at a given order in the derivative expansion is proportional to the lower order equations of motion, it can be pushed to a higher order by a field redefinition. Therefore, on shell, the a_1, a_2 terms are effectively of second order, and hence one should expect that their contribution to the first order physics is negligible as long as the system is in the effective field theory regime. In other words, in the effective field theory regime they are smaller than the shear viscosity term, which is of first order and field redefinition independent. Thus, changing frame should leave the physics to first order invariant, as it is a second order effect.[In charged fluids the physical interpretation of the local fluid velocity is different in the Landau and Eckart frames, and this is because the change of frame is first order but not proportional to the ideal equations of motion. In the conformal and uncharged fluids studied in this paper, all field redefinitions are second order on shell, and so the physical interpretation of the fundamental variables does not change under field redefinitions.] This is a formal statement in effective field theory, where gradients are assumed to be infinitesimally small. However, in realistic situations the gradients are finite, and hence it would be desirable to have some specific quantitative criteria to assess whether the higher order gradients are indeed negligible compared to the first order ones. In the following we suggest some criteria that capture different aspects relevant for this discussion; more precise definitions of these criteria will be given in Section <ref>.

* Criterion A  The `size' of each term evaluated on a solution of the viscous equations at a given order in the hydrodynamic gradient expansion is smaller than any of the lower order ones.

* Criterion B  The `sizes' of the a_1 and a_2 terms in (<ref>) evaluated on a solution of the viscous equations are much smaller than that of the shear term.

* Criterion C  The difference between two solutions of the viscous equations obtained in two different frames is much smaller than the difference between the solution of the ideal equations and any of the solutions of the viscous equations.

In practice, to check Criterion A we will evaluate the ideal and first order terms in (<ref>) on our numerical solutions. Criteria B and C are two different and complementary criteria to make precise when the physics to first order in the gradient expansion is independent of the choice of hydrodynamic frame. For Criterion C to be meaningful, we need to require that the different causal frames are sufficiently distinct from each other; otherwise, by continuity and Cauchy stability, two different frames corresponding to values of a_1 and a_2 that are sufficiently close will give solutions that are trivially close to each other even if they are not in the effective field theory regime. For this reason, in this paper we will compare situations in which the frames {a_1,a_2} differ by at least a factor of 2. Typically used values will be {a_1,a_2}={5,5} and {a_1,a_2}={10,10}. We will consider different sets of initial data to obtain solutions to the viscous equations that we will use to analyse in detail the different criteria above.
In particular, we study situations in which the system is marginally in the effective field theory regime. This is motivated by the physics of the QGP created in heavy-ion collision experiments, where at the initial stages of the hydrodynamic evolution gradients might not be small, and hence it is not clear that the system is within the effective field theory regime. We will explore the behaviour under change of frames by employing Criteria B and C in this limiting situation in which Criterion A might be violated. Also motivated by the physics of the QGP, we will address the following question: given some initial data, how do we change from the usual Landau frame to the chosen causal frame? To do so, we will study the effect of the change of frame in the initial data. The equations of first-order relativistic hydrodynamics (<ref>) with (<ref>) in a frame satisfying (<ref>) can be understood as a UV completion of hydrodynamics in the following sense. The equations are obtained after truncating the gradient expansion of the stress-energy tensor to first order in derivatives (<ref>), and then plugging the latter into (<ref>). In general, solutions to these truncated equations will not be in the effective field theory regime: they could be arbitrarily far from equilibrium, for example by setting initial data with large gradients.[In particular, the theory (<ref>) contains non-hydrodynamic modes that depend on the choice of frame. Similarly, MIS-like theories also contain non-hydrodynamic modes and can be thought of as different UV completions.] The fact that there are solutions to the viscous hydrodynamics equations that are not in the effective field theory regime is a generic feature of truncated effective field theories. If we want a solution of the truncated theory to correspond to the long distance description of a solution of the microscopic theory, the former must be in the effective field theory regime at all times. Let us emphasize that this is the case not only for this particular set of equations; the same happens with the equations of ideal hydrodynamics and other theories of viscous hydrodynamics such as MIS-type equations. The fact that we construct solutions to the equations of (viscous) hydrodynamics does not necessarily mean that such solutions are consistent; for instance, there may be physical situations in which solutions explore the UV of the theory, e.g., turbulence in 3+1 dimensions, and preventing a flow of energy to the UV in an ad hoc way in the truncated theory may be unphysical. We also emphasize that stating that the system is in the `regime of hydrodynamics' may sound trivial because we are solving the hydrodynamics equations, but it is not: we mean that the system is within the effective field theory regime, and this is in general not true for certain solutions of the equations of hydrodynamics. To assess if a solution is in the regime of hydrodynamics one would typically compare ideal and first order terms evaluated on the solution, that is, Criterion A. However, we can try to do better. One motivation is the following: there might be situations in which the system is away from the regime of hydrodynamics and still the first order terms are smaller than the ideal terms, so a check based only on comparing first order and ideal terms would fail to detect it. One example is adding a perturbation to a homogeneous thermal state with a very small amplitude and large momentum: the small amplitude can make the first order terms small even if the momentum is large compared to the temperature.
In that case the ratio of second order terms to first order terms would be large, indicating that the system is not in the regime of hydrodynamics, but the ratio of the first order terms to the ideal terms is small. Another example is the Gaussian initial data that we explore in Section <ref>: at t=0 gradients might be arbitrarily large (for sufficiently small width of the Gaussian) and yet the shear term vanishes. Then, can we improve our practical assessment of whether the system is in the regime of hydrodynamics? We now provide one possible idea. Nothing prevents us from measuring the size of the higher order gradients of our specific solutions; in particular, we can evaluate the second derivative terms in the gradient expansion of the stress-energy tensor on the solutions of the first order theory. Note that doing so is perfectly consistent in our scheme since we are solving the classical equations of motion, which are of second order in derivatives; therefore, the second order terms in the gradient expansion are well-defined in our solutions.[The presence of viscosity smooths out shock solutions, which are therefore continuous.] For this purpose, we collect here the constitutive relations of relativistic viscous hydrodynamics up to second order in the derivative expansion in the Landau frame for a conformal fluid <cit.> T^μν =  ϵ u^μ u^ν + p Δ^μν -η σ^μν+η τ_π(σ̇^⟨μν⟩+1/3 σ^μν ∇·u)+λ_1 σ^⟨μ_ρσ^ν⟩ρ+λ_2 σ^⟨μ_ρΩ^ν⟩ρ+λ_3 Ω^⟨μ_ρΩ^ν⟩ρ, where η is the shear viscosity, τ_π, λ_1, λ_2, λ_3 are second order transport coefficients and Ω_μν=Δ_μ^αΔ_ν^β∂_[αu_β] is the vorticity tensor. We will use the second line of (<ref>) to measure the size of the second order gradients in our solutions. Note that (<ref>) is in the Landau frame, and our solutions will be in a different (causal) frame. We could change the hydrodynamic frame of our solutions to the Landau frame and evaluate (<ref>) there. However, the contribution of the terms corresponding to the change of frame to the second line in (<ref>) is of higher order, and hence negligible if the solution is in the hydrodynamic regime. Having a measurement of the size of the second order terms allows for a more detailed assessment of when the solution is in the regime of hydrodynamics, since we can compare these second order terms with the ideal and first order terms. Hence, we will state that the system is in the effective field theory regime when there is a clear hierarchy between the `sizes' of the ideal, first order and second order terms. In practice, when considering Criterion A, we will include the second order terms even though we do not use them to obtain the solutions.

§ REAL-TIME EVOLUTIONS

In this section we perform a detailed study of real-time numerical evolutions of the relativistic first-order viscous hydrodynamics equations of a conformal fluid in 3+1 dimensions. These are obtained by plugging (<ref>) into the conservation equation (<ref>). We perform evolutions in Minkowski spacetime and we restrict the dynamics to 1+1 dimensions, assuming homogeneity along the other two spatial dimensions. We use Cartesian coordinates (t,x,y,z) and we choose the dynamics to take place along the x-direction. Our evolution variables are {ϵ, u_x}, where u_x is the x component of the 4-velocity u^μ.
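As an illustration of how the sizes of the different contributions can be monitored from the evolved fields, the following sketch (our own reconstruction, not the code of <cit.>) evaluates the ideal, shear, a_1 and a_2 contributions to T_xx on a uniform grid, to leading order in the velocity, where σ^xx ≈ (4/3)∂_x u_x and u^x u^x + Δ^xx/3 ≈ 1/3:

```python
import numpy as np

def txx_terms(eps, ux, deps_dt, dux_dt, dx, eta, a1, a2):
    """Ideal, shear, a1 and a2 contributions to T_xx of (<ref>) for
    1+1 planar dynamics, to leading order in the velocity. Inputs
    are 1D arrays sampled at one time slice on a uniform grid."""
    deps_dx = np.gradient(eps, dx)
    dux_dx = np.gradient(ux, dx)
    ideal = eps / 3.0                        # eps (u^x u^x + Delta^xx/3)
    shear = -(4.0 / 3.0) * eta * dux_dx      # -eta sigma^xx
    scalar = 0.75 * deps_dt / eps + dux_dx   # ideal equations, scalar part
    vector = dux_dt + 0.25 * deps_dx / eps   # ideal equations, vector part
    a1_term = a1 * eta * scalar / 3.0        # a1 term, xx component
    a2_term = 2.0 * a2 * eta * vector * ux   # a2 term, xx component
    return ideal, shear, a1_term, a2_term
```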
The numerical code that we use was presented in <cit.>; see Appendix A of that paper for details.[Even if our code evolves the equations in a 2+1 dynamical set-up, in this paper we present evolutions with dynamics in 1+1 dimensions for presentation purposes: it is enough for the physics that we want to explore and easier to present. We do not expect that the qualitative conclusions obtained in this paper depend on the dimensionality of the dynamics of the problem.] In Appendix <ref> we include some convergence tests for the runs presented in this paper. In our evolutions we use hydrodynamic frames {a_1,a_2} that satisfy the causality conditions (<ref>). Typical values used in this paper will be {a_1,a_2}={5,5} or {a_1,a_2}={10,10}, but we will also use other frames that will be specified in due course. We use the conformal equation of state (<ref>) and the shear viscosity η = s/4π, where s is the entropy density. We use the value of shear viscosity (<ref>) of large-N and large (infinite) coupling holographic theories, which is a universal result within holographic theories with an Einstein gravity dual <cit.>. Another motivation for choosing (<ref>) is that this value is very close to the shear viscosity measured in the QGP created in heavy-ion collision experiments <cit.>. Even if in this paper we only perform evolutions of the first-order viscous hydrodynamics equations, in order to assess if our solutions are in the effective field theory regime we will evaluate pointwise in spacetime the second line of (<ref>), as explained above. For this purpose, we use the following second order transport coefficients <cit.>: η τ_π = s/8 π^2 T(2-ln 2), λ_1= s/8 π^2 T, λ_2= s/8 π^2 T(-2ln 2), λ_3= 0. These are the coefficients of a set of 3+1 dimensional holographic conformal field theories, such as 𝒩=4 Super Yang-Mills, which admit a dual gravitational description in terms of the Einstein-Hilbert action plus a negative cosmological constant. In planar 1+1 dynamics only τ_π and λ_1 are relevant, as the tensors multiplying λ_2, λ_3 vanish. We use these coefficients because in these holographic theories they are known; in other frameworks, obtaining these coefficients from a microscopic quantum field theory may be challenging. We will start by performing evolutions of a small perturbation of a homogeneous thermal state. We consider small amplitude sinusoidal perturbations with a large wavelength, and thus by construction this system is close to the linear hydrodynamic regime. We evolve the equations at the full non-linear level, but being close to the linear regime gives us good control of the system. This is particularly useful when studying the effect of field redefinitions. More specifically, by continuously varying the momentum we can have a system that is well within the regime of hydrodynamics (low momentum), or far from equilibrium (large momentum), thereby allowing us to assess the effect of field redefinitions in situations where the system may be on the verge of exiting the regime of hydrodynamics. We will continue our studies by considering initial data corresponding to a thermal state deformed by a Gaussian with a large amplitude to explore the non-linear regime of the theory. By tuning the width of the Gaussian profile, we can smoothly interpolate between the regime of hydrodynamics (large widths) and the far-from-equilibrium regime (very small widths). Finally, we continue exploring the non-linear regime of the theory in more extreme situations by studying shockwave solutions.
By changing the amplitude of the shockwave we can take the system to be well in the regime of hydrodynamics (small shocks) or far from equilibrium (large shocks).

§.§ Small perturbation of a homogeneous thermal state

We start by considering a physical system in which we can study the properties of the first-order viscous hydrodynamics equations under well controlled conditions: a small amplitude sinusoidal perturbation of a homogeneous thermal state. By choosing large wavelength perturbations, the system is by construction in the regime of (linear) hydrodynamics. Choosing a small amplitude is useful because we can obtain some intuition from linear hydrodynamics; we use this intuition to analyse our solutions, which are obtained by solving the full non-linear equations. We consider a thermal homogeneous state with temperature T and energy ℰ=3/4π^4 T^4, and a sinusoidal perturbation of amplitude 0.01 ℰ and momentum k/T≃ 0.184. Specifically, the initial data is as follows: ϵ |_t=0 = ℰ( 1+ 0.01 cos(k x) ), ∂_t ϵ|_t=0 = 0, u_x|_t=0 = 0, ∂_t u_x|_t=0 = 0.01 k sin(k x)/4(1+ 0.01 cos(k x)). The time derivative of the velocity at t=0 is chosen so that the ideal hydrodynamics equations (<ref>) are initially satisfied. If a system is expected to be in the regime of hydrodynamics, the ideal equations should be nearly satisfied, up to higher order terms which are supposed to be small. Therefore, setting them to be exactly zero initially should be a good approximation to this situation. Other options, such as setting to zero the time derivatives at t=0, would imply that the ideal equations are not initially satisfied; later on we will analyse this case in detail. Also, very importantly, note that if the initial data satisfies the ideal equations, then the first order change of frame vanishes at t=0. This means that we can study the effect of using different frames along the evolution without having to worry about the effect of changing frame in the initial data; we leave the latter as another exercise to explore in Section <ref> of the paper. We choose ℰ=1 and a spatial domain of size LT≃ 34.2 with periodic boundary conditions. We perform the real-time evolution of the non-linear first-order viscous hydrodynamics equations (<ref>) with (<ref>), initial conditions (<ref>) and hydrodynamic frame {a_1,a_2}={5,5}. The resulting solution is shown in Fig. <ref> (left) for the T_tt component of the stress tensor. This evolution captures the well known physics of the sound mode, with frequency and decay rate well approximated by <cit.>: ω(k) = c_s k - i Γ/(2(ϵ+p)) k^2 + O(k^3), with c_s=1/√(3), p=ϵ/3 and Γ=4η/3.

§.§.§ The criteria

We start by examining if the solution is within the regime of hydrodynamics, and for this we explain in detail our definition of Criterion A. We use the numerical solution for {ϵ,u_x} to evaluate pointwise in spacetime the different terms of the constitutive relations (<ref>); in Fig. <ref> (right) we show the absolute value of the different terms in the T_xx component of the stress tensor (<ref>),[The reason why we analyse the relative sizes of the terms of the T_xx component of the stress tensor (<ref>), and not T_tx or T_tt, is that in the shear tensor σ_μν, because of transversality to the velocity, the component σ_tx is suppressed by the velocity and the component σ_tt by the square of the velocity, and so these components are much smaller than σ_xx for this solution, in which velocities are small.] on a log scale at x=0 as functions of time.
Comparing the ideal term (solid black) with the shear term (solid blue), we conclude that first order terms are much smaller than ideal terms, indicating that the system is within the regime of hydrodynamics. To be precise, we should compare the ideal term T_xx^(0) with the full first order T_xx^(1) (shear term + a_1 term + a_2 term), and not only the shear term; however, as we will describe in detail below, the a_1, a_2 terms are much smaller than the shear term, and so T_xx^(1)≃ -η σ_xx is a very good approximation. As explained in the previous section, even if in the theory under consideration the stress tensor does not include second order terms, we will measure the size of such terms in our solutions since nothing prevents us from computing the required higher order derivatives. We evaluate the second line of the constitutive relations (<ref>) using our numerical solution for {ϵ,u_x}, obtaining the solid purple line in Fig. <ref> (right). We use the transport coefficients (<ref>). Note that these second order expressions in the second line of (<ref>) are in the Landau frame, and our {ϵ,u_x} are not in the Landau frame; however, as we will verify below in detail, the difference between frames is of higher order, and hence negligible in our solution. So, it is justified to use expression (<ref>) as a very good approximation to the second order terms. By comparing ideal, first order and second order terms, we find that the amplitude of each term in the gradient expansion is much smaller than the previous one. Ideally we would like to provide a quantitative criterion that is useful for a generic solution and that does not rely on visually comparing the amplitude of each term. For example, one possible criterion could be that the system is in the regime of hydrodynamics if, pointwise in spacetime and for all components of the stress tensor, each term in the hydrodynamic expansion is not larger than a certain threshold value, e.g., 10%, of any of the lower order terms. However, a local, pointwise criterion has some disadvantages. For instance, at some specific points it might fail even for a solution that is clearly in the regime of hydrodynamics: in our previous example, the definition above always fails when one of the relevant quantities crosses zero, and they do because they are oscillating. We would like to capture the fact that the system is well described by hydrodynamics in spite of these regions. One possible idea to define a criterion that avoids this problem of having regions where the relevant quantities cross zero is to depart from the pointwise analysis and consider some kind of integral version, i.e., a suitable norm that captures the global aspects of the solution. The idea would be to integrate in patches, possibly extended both in space and time. It would be reasonable to use patches whose size is similar to the characteristic scale of the problem: if they are too small we go back to the same problems as in the pointwise analysis, and if they are too large, we wash out the details of the system. It would be interesting to perform an exploration of different such definitions, and we leave this systematic exploration for future work.
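In anticipation of the precise definitions adopted below, a windowed-norm comparison of this kind is simple to implement; the following sketch (function names and discretization are ours) computes the L_1 norm over a spacetime window and the ratios used by the three criteria:

```python
import numpy as np

def l1_norm(field, dx, dt):
    """L1 norm (integral of the absolute value) of a field sampled
    on a uniform (t, x) window."""
    return np.sum(np.abs(field)) * dx * dt

def l1_ratio(term, reference, dx, dt):
    """Ratio of windowed L1 norms, to be compared against the 10%
    threshold adopted in the text."""
    return l1_norm(term, dx, dt) / l1_norm(reference, dx, dt)

# For one component of the stress tensor:
#   Criterion A: l1_ratio(shear, ideal) <= 0.10 and
#                l1_ratio(second_order, shear) <= 0.10
#   Criterion B: l1_ratio(a1_term, shear) <= 0.10, likewise for a2_term
#   Criterion C (defined later in this section):
#                l1_ratio(Txx_v1 - Txx_v2, Txx_v1 - Txx_ideal) <= 0.10
```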
In this paper we will use the following definition: we compute the L_1 norm (i.e., the integral of the absolute value) of the relevant quantities over the full spatial domain and also in time, with a time integration domain of the order of the characteristic timescale of the system.[The choice of the L_1 norm is only for convenience; other norms could be equally useful. Also, the choice of 10% as a threshold value for acceptance is completely arbitrary. This value may depend on the system under consideration and the choice of initial conditions.]

Criterion A The system is within the effective field theory regime at a given time t if the L_1 norm over the spatial domain and over times {t-t̃,t+t̃}, where t̃ is a characteristic time scale of the problem, of the shear term is smaller than 10% of the L_1 norm of the ideal term, and also the L_1 norm of the second order terms is smaller than 10% of the L_1 norm of the shear term, for all components of the stress tensor.

In the case of a sinusoidal perturbation, we take t̃ to be the period of oscillation. Relevant data for this criterion is plotted in Fig. <ref> (right) at x=0 and in Fig. <ref> (top, left) in the whole spacetime domain, where we plot 10% of the shear term (in blue) and the second order term (in purple) in such a way that we can visually inspect their relative sizes. We do not include a plot of the ideal term because the shear term is orders of magnitude smaller, so it clearly satisfies the criterion. By computing the corresponding norms we verified that Criterion A is well satisfied in our solution at all times, and thus we conclude that the system is in the effective field theory regime. The ratio of the L_1 norms of the second order and first order terms is 2.2%, and that of the first order and ideal terms is 3.4%. We can then define the hydrodynamization time as the time that it takes for a system that is initially away from equilibrium, violating Criterion A, to relax to a regime satisfying Criterion A, and hence in the hydrodynamic regime. We now proceed with a detailed definition of our second criterion, denoted by Criterion B. We examine the size of the field redefinition dependent a_1, a_2 terms in (<ref>) and compare them to the shear viscosity term in (<ref>), which is field redefinition independent. According to the discussion in Section <ref>, by using the equations of motion the a_1, a_2 terms can be re-expressed, on shell, as second order terms, and thus in our solution they should be much smaller than the shear viscosity term; if they are much smaller, then the choice of frame will not affect the physics to first order.

Criterion B We will say that the physics to first order is independent of the chosen hydrodynamic frame at a time t if the L_1 norm over the spatial domain and over times {t-t̃,t+t̃} of the spatial components of both the a_1 and a_2 terms in the stress-energy tensor are smaller than 10% of the L_1 norm of the spatial components of the shear term (computed over the same spacetime domain).

Note that we only consider the spatial components of the stress tensor in this definition. This is because the shear tensor is transverse to the velocity and the temporal components are suppressed when the velocity is small, as in our solution, while in the a_1, a_2 terms this suppression is not present. Therefore, for the temporal components of the stress tensor, the shear term may be smaller than the a_1, a_2 terms. By computing the norms, we find that Criterion B is satisfied in our solution. In Fig.
<ref> (right) we can observe that at x=0 the a_1 term is well below 10% of the shear term. In Fig. <ref> (middle, left) we present the a_1 term (in red) and the a_2 term (in orange) in the spacetime domain, together with 10% of the shear term (in blue). As additional information, we note that the ratio of amplitudes of the a_1 term and the shear term is 0.19%, and that of the a_2 term and the shear term is 0.06%. Thus, we can conclude that the physics up to first order in the hydrodynamic gradient expansion is independent of the arbitrarily chosen frame used to perform the actual numerical evolution, see Fig. <ref>. After this conclusion, one could proceed in practical terms as follows. If the system under consideration is well within the regime of hydrodynamics, one could think of the a_1, a_2 terms in the first-order hydrodynamic equations as mere regulators that render the equations well-posed since, on shell, they are of second order, and thus negligible to first order. Then we can work as if we were in the Landau frame, as changing frame is a second order effect. However, this way of thinking of the equations might be appropriate only when the system is well within the regime of hydrodynamics, and one must be careful if it is not clear that the system is in that regime, such as in systems that are marginally in the regime of hydrodynamics, like the ones we study below. Now we would like to find an easy way to understand the value of the amplitude of the a_1, a_2 terms in this solution. A natural approach would be to linearize the a_1, a_2 terms, but under linearization the detailed cancellations between the addends are spoiled. Recall that the a_1, a_2 terms are proportional to the equations of motion, which are nearly satisfied if the solution is in the effective field theory regime, explaining why there are detailed cancellations. Another option would be to use the equations of motion to replace the ideal hydrodynamic equations in the a_1, a_2 terms by higher order terms. However, linearising these higher order terms also spoils the cancellations among them. A way forward is as follows. In the Landau frame, i.e., {a_1,a_2}={0,0}, the only contribution to the second order derivatives in the equations of motion comes from the shear term, and this contribution can be easily linearized. With this in mind, we can perform a field redefinition (<ref>) from the Landau frame to a generic frame {a_1,a_2} in which we replace the ideal hydrodynamic equations by the linearization of the contribution of the shear term to the terms with second order derivatives in the equations of motion. With these expressions in a general frame, we can now perform an expansion in amplitude of the relevant part of the a_1, a_2 terms (without spoiling any detailed cancellation), obtaining 3/4 ϵ̇/ϵ+∇· u ≃ η/ϵ (∂_x u_x)^2 + 3/4 a_2 η^2/ϵ^2 ∂_x^3 u_x + ..., u̇^μ+1/4 ∇_⊥^μϵ/ϵ ≃ η/ϵ ∂_x^2 u_x + 1/4 a_1 η^2/ϵ^2 ∂_t ∂_x^2 u_x + .... The first term on the right hand side comes from the leading term in amplitude of the shear part, and the second term from the leading term in amplitude of the a_1, a_2 contribution. Notice that in (<ref>) the term linear in the shear is quadratic in the velocity, whilst the term proportional to a_2 is linear in the velocity. Thus the latter dominates in our solution. In (<ref>) both the shear and a_1 parts are linear in the velocity, but the shear part is of lower order in derivatives, so it dominates.
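As a rough numerical companion to these expressions, the following sketch estimates the amplitudes of the shear, a_1 and a_2 contributions to T_xx for a sinusoidal sound wave. The velocity amplitude U ≈ c_s (3δ/4) inferred from the linear sound mode, the replacement ∂_t → c_s k, and the reading 𝒞 = 3/(4π^4) are our simplifying assumptions:

```python
import numpy as np

def term_amplitudes(k, T, a1, a2, delta=0.01):
    """Order-of-magnitude amplitudes of the shear, a1 and a2 terms in
    T_xx for a sound wave of relative density amplitude delta, using
    the linearized on-shell estimates derived above. Note that a2
    feeds into the scalar (a1 term) and a1 into the vector (a2 term),
    as in the text."""
    # causality conditions a1 >= 4, a2 >= 3 a1/(a1 - 1):
    assert a1 >= 4.0 and a2 >= 3.0 * a1 / (a1 - 1.0), "frame not causal"
    cs = 1.0 / np.sqrt(3.0)
    eps = 3.0 / (4.0 * np.pi**4) * T**4            # eps = C T^4
    eta = (4.0 * eps / (3.0 * T)) / (4.0 * np.pi)  # eta = s/(4 pi), s = 4 eps/(3T)
    r = eta / eps                                  # gradient-counting scale ~ 1/T
    U = cs * 0.75 * delta                          # sound-mode velocity amplitude
    scalar = r * (U * k)**2 + 0.75 * a2 * r**2 * U * k**3
    vector = r * U * k**2 + 0.25 * a1 * r**2 * cs * U * k**3
    shear_amp = (4.0 / 3.0) * eta * U * k
    a1_amp = a1 * eta * scalar / 3.0
    a2_amp = 2.0 * a2 * eta * U * vector
    return shear_amp, a1_amp, a2_amp

# Scan k/T to locate where the a1, a2 terms approach 10% of the shear term:
for kT in (0.184, 1.0, 1.37):
    sh, t1, t2 = term_amplitudes(kT, 1.0, 5.0, 5.0)
    print(kT, t1 / sh, t2 / sh)
```

With these assumptions, at k/T ≃ 0.184 the estimated ratios are close to the 0.19% and 0.06% quoted above, and the a_1 ratio approaches 10% near k/T ≃ 1.37.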
Expressions (<ref>) are very useful because they allow us to estimate the size of the a_1, a_2 terms in the constitutive relations (<ref>) in the particular case of the sinusoidal perturbation of a thermal state. We have analytical control over these terms, with explicit dependence on the values of the constants a_1 and a_2, the amplitude, the momentum, etc. Thus, we can perform an analysis of the size of these terms without the need to perform numerical simulations. In particular, it is interesting to analyze the limiting situation where k/T is not small and the system is starting to exit the regime of hydrodynamics, and check to what extent the a_1, a_2 terms are small or not. In Fig. <ref> we use the approximate expressions (<ref>) to obtain the amplitude of the a_1, a_2 terms of the T_xx component of the stress tensor (<ref>) for a sinusoidal perturbation of a thermal state (<ref>), as a function of momentum: the a_1 term in red, the a_2 term in orange. We also include the amplitude of the shear term in (<ref>) and the second order terms in (<ref>), which we obtain by linearizing these expressions. From Fig. <ref> we can draw the following conclusions. We observe that the a_1, a_2 terms are smaller than the shear term up to values k/T≃1 or even beyond. Thus, different frames provide the same physical description to first order. If we consider the criterion that the a_1, a_2 terms are smaller than 10% of the shear term, that is, Criterion B, in the case of Fig. <ref> the 10% threshold is reached at k/T≃ 1.37; however, this number will depend on the values of a_1, a_2 and on the amplitude of the perturbation (and of course on the choice of threshold): the larger a_1, a_2 are, the smaller the value of k/T at which the ratio reaches 10%. Thus, if we are close to the boundary of the regime of applicability of hydrodynamics, it is preferable to work with smaller values of a_1, a_2, as this extends the range of momenta over which the physics up to first order is frame independent according to Criterion B. We now consider a third criterion, which we denote by Criterion C. The motivation is the following. Above we have proposed Criterion B to determine when the physics up to first order is independent of the choice of frame. We can complement it with another criterion: perform two numerical evolutions in different causal frames with similar initial data and compare the two solutions after a certain time t. It could happen that even if Criterion B indicates that the a_1, a_2 terms are small at all times, their effects accumulate over time, resulting in large differences at late times. Another possibility is that even if the a_1, a_2 terms are not small, the solutions obtained in different frames remain close to each other. To assess this criterion, in addition to the solution in Fig. <ref> in frame {a_1,a_2}={5,5}, we now perform another evolution with similar initial data (<ref>), k/T≃ 0.184, but in the frame {a_1,a_2}={10,10};[One may wonder if the initial data should be changed to the corresponding working frame, but for this specific initial data the change of frame at t=0 is exactly vanishing; this is because it satisfies the ideal equations (<ref>) at t=0. Below we will address the question of changing frame in the initial data in cases where the change of frame is non-trivial.] we denote the two solutions by T^visc1_μν and T^visc2_μν respectively, shown in Fig. <ref> (left) in solid and dashed lines respectively.
According to the discussion in Section <ref>, we choose two frames that are well separated, in this case by a factor of 2. In Fig. <ref> (left) we present the difference of the stress tensor component T_xx of the two solutions, T^visc1_xx-T^visc2_xx, at x=0 as a function of time, in solid green. Having computed the difference T^visc1_xx-T^visc2_xx, we would like to find a relevant quantity to compare it with, so that we can establish a criterion to decide when the system is invariant under field redefinitions to first order in the hydrodynamic gradient expansion. One possibility could be to compare this difference with T_xx itself; however, this might not be sensible in our solution because the perturbation is small compared to the average value, and it does not tell us about the first order physics. Another possibility would be to compare this difference to the shear term; however, the former will typically increase with time while the shear term will decrease, in such a way that the difference might eventually become larger than the shear term. We propose the following approach. We consider a third solution, namely one obtained by solving the ideal hydrodynamics evolution equations using the initial data (<ref>)–(<ref>),[Recall that the ideal hydrodynamics equations are of first order in derivatives, thus we do not need to specify the time derivatives of the evolved quantities at the initial time. The latter were chosen so that the equations of ideal hydrodynamics are satisfied initially, so using this initial data for the ideal case is consistent.] k/T≃ 0.184. We denote the stress-energy tensor obtained from this third solution by T^ideal_μν. We can now compute the difference of the stress tensors of one viscous solution and the ideal solution, that is, T^visc1_μν-T^ideal_μν, and compare it with the difference of the stress tensors of the two viscous solutions, T^visc1_μν-T^visc2_μν. In this way we are comparing the effect of changing frames with the effect of including viscous terms in the hydrodynamic description, which is precisely what we want.

Criterion C We will say that the physics up to first order in the derivative expansion is invariant under a change of frame at time t when the L_1 norm of the difference T^visc1_μν-T^visc2_μν computed over the spatial domain and time interval {t-t̃,t+t̃} is less than 10% of the L_1 norm of the difference T^visc1_μν-T^ideal_μν (and T^visc2_μν-T^ideal_μν), for all components of the stress tensor.[Let us emphasize that both quantities T^visc1_μν-T^visc2_μν and T^visc1_μν-T^ideal_μν are expected to grow linearly in time (e.g., <cit.>), but their ratio should have a weaker time dependence, and hence this criterion could still be useful at much later times.]

In Fig. <ref> (left) we show the result of the difference T^visc1_xx-T^visc2_xx in solid green and T^visc1_xx-T^ideal_xx in solid grey at x=0. In Fig. <ref> (left, bottom) we show a plot over the whole spacetime domain of the quantities T^visc1_xx-T^visc2_xx and 10% of T^visc1_xx-T^ideal_xx; this allows for a direct visual inspection of the relevant quantities for Criterion C. Computing the L_1 norms, we find that the first one is smaller than 10% of the second one at all times, and thus we conclude that Criterion C is satisfied. The actual ratio of the L_1 norms of T^visc1_xx-T^visc2_xx and T^visc1_xx-T^ideal_xx is 0.14%. Thus, Criterion C also confirms that the frame dependent terms have a higher order effect that is negligible for the physics up to first order.
Also, we can conclude that the accumulated effect of the a_1, a_2 terms is small in this solution, and having locally small a_1, a_2 terms also implies having a small difference between the evolutions in different frames. Now we would like to address the following question: would it be possible to have control over the difference T^visc1_xx-T^visc2_xx? We will obtain an analytical estimate of the difference T^visc1_xx-T^visc2_xx, which will allow us to understand the size of this quantity. To this end, we use that the solution to the linear problem is known and given by the dispersion relation of the sound mode <cit.>: ω(k) = c_s k - i Γ/(2(ϵ+p)) k^2 + Γ^2/(8 c_s(ϵ+p)^2) k^3 - i a_2 ηΓ^2/(2(ϵ+p)^3) k^4 + O(k^5), up to O(k^4), with c_s=1/√(3), p=ϵ/3 and Γ=4η/3. Recall that this is the dispersion relation of the first-order hydrodynamics equations, that is, the truncated theory; beyond O(k^2), the terms would be different if we had included second and/or higher order terms in the expansion of the stress-energy tensor. Recalling that our initial data is x↔-x symmetric, the general solution to the linear problem is a sum of functions (Ã e^ω_Im t cos(ω_Re t) + B̃ e^ω_Im t sin(ω_Re t)) cos(k x) in harmonics of k, even if here we will restrict to the lowest harmonic, as the other terms are very small in our solution. Here {ω_Re,ω_Im} are the values obtained from evaluating (<ref>). Now, for the two solutions in Fig. <ref> (left) we use the analytical expression (<ref>) to fit T_xx and obtain the values of the constants Ã and B̃ in each case. We then consider the difference of the two analytical expressions (T^visc1_μν-T^visc2_μν)_analytical and include it in Fig. <ref> (left), in dotted green. We can conclude that we obtain a very good estimate of the difference T^visc1_μν-T^visc2_μν. Moreover, we can also perform a fit to the solution of ideal hydrodynamics using the ideal dispersion relation, and obtain the analytical difference (T^visc1_μν-T^ideal_μν)_analytical that we plot in dotted grey, obtaining also a good estimate. To be precise, we perform the aforementioned fits using data from some time t>0 with tT≃10, and not from t=0. The reason is that we observe that the fit is better if we avoid the first instants of the evolution, since there is a very small quasinormal mode that relaxes quickly and is not captured by (<ref>). It would be interesting to study this quasinormal mode in detail, but we leave it for future work. In principle Ã and B̃ can be obtained from the initial data analytically, as the initial data is also analytical. However, the fit that we obtain in doing so is not so good, precisely because of the presence of the aforementioned quasinormal mode. Using the analytical solution we can now provide an answer to the question: even if the a_1, a_2 terms are not small compared to the shear term, is it possible that solutions obtained in different frames capture the same physics up to first order? In other words: can Criterion B be violated while Criterion C is satisfied? The answer is yes, and we have an example in which this is under control: in the dispersion relation (<ref>) the constant a_2 enters at O(k^4) (and a_1 enters at O(k^6)); thus, if we consider a frame with a large value of a_2, the corresponding a_2 term in the stress tensor becomes large compared to the shear term, but the analytical solution is barely modified because it only receives corrections at O(k^4), and hence the evolution of the stress tensor in different frames will be very similar.
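The weak frame dependence of the mode itself can be checked directly from (<ref>); the sketch below (with illustrative parameter values, and the same reading of 𝒞 as before) evaluates ω(k) for two very different values of a_2:

```python
import numpy as np

def sound_mode_omega(k, eps, eta, a2):
    """Sound-mode dispersion relation (<ref>) of the first order
    theory, truncated at O(k^4); a2 first enters at this order."""
    cs = 1.0 / np.sqrt(3.0)
    w = 4.0 * eps / 3.0               # eps + p for the conformal fluid
    Gamma = 4.0 * eta / 3.0
    return (cs * k
            - 1j * Gamma / (2.0 * w) * k**2
            + Gamma**2 / (8.0 * cs * w**2) * k**3
            - 1j * a2 * eta * Gamma**2 / (2.0 * w**3) * k**4)

T = 1.0
eps = 3.0 / (4.0 * np.pi**4) * T**4
eta = (4.0 * eps / (3.0 * T)) / (4.0 * np.pi)
k = 0.184 * T
print(sound_mode_omega(k, eps, eta, 5.0))    # frame a2 = 5
print(sound_mode_omega(k, eps, eta, 100.0))  # much larger a2: only the
                                             # small O(k^4) piece changes
```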
Even if we could consider these solutions as counterexamples to the statement that Criterion A implies Criterion B, they are obtained by choosing unreasonably large values of a_1 and a_2. If we restrict {a_1,a_2} to values that are close to saturating the inequalities (<ref>), then we can exclude these counterexamples. Below we provide further arguments in favor of this choice.

§.§.§ A solution marginally in the effective field theory regime

Up to this point we have considered solutions that are in the regime of hydrodynamics by construction. In the experimental description of the QGP created in heavy-ion collisions, when the hydrodynamics codes are initialized gradients might be large, so it is unclear whether the QGP should be well described by (viscous) hydrodynamics. Motivated by this picture, we now explore a system that is on the verge of the hydrodynamics regime. We consider another pair of simulations, also in the frames {a_1,a_2}={5,5} and {a_1,a_2}={10,10}, and initial data (<ref>) but now with momentum a factor of ten larger, k/T≃ 1.84. In Fig. <ref> (right) we show the relevant quantities of these two evolutions at x=0, and in Fig. <ref> (right) we depict the relevant quantities to evaluate the different Criteria A, B and C in the whole spacetime. We observe that Criterion A is violated, that is, the system is slightly outside the regime of hydrodynamics, since the ratio of the L_1 norms of the second order and first order terms is 18%. Furthermore, the ratio between the L_1 norms of the a_1 term and the shear term is 15%, so Criterion B is also violated. However, Criterion C is satisfied, that is, the two solutions obtained in different frames are close to each other, since the ratio between the L_1 norms of T^visc1_xx-T^visc2_xx and T^visc1_xx-T^ideal_xx is 5.3%. This indicates that, according to Criterion C, the invariance of the physics up to first order under frame redefinitions is robust, even if the system is marginally in the effective field theory regime. Whether Criterion B is satisfied or not in a situation where the system is close to the boundary of the regime of hydrodynamics may depend on the specific values of {a_1,a_2}. If for a pair of values {a_1,a_2} the system is about to violate Criterion B, then choosing slightly larger values {a_1,a_2} will trivially lead to a violation of this criterion (assuming that the threshold value for acceptance is kept the same). Thus, whether Criterion B is satisfied or not in these limiting cases depends on the values of {a_1,a_2} chosen. Therefore, to avoid ambiguities, it is preferable to work with values {a_1,a_2} that are close to saturating the bound (<ref>). On the other hand, Criterion C is still robust in these circumstances. One would also like to understand how this system hydrodynamizes, that is, how long it takes for the system to become well described by hydrodynamics according to Criterion A. The answer is that this system does not seem to hydrodynamize in times of order 1/T or even much longer. Linear physics indicates that the solution will remain away from the effective field theory regime, as second order terms decay at the same rate as first order terms, with a ratio of amplitudes that is constant in time and thus does not decrease. It could be that non-linear effects change the picture at much later times.

§.§.§ The choice of frame

We now comment on the choice of frame in practical applications.
Above we have discussed detailed examples showing that the physics up to first order in the gradient expansion is independent of the chosen hydrodynamic frame {a_1,a_2}, as long as the system is within the regime of hydrodynamics. So, in principle, one could use any values of {a_1,a_2} that obey the hyperbolicity bounds (<ref>) and obtain equivalent physical descriptions up to first order. However, working with large values of {a_1,a_2} is not practical. The first reason is that the characteristic velocities of the PDEs depend on the values of {a_1,a_2}, and for larger values of {a_1,a_2} these velocities are smaller. Then, if there are velocities in the system that are larger than the characteristic velocities, the code will crash; we provide further comments in the study of shockwaves below. The second reason is that larger values of {a_1,a_2} will set more restrictive bounds on the size of the corresponding terms, which may result in rather artificial violations of Criterion B. For example, in the case studied in Fig. <ref> (right), if we used {a_1,a_2}={100,100} instead of {a_1,a_2}={10,10}, the difference between the two simulations would already be O(1). If we had done the same in the case of Fig. <ref> (left), we would still be well in the regime of hydrodynamics. Thus, this example illustrates the fact that using larger values of {a_1,a_2} may set more restrictive bounds for the validity of Criterion B. Recall that {a_1,a_2} have to satisfy the hyperbolicity bounds (<ref>). Thus, to avoid artificially large effects from changing frames it is convenient to work with values of {a_1,a_2} that are close to saturating (<ref>). In fact, the equality region in (<ref>) was coined the `sharply causal region', and it is interesting because the characteristics propagate exactly at the speed of light, and thus one can in principle evolve shocks of arbitrarily large amplitudes <cit.>. Hence, for these reasons it is convenient to work in the sharply causal frames; they are a sweet spot satisfying several interesting conditions. One such example would be {a_1,a_2}={25/4, 25/7}. A typical value used in this paper is {a_1,a_2}={5,5}, which is not far from this sharply causal region, and while it is quite arbitrary, it is equally valid.

§.§.§ Changing frame in the initial data

In the previous analyses we performed simulations in two different frames {a_1,a_2}={5,5} and {a_1,a_2}={10,10}, using the same initial data (<ref>) in both cases. This was consistent because for that particular choice of initial data the effect of changing frames (<ref>) is exactly vanishing. In more realistic applications, e.g., to describe experimental data in the context of heavy-ion collisions, one will be given initial data for the stress tensor, and not directly initial data for the evolution variables {ϵ,u_x} in the chosen causal frame. Thus, in practice, one has to obtain the initial data for the evolution variables in the working causal frame from a given stress tensor. Assuming that the initial stress tensor is in the regime of validity of effective field theory, one possible way to proceed is the following: start from initial data for the stress tensor and diagonalize it to obtain {ϵ,u_x} in the Landau frame, and then change frame using (<ref>). In generic circumstances the initial data will not satisfy exactly the ideal hydrodynamics equations, and the change of frame expressions will be non-trivial at that initial time.
However, if there are reasons to believe that the system is well described by hydrodynamics, then the ideal hydrodynamics equations should be nearly satisfied, and corrections should appear in a hierarchy of ever decreasing terms. For this reason, considering initial data in which the time derivatives {∂_t ϵ,∂_t u_x} are chosen to satisfy the ideal equations seems to be a good approximation. Moreover, this choice allowed us to analyse the effect of changing frames in the evolutions without having to address the question of changing frame in the initial data. Other choices, such as vanishing initial time derivatives of the evolution variables, tend to result in configurations for which the ideal hydrodynamics equations are far from being satisfied at t=0, which could result in significant differences in the initial data in different causal frames. In this section we address the effect of changing frame in the initial data. Let us be more precise about our procedure to change frames in the initial data. To change frames one uses the field redefinitions (<ref>), where the constants {a_1,a_2} in this expression specify the new causal frame. Note that this expression assumes that the system is in the regime of applicability of hydrodynamics, since it ignores second (and higher) order derivatives. Therefore, to change from frame α to frame β, the constants in (<ref>) are {a_1^α-a_1^β,a_2^α-a_2^β}, with the quantities on the right hand side in frame α and the ones on the left hand side in frame β. In particular, the change of frame from the Landau frame to the causal frame {a_1,a_2} is given by expression (<ref>) with the constants given by {-a_1,-a_2} and the quantities on the right hand side in the Landau frame. One could consider other procedures to change frames at the level of the initial data, and they would differ from the one described here by second and higher order gradients. Of course, if the system is not in the hydrodynamic regime, then different procedures will result in significantly different initial conditions. Recall that to solve the first-order viscous hydrodynamics equations we need to provide initial data for the time derivatives of the fundamental variables {∂_t ϵ,∂_t u_x}. We now comment on possible ways to consistently compute these time derivatives when changing the frame of the initial data. First, note that the time derivatives are of first order in gradients, and hence they should be the same in all frames, up to higher order terms that should be ignored. In practice, if we happen to know the thermodynamic variables over a time interval,[This would be the case if the experimental data is given as a time series, or if we have computed the solution using some other scheme such as MIS.] then we can change frame in the entire interval and compute the time derivatives of the new frame variables. This is what we will do in the examples below. Alternatively, if one only has data at t=0, one can compute the time derivative of expression (<ref>) and either ignore the second time derivatives or impose the equations of motion at t=0 in the original frame to compute them. In order to explore the effect of changing frame in the initial data we consider two different exercises. First, we consider a numerical solution in a given frame (see Fig. <ref>) at t>0, change frames, and thus obtain initial data to evolve the equations in the new frame. For concreteness, consider the solution in Fig. <ref>, which is in frame {a_1,a_2}={5,5}, at time t T≃ 17.1.
We change to frame {a_1,a_2}={10,10} using (<ref>) and use this as initial data for an evolution in the new frame. In Fig. <ref> (left) we show the original evolution in frame {a_1,a_2}={5,5} at x=0 in solid lines, and the new evolution in frame {a_1,a_2}={10,10} in dashed lines; the difference T^visc1_xx-T^visc2_xx is depicted in solid green. To carry out these simulations we computed the time derivatives for the initial data in the new causal frame {a_1,a_2}={10,10}; using instead the time derivatives computed in the original frame does not make a noticeable difference, as one might have expected. For comparison, we have also performed an evolution of the ideal hydrodynamics equations using initial data at t T≃ 17.1 from the solution in frame {a_1,a_2}={5,5} (using data in the frame {a_1,a_2}={10,10} does not make a difference). Notice that although the ideal hydrodynamics equations are not exactly satisfied at t T≃ 17.1, the error terms are small, as they should be since the system is well in the regime of hydrodynamics by construction. The difference T^visc1_xx-T^ideal_xx is shown in solid grey. From these results we conclude that the difference T^visc1_xx-T^visc2_xx between the two evolutions in different frames due to the change of frame in the initial data, see Fig. <ref> (left), is comparable to the difference introduced solely upon time evolution in different frames studied in Fig. <ref> (left), where, recall, the change of frame was vanishing at t=0. Moreover, we conclude that Criteria A, B and C are satisfied at a similar level in this situation. We repeated this exercise for initial data at other times, t T≃ 8.5, 25.6, obtaining similar conclusions.

Motivated by the physics of the QGP, we now study a case which is marginally in the regime of hydrodynamics by performing a similar exercise as the previous one, but with momentum ten times larger, k/T≃ 1.84. In Fig. <ref> (right) we show the result of using initial data at time t T≃ 1.71. Again, we conclude that the change of frame in the initial data introduces a difference T^visc1_xx-T^visc2_xx which is comparable to the difference generated solely upon time evolution studied in the example in Fig. <ref> (right). We find that Criteria A and B are violated at a similar level as in the situation in Fig. <ref> (right), and Criterion C is still satisfied at a similar level. We repeated the same exercise at times t T≃ 0.85, 2.56, obtaining similar conclusions. Thus, we can conclude that the change of frame performed in this way in the initial data does not change the picture regarding the applicability of Criteria A, B and C in these solutions: they are satisfied or violated at a similar level as in the situation of Fig. <ref>.

We now present our second example. Recall that the initial data (<ref>) was chosen so that the ideal hydrodynamics equations are identically satisfied (and hence the effects of changing frames are identically zero). Here we want to study a situation where initially the equations of ideal hydrodynamics are far from being satisfied. To do so, we assume that the initial data is in the Landau frame and modify (<ref>) by imposing ∂_t u_x|_t=0=0, that is:

ϵ|_t=0 = ℰ( 1 + 0.01 cos(kx) ), ∂_t ϵ|_t=0 = 0, u_x|_t=0 = 0, ∂_t u_x|_t=0 = 0.

Changing frames to our working causal frame {a_1,a_2} using the effective field theory prescription (<ref>) gives the following initial data:

ϵ|_t=0 = ℰ( 1 + 0.01 cos(kx) ), ∂_t ϵ|_t=0 = 0, u_x|_t=0 = (3/16) a_2 η · 0.01 k sin(kx) / [ ℰ ( 1 + 0.01 cos(kx) )^2 ], ∂_t u_x|_t=0 = 0.
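As a concrete illustration, the two sets of initial data just displayed can be assembled on a periodic grid as in the following sketch; the numerical values, and the grouping of constants in the u_x expression, follow our reading of the displayed formulas and should be regarded as illustrative assumptions rather than the production setup:

```python
import numpy as np

E, k, a2, eta = 1.0, 0.184, 5.0, 0.25     # illustrative values only
x = np.linspace(0.0, 2 * np.pi / k, 512, endpoint=False)   # one period

# Landau-frame data: the ideal equations are violated at t = 0
eps_L = E * (1.0 + 0.01 * np.cos(k * x))
ux_L = np.zeros_like(x)

# causal-frame data: the frame change generates a nonzero velocity profile
eps_c = eps_L.copy()
ux_c = (3.0 / 16.0) * a2 * eta * 0.01 * k * np.sin(k * x) \
       / (E * (1.0 + 0.01 * np.cos(k * x))**2)
```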
One can consider the time derivatives in (<ref>) and (<ref>) in the Landau frame or in the causal frame, as the difference is of higher order; here we have chosen to keep them in the Landau frame. The fact that the time derivatives at t=0 in (<ref>) (or in (<ref>)) vanish necessarily implies that the ideal equations are not satisfied and the error terms are not small. This might not correspond to a physically relevant situation, where the system should be in (or close to) the regime of applicability of hydrodynamics, but it allows us to assess the effect of the ideal equations not being satisfied in the initial data in a situation in which this effect is large.

Let us consider initial data (<ref>) in frame {a_1,a_2}={5,5}; the corresponding numerical evolution is shown in solid lines in Fig. <ref> (left). The same exercise in frame {a_1,a_2}={10,10} is shown in dashed lines, and the difference T^visc1_xx-T^visc2_xx is depicted in solid green. In addition, we compute the solution of the ideal hydrodynamics equations with initial data (<ref>),(<ref>), and the corresponding difference T^visc1_xx-T^ideal_xx is shown in solid grey.[For the initial data of the ideal equations we have three natural possibilities: use the data in the Landau frame (<ref>),(<ref>) or in either of the two causal frames (<ref>),(<ref>). We computed the three cases to check that the difference does not change the conclusions, even if it is not very small. In Fig. <ref> we plot only the first case.] The difference T^visc1_xx-T^visc2_xx introduced by the initial data (<ref>) at x=0 can be seen in Fig. <ref> (left), where the green line intersects the vertical axis. Upon time evolution the amplitude of the difference T^visc1_xx-T^visc2_xx does not vary considerably. If we compare this example to the one in Fig. <ref> (left), we find that the initial data for which the ideal hydrodynamics equations are initially far from being satisfied introduces a difference which is larger, by more than one order of magnitude, than the difference T^visc1_xx-T^visc2_xx generated solely via time evolution (with vanishing initial change of frame). On the other hand, in the example in Fig. <ref> (left), in which the initial data is such that the ideal hydrodynamics equations are nearly satisfied, the difference in the initial data is of the same order as the difference generated upon time evolution, see Fig. <ref> (left).

As we are using initial data for which the ideal hydrodynamics equations are far from being satisfied, one may suspect that at initial times Criterion A is violated. However, Criterion A is satisfied at all times, and in particular at t=0. This is not just the effect of integrating over the spatial domain and one period in time (which is the characteristic scale of the system): even locally in time the amplitudes of the ideal, first order and second order terms are each smaller than 10% of the previous one, even at initial times. We find it curious that, in spite of the ideal equations being largely violated at t=0, there is still a hierarchy in the gradient expansion even initially. Criterion B is satisfied at all times. In this case we observe that at initial times, and for a timescale much smaller than the period, the amplitude of the a_1 term is even larger than the shear term, but it quickly decays. In spite of this initial transient, integrating over a period in time the conditions of Criterion B are satisfied.[This example suggests that it could be interesting to extend the definition of our criteria to consider integration over different scales.
For example, if we integrated on scales ten times smaller than the period, Criterion B would be violated at initial times.] The fact that the a_1 term is large at t=0 is a consequence of the choice of the initial data. Criterion C is satisfied at all times. However, if we compare locally in time the amplitudes of the difference T^visc1_xx-T^visc2_xx with T^visc1_xx-T^ideal_xx, we find that initially the former is not smaller than 10% of the latter; this is restored at times t T≃ 4.

In Fig. <ref> (right) we repeat the previous exercise using similar initial data (<ref>) but with momentum ten times larger, k/T≃ 1.84, to consider a situation that is marginally in the regime of hydrodynamics. We find that the difference T^visc1_xx-T^visc2_xx along the evolution due to the change of frame in the initial data is considerably larger than the difference introduced solely upon time evolution (that is, with vanishing change of frame at t=0) studied in Fig. <ref> (right). Due to this larger difference, in this case all Criteria A, B and C are now violated for the times shown in the plot.

§.§ Large amplitude Gaussian perturbation

In the previous subsection we studied solutions that were in the linear regime by construction, even if we solved the full non-linear evolution equations. We now extend these studies to solutions that are well in the non-linear regime. For this purpose we consider a thermal homogeneous state with a large Gaussian perturbation of the energy density as initial data:

ϵ|_t=0 = ℰ( 1 + e^{-x^2/2σ^2} ), ∂_t ϵ|_t=0 = 0, u_x|_t=0 = 0, ∂_t u_x|_t=0 = x e^{-x^2/2σ^2} / [ 4σ^2 ( 1 + e^{-x^2/2σ^2} ) ].

By tuning the width σ of the Gaussian we can have a system that is well in the regime of hydrodynamics (large σ, i.e., small gradients), or a system that is far from the regime of hydrodynamics (small σ, i.e., large gradients). As in (<ref>), the time derivatives at t=0, eqs. (<ref>) and (<ref>), are chosen such that the ideal hydrodynamics equations are initially satisfied. As in the previous subsection, T and ℰ are the temperature and energy density of the thermal state, related by ℰ=(3/4)π^4 T^4.

We first consider the evolution of the initial data (<ref>) with width σT≃1.45 in the frame {a_1,a_2}={5,5}. This width is of the order of the microscopic scale, and hence we expect that the system is on the verge of the regime of applicability of hydrodynamics. In Fig. <ref> (top, left) we show the evolution of the T_tt component of the stress tensor. With the criteria defined in the previous subsection, we proceed with the analysis of the solution. We start by considering Criterion A. We perform this comparison for the T_xx component of the stress tensor (<ref>); in Fig. <ref> (bottom, left) we depict this component at x=0, and in Fig. <ref> (top, left) we show the second order terms and 10% of the shear term in T_xx in the whole spacetime. Note that since the initial data has vanishing velocity, initially the shear term is zero while the presence of spatial gradients implies that the second order terms are non-vanishing. Upon time evolution the system quickly relaxes to the regime of hydrodynamics, with first order terms much smaller than ideal terms, and second order terms smaller than the shear term, thus satisfying Criterion A. More precisely, we find that the L_1 norm of the second order terms is larger than 10% of the L_1 norm of the shear term from t T=0 to t T≃ 1.57. After this time, the L_1 norm of the second order terms is smaller than 10% of the L_1 norm of the shear term.
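To make the bookkeeping explicit, the Gaussian data and the L_1-norm comparisons behind the criteria can be sketched as follows; the array layout term[i_t, i_x], the illustrative parameter values, and the 10% threshold are conventions of this sketch, and the term arrays themselves are assumed to be produced by the evolution code:

```python
import numpy as np

E, sig, L, Nx = 1.0, 1.45, 40.0, 2048            # illustrative values
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)   # periodic grid
g = np.exp(-x**2 / (2 * sig**2))
eps0, ux0 = E * (1 + g), np.zeros(Nx)            # Gaussian data at t = 0
dteps0, dtux0 = np.zeros(Nx), x * g / (4 * sig**2 * (1 + g))

def L1(term, it, hw, dx, dt):
    """L_1 norm over space and the time window [t - t~, t + t~]."""
    return np.sum(np.abs(term[it - hw:it + hw + 1])) * dx * dt

def criterion_A(ideal, shear, second, it, hw, dx, dt, thr=0.10):
    return (L1(shear, it, hw, dx, dt) < thr * L1(ideal, it, hw, dx, dt) and
            L1(second, it, hw, dx, dt) < thr * L1(shear, it, hw, dx, dt))

def criterion_B(term_a1, term_a2, shear, it, hw, dx, dt, thr=0.10):
    n = thr * L1(shear, it, hw, dx, dt)
    return (L1(term_a1, it, hw, dx, dt) < n and
            L1(term_a2, it, hw, dx, dt) < n)

def criterion_C(Tv1, Tv2, Tid, it, hw, dx, dt, thr=0.10):
    return (L1(Tv1 - Tv2, it, hw, dx, dt)
            < thr * L1(Tv1 - Tid, it, hw, dx, dt))
```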
Therefore, we conclude that Criterion A is violated at initial times and restored at t T≃ 1.57; this may be considered as the hydrodynamization time. We may wonder why the system is initially away from the regime of hydrodynamics if the gradients are not very large. This is a consequence of the initial data: since the velocity is initially vanishing, the shear term is exactly zero while second order terms are non-zero. So, even for very large σ and very small gradients, we have to wait for the system to evolve for some time until first order terms become larger than second order ones.

We continue by studying Criterion B. In Fig. <ref> (left, bottom) we plot the a_1 term and the shear term at x=0. Moreover, in Fig. <ref> (middle, left) we plot the a_1 and a_2 terms together with 10% of the shear term in the whole spacetime. We find that the a_1, a_2 terms are smaller than 10% of the shear term in most of the domain, except for small regions where the shear term vanishes. However, computing the L_1 norms of the a_1 and a_2 terms and of the shear term shows that Criterion B is satisfied at all times.

In order to employ Criterion C we perform another real-time evolution with the same initial data but in frame {a_1,a_2}={10,10}. In Fig. <ref> (bottom, left) we show the result of this evolution in dashed lines. Recall that it is justified to use exactly the same initial data, as the change of frame is exactly vanishing at t=0 because the data satisfies the ideal hydrodynamics equations. We also perform an evolution using the equations of ideal hydrodynamics with the same initial data, eqs. (<ref>), (<ref>). We plot the difference T^visc1_xx-T^visc2_xx in absolute value in solid green in Fig. <ref> (bottom, left), and the difference T^visc1_xx-T^ideal_xx in solid grey. We also plot these two quantities in the whole spacetime in Fig. <ref> (bottom, left). We observe that the first difference is smaller than 10% of the second difference in almost all the domain, and by computing the corresponding L_1 norms we conclude that Criterion C is satisfied.

In the previous subsection we learned that the ratio between the a_1 terms in two evolutions in the different frames {a_1,a_2}={5,5},{10,10} was about a factor of 4, according to the approximate expression (<ref>). Even if the approximate expression is not applicable to these solutions, as they are well in the non-linear regime, we still find a difference which is roughly a factor of 4. Let us also comment that in Fig. <ref>, the top and middle plots are for the case {a_1,a_2}={5,5}. For the case {a_1,a_2}={10,10}, the top plot would be essentially similar, with Criterion A satisfied or violated in the same regions, but the middle plot would be qualitatively different: the a_1, a_2 terms are much larger, roughly by factors of 4 and 2, respectively. This means that Criterion B is slightly violated in this case, but it is restored at times t T≃ 4.55. This is expected, as larger values of {a_1,a_2} make the a_1, a_2 terms trivially larger. Actually, the values {a_1,a_2}={10,10} could be considered large, as they are far from saturating the inequalities (<ref>); for this reason we do not consider this a counterexample to our main conclusion.

We proceed now with another simulation with initial data (<ref>) and a smaller width, σT≃0.73, so that the system now has larger gradients and is slightly outside the regime of hydrodynamics. We plot the result in Fig. <ref> (top, right) for the T_tt component of the stress tensor. In Fig.
<ref> (bottom, right) we plot the different elements of the T_xx component of the stress tensor in solid lines. This case shares an important aspect with the QGP created in heavy-ion collisions, which is that initially both systems are marginally in the regime of hydrodynamics. Therefore, even though the two systems are very different, one may hope that the conclusions obtained here might be relevant for the implementation of the first order viscous hydrodynamic equations to describe the QGP in experiments. The system violates Criterion A up to times t T≃ 12.3, when it is restored; this is the hydrodynamization time. The initial Gaussian explodes into two waves that travel in opposite directions, as we observe in Fig. <ref> (top, right). Initially, each of these waves has large gradients, which are dissipated as they travel, resulting in smoother profiles; at times t T≃ 12.3 they are smooth enough so that Criterion A is satisfied (in the plots we show times only up to t T≃ 6.84, but we have run the simulations for longer). The relevant plots are Fig. <ref> (bottom, right) and Fig. <ref> (top, right). Criterion B is slightly violated at initial times; it is restored around times t T≃ 2.1 and satisfied from this time on. The relevant plots are Fig. <ref> (bottom, right) and Fig. <ref> (middle, right). In order to evaluate Criterion C we perform another evolution in frame {a_1,a_2}={10,10} with the same initial data, and also an evolution using the ideal hydrodynamics equations. We plot the difference T^visc1_xx-T^visc2_xx in absolute value in solid green in Fig. <ref> (bottom, right), and the difference T^visc1_xx-T^ideal_xx in solid grey. We also plot these quantities in the whole spacetime in Fig. <ref> (bottom, right). We find that Criterion C is satisfied at all times. Thus, this is another example of a system that is marginally in the regime of hydrodynamics (actually slightly away from it according to our Criterion A) and yet Criterion C is satisfied, which indicates the robustness of the frame independence of the first order physics. We have checked that Criterion C continues to be satisfied for values of σ all the way down to σT≃0.24.

The main conclusions obtained from the analysis above are the following. We observe that if Criterion A is satisfied, then Criteria B and C are satisfied. We find that even if Criterion A is slightly violated (and B is also violated), Criterion C is still satisfied. This suggests that the frame independence of the first order physics is robust and applies slightly beyond the hydrodynamics regime.

§.§.§ Changing frame in the initial data

We now explore the effect of changing frame in the initial data. To do so, we perform a similar exercise as in Fig. <ref>; we start from the evolutions in the frame {a_1,a_2}={5,5} presented in Fig. <ref> and use the solution at a timeslice t>0 to obtain initial data for the frame {a_1,a_2}={10,10} via (<ref>). In Fig. <ref> (left, dashed lines) we present the same evolution as in Fig. <ref> (left) but now performed in frame {a_1,a_2}={10,10}, initialized with data from the frame {a_1,a_2}={5,5} at tT≃ 1.71. In Fig. <ref> (right) we present the same exercise for the case σT≃ 0.73. The conclusions are similar to those in the previous exercise described in Fig. <ref>: the difference T^visc1_xx-T^visc2_xx introduced by changing frame in the initial data, Fig. <ref>, is comparable to the difference introduced by the time evolution, Fig. <ref>.
Criteria A, B and C are satisfied at a similar level as for the evolutions with vanishing initial change of frame effects, Fig. <ref>, with similar times for when each criterion is violated or satisfied. Moreover, for completeness, we repeated both exercises for times tT≃ 3.42, 5.13, obtaining similar conclusions.

§.§ Shockwaves

In this section we study the effects of using different causal frames in shockwave solutions. Shockwaves are discontinuous solutions of the ideal hydrodynamic equations, which are smoothed out once viscosity is included. The notion of a shockwave may sound like a strong disturbance and, even if viscous hydrodynamics provides a continuous profile, one could think that gradients are still large, suggesting that the solution is not in the effective field theory regime. However, this intuition is not correct. Shockwave solutions of viscous hydrodynamics are in the regime of validity of effective field theory as long as their amplitude is small <cit.>.[Small amplitude shockwaves admit analytical solutions <cit.>, in which the width is inversely proportional to the amplitude of the shockwave; thus, if the amplitude is small, the width is large and gradients are small. This picture is confirmed by numerical results, in which we can consider shockwaves of finite amplitude and check up to which amplitude Criterion A is still satisfied. In the examples below we confirm that they satisfy Criterion A up to velocities of the order of v=0.66.]

We start by considering a smoothed out version of the Riemann problem with initial data given by

ϵ|_t=0 = ℰ[ 1 + (Σ-1)/2 ( tanh((x+Δ)/σ̃) - tanh((x-Δ)/σ̃) ) ], ∂_t ϵ|_t=0 = 0, u_x|_t=0 = 0, ∂_t u_x|_t=0 = 0.

Since we impose periodic boundary conditions, we consider initial data that is x↔-x symmetric. In the following plots we only show the region x>0.[We do not impose that the ideal equations are satisfied at t=0 because we will focus on the shockwave formed at later times, so for this purpose it is not relevant whether the ideal equations are initially satisfied or not.] We perform a numerical evolution of the initial data (<ref>) with parameters ℰ=1, Σ=3, ΔT≃ 20.5, σ̃T≃ 0.684, on a spatial domain LT≃ 136.8 and in the frame {a_1,a_2}={5,5}. In this setup the energy ℰ and temperature T are those of the low energy homogeneous region of the initial data. The initial sharp profile evolves into a rarefaction wave that propagates to the left and a shockwave that propagates to the right, with a homogeneous region in between. See Fig. <ref> (top, left) for the profile of the T_xx component of the stress tensor. This is the expected solution in relativistic hydrodynamics <cit.>; studies of the Riemann problem in the case of first-order viscous hydrodynamics were done in <cit.>.

In Fig. <ref> (top, right) we show the profile of the T_xx component of the stress tensor in the shockwave solution at times t T={20.5, 34.2, 47.9, 61.5}. In Fig. <ref> (bottom, left) we show the same profiles shifted to have their mid height at x=0, so that they appear superimposed. This plot shows how the profile of the shockwave evolves in time and asymptotes to a final configuration at late times. Also, the velocity of the shockwave asymptotes to a terminal velocity v_shockwave≃0.66. We can compare the asymptotic solution that we obtain from our numerical evolutions with the solutions of the ODEs that follow from considering the system in the local rest frame of the shockwave and assuming stationarity <cit.>.
Under these conditions, the conservation equation for the stress tensor, ∂_x T^xμ=0, can be easily integrated to obtain T^xt=C_1 and T^xx=C_2, where C_1 and C_2 are two real constants. Solving for {ϵ',v'} gives

ϵ' = -4ϵ√(1-v^2) ( (a_1-4)C_1 + ((3a_2+4a_1-4)C_2 - (4+a_2)ϵ)v - 3(a_1+2a_2)C_1 v^2 + 3a_2(C_2+ϵ)v^3 ) / [ 9a_1 a_2 η (v-c_1)(v-c_2)(v-c_3)(v-c_4) ],

v' = -(1-v^2)^{3/2} ( a_2(ϵ-3C_2) + 3(a_1+2a_2)C_1 v - 3((4a_1+a_2)C_2 + a_2 ϵ)v^2 + 9a_1 C_1 v^3 ) / [ 9a_1 a_2 η (v-c_1)(v-c_2)(v-c_3)(v-c_4) ],

where c_i, i=1,2,3,4, are the characteristic velocities,

c_i = ±√( ( a_1(2+a_2) ± 2√(a_1(a_1+a_1 a_2+a_2^2)) ) / (3 a_1 a_2) ).

Boosting the solutions of these ODEs so that the fluid in front of the shock is at rest, we obtain a profile for the shockwave that can be compared with our asymptotic solutions, dashed black curve in Fig. <ref> (bottom, left). This comparison confirms that the late time asymptotic profile indeed coincides with the solution of the ODEs. The latter are particularly interesting and useful because they are computationally cheaper (and more accurate) to obtain. For this reason, in the rest of this section we will use the solutions to the ODEs to study the effect of using different frames in shockwaves.

The two asymptotic regions in a shockwave solution, as well as its velocity, are related to each other by stress tensor conservation, which gives the Rankine–Hugoniot conditions. Viscous hydrodynamics provides a smooth transition between the two asymptotic states, but the latter and the velocity of the shockwave do not depend on using the viscous theory. Thus, the change of frames will only affect the specific shape of the profile, but not the asymptotic regions nor the velocity of the shock. In the following we study the profile of T_xx in a shockwave solution in a Lorentz frame such that the fluid in front of the shock is at rest; in the local rest frame of the shockwave this analysis would be trivial, as T_xx is a constant there. In Fig. <ref> (bottom, right) we show the different contributions of the gradient expansion to the T_xx component of the stress tensor. By computing the norms, we find that Criterion A is only marginally satisfied, since the ratio of the norms of the second order and the shear terms is 9%. Therefore, according to this criterion, the solution is marginally within the effective field theory regime. In addition, the ratio of the norm of the a_1 term (a_2 term) to the norm of the shear term is 3.9% (7.3%). Hence, Criterion B is also satisfied. In order to study Criterion C we construct a shockwave solution in the frame {a_1,a_2}={10,10} with the same asymptotics, see Fig. <ref> (left) in solid grey for the T_xx component of the stress tensor, together with the solution in frame {a_1,a_2}={5,5} in dashed black. We observe that both profiles are very similar to each other; see the inset for a detailed view of the difference. We note that Criterion C as previously defined is not applicable to shockwave solutions, because in ideal hydrodynamics the shocks are discontinuous.
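For reference, a sketch of how these ODE profiles can be obtained in practice is given below; the normalization of η(ϵ) for the conformal fluid, the sign conventions for the upstream velocity, and the identification of C_1, C_2 with the ideal conformal stress tensor of the upstream state are assumptions of this illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2 = 5.0, 5.0

def char_speeds(a1, a2):
    """Characteristic velocities c_i quoted above."""
    s = np.sqrt(a1 * (a1 + a1 * a2 + a2**2))
    cp = np.sqrt((a1 * (2 + a2) + 2 * s) / (3 * a1 * a2))
    cm = np.sqrt((a1 * (2 + a2) - 2 * s) / (3 * a1 * a2))
    return np.array([cp, -cp, cm, -cm])

# sharply causal frames saturate the bounds: the fastest characteristic
# is exactly the speed of light
assert abs(char_speeds(25 / 4, 25 / 7)[0] - 1.0) < 1e-10

def eta(eps):
    # conformal fluid: eta ~ T^3 ~ eps^(3/4); the overall normalization
    # here is an assumption of this sketch
    return 0.25 * eps**0.75

def rhs(x, y, C1, C2):
    eps, v = y
    den = 9 * a1 * a2 * eta(eps) * np.prod(v - char_speeds(a1, a2))
    num_e = ((a1 - 4) * C1 + ((3 * a2 + 4 * a1 - 4) * C2 - (4 + a2) * eps) * v
             - 3 * (a1 + 2 * a2) * C1 * v**2 + 3 * a2 * (C2 + eps) * v**3)
    num_v = (a2 * (eps - 3 * C2) + 3 * (a1 + 2 * a2) * C1 * v
             - 3 * ((4 * a1 + a2) * C2 + a2 * eps) * v**2 + 9 * a1 * C1 * v**3)
    return [-4 * eps * np.sqrt(1 - v**2) * num_e / den,
            -(1 - v**2)**1.5 * num_v / den]

# C1, C2 from the ideal conformal stress tensor of the upstream state in
# the local rest frame of the shock; this state is a fixed point of the
# ODEs, so we nudge slightly off it (flipping the integration direction or
# the sign of the nudge may be needed to land on the shock profile)
eps0, v0 = 1.0, 0.66
g2 = 1.0 / (1.0 - v0**2)
C1 = (4.0 / 3.0) * eps0 * g2 * v0
C2 = (4.0 / 3.0) * eps0 * g2 * v0**2 + eps0 / 3.0
y0 = [eps0 * (1 + 1e-6), v0 * (1 - 1e-6)]
sol = solve_ivp(rhs, [0.0, 300.0], y0, args=(C1, C2), rtol=1e-10, atol=1e-12)
```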
For shockwave solutions we therefore use the following alternative criterion: we consider that the criterion is satisfied if the maximum difference between the two profiles, divided by the difference between the asymptotic states, is smaller than 5%.[As before, this particular choice of threshold is arbitrary.] In our solution this ratio is 0.48%, and hence this criterion is satisfied. References <cit.> provide evidence that the first-order viscous hydrodynamics equations admit shockwave solutions as long as their velocity profile does not cross any characteristic velocity. We have verified that this is indeed the case for the chosen frames, {a_1,a_2}={5,5}, {10,10}, for which the characteristic velocities are always larger than the upstream velocity. We also verified that for frames with characteristic velocities that cross the velocity profile of the shockwave (in the local rest frame), the real-time evolutions crash and the ODEs do not seem to admit smooth shockwave solutions, in agreement with the results in <cit.>.

Sharply causal frames, that is, frames saturating the inequalities (<ref>), are especially interesting, as they allow one to capture arbitrarily large shocks while respecting causality. In Fig. <ref> (right) we repeat the same exercise as above, but in this case with two frames in the sharply causal region, namely {a_1,a_2}={4,4} and {a_1,a_2}={25/4,25/7}. We find that for these solutions Criteria A, B and C are satisfied. In this case the ratio of the maximum difference to the difference of the asymptotic states is 0.08%; this ratio is smaller than above because the two frames here are `close' to each other. It would be interesting to have a precise notion of the separation between two frames in the space of frames (with, for example, some metric), which could be used in the definition of Criterion C. Otherwise, two solutions can trivially be very close to each other just because the parameters defining the corresponding frames are very close to each other, even if such solutions are not in the effective field theory regime. We leave this for future work.

We now consider stronger shocks. We use the ODEs to obtain shockwave solutions with velocity v_shockwave=0.85 in the frames {a_1,a_2}={4,4} and {a_1,a_2}={25/4,25/7}. In Fig. <ref> (left) we display the corresponding T_xx component of the stress tensor, after performing a boost to a Lorentz frame in which the fluid in front of the shock is at rest and has energy ℰ=1. We find that Criteria A and B are violated whilst Criterion C is satisfied. This indicates that the system is not in the effective field theory regime, and yet the two profiles are still very close to each other, with a ratio of the maximum deviation to the amplitude of the shockwave of 1.4%. We note that the a_1, a_2 terms are not much smaller than the shear term, so in Fig. <ref> (right, dashed black) we plot the full first order term T_xx^(1) of the stress tensor. We conclude that also in shockwave solutions Criterion C is robust, even if the system is exiting the effective field theory regime. On the other hand, for sufficiently large velocities Criterion C is also violated. For example, for solutions with v_shockwave=0.9999 in frames {a_1,a_2}={4,4} and {a_1,a_2}={25/4,25/7} we find that the ratio of the maximum difference to the amplitude of the shockwave is 41%.

§ DISCUSSION

In this paper we studied real-time numerical evolutions of the relativistic first-order viscous hydrodynamics equations for a conformal fluid.
Our main goal was to explore the consequences of using different hydrodynamic frames (or field redefinitions) within the causal set of frames (<ref>) in practical situations. By performing evolutions with different sets of initial data, and by considering specific quantitative criteria, we have made precise, and provided evidence for, the statement that the physics up to first order remains invariant under field redefinitions as long as the system is within the effective field theory regime.

Our studies are motivated by the future implementation of the relativistic first-order viscous hydrodynamics equations to model physical systems of interest, such as the QGP created in heavy-ion collision experiments or astrophysical systems like neutron star mergers. An important step towards this goal is to understand to which extent the physics is independent of the choice of hydrodynamic frame. Providing specific quantitative criteria for this assessment is the main motivation of this paper.

We considered the following sets of initial data. First, we considered a small amplitude sinusoidal perturbation of a homogeneous thermal state. Such a well-controlled system allows us to define the criteria that we then use in more non-trivial settings. We continued by considering large amplitude Gaussian perturbations of a homogeneous thermal state and, finally, shockwave solutions.

We proposed three criteria, denoted by A, B and C, that assess different aspects of our solutions. The detailed definitions of these criteria can be found in Section <ref>, and we summarize them here. They involve measuring the `size' of certain terms in the gradient expansion of the stress tensor in our solutions at a certain time t. To do so we use the L_1 norm computed over the spatial domain and times {t-t̃,t+t̃}, where t̃ is a characteristic timescale of the problem. Here we use the L_1 norm for convenience, but other norms are possible and should give similar results. In practical applications one may need to be quantitative about what `much smaller' means; here we use 10%, but this threshold value is arbitrary and in practice it may depend on the initial conditions and/or the details of the fluid under consideration. In future work we will make this precise in the case of holographic fluids, where we have control over the microscopic theory.

Criterion A. We measure the sizes of the ideal and first order terms in our solutions by evaluating the corresponding expressions in (<ref>). In addition, for each solution, we also measure the size of the second order terms by using the second line of (<ref>). In this way we compare the ideal, first and second order terms in the gradient expansion, and we check whether each term is much smaller than the previous one. This is the standard criterion in truncated effective field theories. Our Criterion A is: the system is in the effective field theory regime at a given time t if, for all components of the stress tensor, the L_1 norm of the shear term is much smaller than the L_1 norm of the ideal term, and in turn the L_1 norm of the second order terms is much smaller than the L_1 norm of the shear term.

Criterion B. We measure the size of the a_1, a_2 terms in (<ref>) and compare them with the size of the shear term. The motivation for this criterion is as follows. The change of frame terms (<ref>) are proportional to the lowest order equation of motion, that is, the equations of ideal hydrodynamics (<ref>).
Thus, by using the equations of first-order viscous hydrodynamics, these terms are, on shell, equivalent to second order terms. Then, if the system is in the effective field theory regime, the frame dependent terms in (<ref>) should be much smaller than the shear term, which is frame independent. With this motivation, we propose the following criterion: the physics to first order is independent of the chosen causal frame at a given time t if, for the spatial components of the stress tensor, the L_1 norms of the a_1 and a_2 terms are much smaller than the L_1 norm of the shear term.

Criterion C. We consider two evolutions in different (and sufficiently separated, at least in parameter space) causal frames, and a third evolution using ideal hydrodynamics; we compare the difference between the two viscous evolutions with the difference between one viscous evolution and the ideal evolution. In this way we compare the effect of changing frame with the effect of including first order viscous terms in the hydrodynamic description. Our specific criterion is: the physics to first order is independent of the chosen hydrodynamic frame at a given time t if the L_1 norm of the difference between the stress tensors obtained from two evolutions performed in different causal frames, T^visc1_μν-T^visc2_μν, is much smaller than the L_1 norm of the difference between the stress tensors obtained with ideal hydrodynamics and one of the viscous evolutions, T^visc_μν-T^ideal_μν.

The main conclusion of this paper is that in all solutions that we have studied, if Criterion A is satisfied, then Criteria B and C are also satisfied; this is a quantitative and precise version of the statement that if the system is in the effective field theory regime, then the physics to first order is independent of the arbitrarily chosen frame. Thus, we confirm that it is consistent to truncate the hydrodynamic gradient expansion to include only the viscous terms. This is a non-trivial result, since each of these three criteria assesses a different aspect of what it means for a solution to be `in the effective field theory regime'. One might have thought that a given solution is well described by viscous hydrodynamics only if the three criteria are simultaneously satisfied. We have shown that this is not the case: Criterion A is sufficient to imply the other two.

From the main conclusion, and from a practical perspective, one could think of the a_1, a_2 terms as mere regulators of the equations (that make them well posed) and work as if we were in the usual Landau frame, as the difference due to the choice of frame is a higher order effect. However, this is only applicable when the system is well within the effective field theory regime, and one must be careful with this statement if it is not clear that the system is in that regime. One possibility is that Criterion B is satisfied and the a_1, a_2 terms are small, but the effects of choosing different frames accumulate over sufficiently long times and eventually become large. That is, can Criterion B be satisfied and C violated? In our solutions we have always observed that if Criterion B is satisfied, then Criterion C is also satisfied. Regarding Criterion B, even if in most cases we have found that when Criterion A is satisfied then Criterion B is also satisfied, we have found some counterexamples; see, for example, the discussion after eq. (<ref>). However, these counterexamples only happen when the values of {a_1,a_2} are unreasonably large.
If we restrict the values of {a_1,a_2} to be close to saturating the inequalities (<ref>), then these counterexamples can be ruled out. Moreover, in these solutions we found that Criterion C is satisfied, even if the a_1, a_2 terms are not small compared to the shear term.

In the actual description of the QGP created in heavy-ion collision experiments, when the hydrodynamic codes are initialized there are situations in which gradients are large. This motivates us to consider solutions in which the system is marginally in the effective field theory regime, that is, Criterion A is almost violated or slightly violated.[See Figs. <ref> (right), <ref> (right), <ref> (right), <ref> (right) and <ref> (top).] We want to test whether the physics to first order is independent of the arbitrarily chosen frame even under these circumstances. A generic conclusion from our studies is that even if the system is not well within the regime of hydrodynamics, that is, Criterion A is slightly violated, the change of frame still does not significantly affect the viscous evolutions compared to the ideal evolution, that is, Criterion C is still satisfied; we also observe that Criterion B is slightly violated. Therefore, we conclude that the physics to first order is robust against changes of frame according to Criterion C, even in situations where the system is on the boundary of the regime of applicability of effective field theory. We emphasize that it is convenient in practice to choose hydrodynamic frames that saturate the inequalities (<ref>), because this allows one to evolve arbitrarily strong shockwaves. Moreover, this choice also guarantees that the field redefinition terms are as small as possible.

In the hydrodynamic studies of the QGP, the initial data for the stress tensor is obtained by some prehydrodynamic procedure. If one wants to implement the relativistic Navier-Stokes equations in this context, one needs to address the question of constructing initial data in a certain causal frame. One way to proceed is to diagonalize a given stress tensor and find the data for the evolution variables in the Landau frame; then, use the prescription of effective field theory (<ref>) to find the corresponding quantities in the causal frame. Therefore, one needs to address the effects of changing frames in the initial data. To study these effects, in this paper we have used solutions obtained in a certain frame over a time interval to construct initial data at some t>0 in a new frame using (<ref>), see Figs. <ref> and <ref>. The lesson that we have learned from this exercise is that the change of frame in the initial data has a small effect on the subsequent evolution. That is, the difference between the two evolutions in different frames due to the change of frame in the initial data is comparable to the difference introduced upon time evolution in different frames when using similar initial data that satisfies the ideal equations. In these constructions, the equations of ideal hydrodynamics are nearly satisfied, which implies that the effects of changing frames in the initial data are necessarily small. We also studied a situation in which the ideal hydrodynamics equations are initially far from being satisfied, see Fig. <ref>; this situation arises, for example, when the time derivatives in the initial data are set to zero. We found that this introduces large differences in the initial stress tensor under frame changes, and this difference is sustained upon time evolution.
To be precise, this difference is large compared to the difference generated solely upon time evolution with initial data that satisfies the ideal equations. This can be simply understood because having large error terms in the ideal equations implies that initial data in different frames is significantly different; then Cauchy stability implies that the subsequent evolutions in different frames will also differ significantly. In practical applications this situation should not arise, since one wants to initialize a hydrodynamic code to describe a system that is in the regime of hydrodynamics, and hence the ideal equations should be nearly satisfied.

We now comment on future directions. We have limited our studies to conformal fluids as a natural starting point to perform a detailed analysis of the consequences of using different causal frames. However, physical systems of interest like neutron star mergers or the QGP created in heavy-ion collisions are not conformal, and it would be interesting to extend our analysis to the non-conformal case. Dynamical evolutions of the relativistic first-order viscous hydrodynamics equations in the non-conformal case have been studied in <cit.>.

An interesting direction for studies of relativistic hydrodynamics is to perform comparisons with microscopic real-time evolutions in quantum field theories. Such comparisons allow one to explore quantitatively under which circumstances the effective field theory provides a good description of the microscopic theory. In particular, one would be able to estimate the acceptable threshold values for the different criteria that we have proposed. However, obtaining real time evolutions in interacting quantum field theories is in general a very difficult task. In this context, holography turns out to be a very useful tool. We can think of a specific holographic theory as a toy model, specific to certain quantum field theories that are very different from QCD in many aspects, but useful nevertheless to obtain information about the real time dynamics in a controlled way; such a computation is not feasible in QCD. The qualitative conclusions obtained from this analysis, even if obtained with holography, might apply more generally, and in particular to finite temperature QCD. We initiated this line of research in <cit.> and we will extend these studies in the future.

This paper is part of a long term program that aims to contribute to the implementation of the relativistic Navier-Stokes equations in studies of physical systems of interest, such as the QGP created in heavy-ion collisions or astrophysical systems.

§ ACKNOWLEDGMENTS

We thank Jorge Casalderrey-Solana, David Mateos and Alexandre Serantes for very useful discussions. Y.B. is supported by a Maria Zambrano postdoctoral fellowship from the University of Barcelona, by the “Unit of Excellence MdM 2020-2023” award to the Institute of Cosmos Sciences (CEX2019-000918-M), and by grants PID2019-105614GB-C22, 2021-SGR-872 and PID2022-136224NB-C22, funded by MCIN/AEI/10.13039/501100011033/FEDER, UE. P.F. was supported by a Royal Society University Research Fellowship (Grants No. UF140319 and URF\R\201026) and is currently supported by the STFC Consolidated Grant ST/X000931/1.

§ CONVERGENCE TESTS

Our code uses finite difference stencils of order 6 for the spatial derivatives and a fifth order explicit Runge-Kutta-Nyström Generalized (RKNG) time integration method for the equations written in second order (both in time and space) form. See <cit.> for more details.
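For concreteness, the sixth-order centered stencils on a periodic grid take the following standard form (shown only as an illustration of the scheme just described; the actual implementation details are in the reference above):

```python
import numpy as np

def dx1_order6(f, dx):
    """Sixth-order centered first derivative, periodic boundary conditions."""
    c = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / (60.0 * dx)
    return sum(ci * np.roll(f, 3 - k) for k, ci in enumerate(c))

def dx2_order6(f, dx):
    """Sixth-order centered second derivative, periodic boundary conditions."""
    c = np.array([2.0, -27.0, 270.0, -490.0, 270.0, -27.0, 2.0]) / (180.0 * dx**2)
    return sum(ci * np.roll(f, 3 - k) for k, ci in enumerate(c))
```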
In <cit.> we presented a convergence test showing a convergence factor of 4 or even higher. In this paper we use a 1+1 version of that code, and we present convergence tests for some representative runs of this paper. For the runs presented here we typically use N_x=2048 points along the spatial dimension and a Courant factor of 0.1. For a set of representative runs, we perform a convergence test using three different resolutions, N_x/4, N_x/2 and N_x. In Fig. <ref> we show convergence tests for the runs presented in Figs. <ref> and <ref>, and in Fig. <ref> convergence tests for the runs presented in Figs. <ref> and <ref>. In some cases there is an initial transient, but after it, in all cases, the convergence factor is between 4 and 5, consistent with our method.
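The convergence factor quoted above can be extracted from three resolutions as in the following sketch, which assumes nested uniform grids so that restriction amounts to strided sampling:

```python
import numpy as np

def convergence_order(u_coarse, u_medium, u_fine):
    """log2 of the ratio of successive differences: ~p for an order-p scheme."""
    u_m = u_medium[::2]   # restrict medium (N_x/2) to the coarse (N_x/4) grid
    u_f = u_fine[::4]     # restrict fine (N_x) to the coarse grid
    Q = np.sum(np.abs(u_coarse - u_m)) / np.sum(np.abs(u_m - u_f))
    return np.log2(Q)
```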
In this letter, we prove the existence of resummed expressions for the diagonal of the heat kernel and the effective action of a quantum field which interacts with a scalar or an electromagnetic background. Working in an arbitrary number of spacetime dimensions, we propose an Ansatz beyond the Schwinger–DeWitt proposal, effectively resumming an infinite number of invariants which can be constructed from powers of the background, as well as its first and second derivatives in the Yukawa case. This provides a proof of the recent conjecture that all terms containing the invariants F_μνF^μν and F_μνF̃^μν in the proper-time series expansion of the SQED effective action can be resummed. Possible generalizations and several applications are also discussed, in particular the existence of an analogue of the Schwinger effect for Yukawa couplings.

§ INTRODUCTION

Understanding nonperturbative phenomena stands out as a paramount challenge in contemporary high-energy physics. While the perturbative approach to quantum field theory has become a standard tool, efficient nonperturbative techniques are still under active development. Systematic theoretical methods such as the functional renormalization group approach <cit.>, the Dyson–Schwinger equations <cit.> or simply lattice numerical techniques <cit.> are currently employed to gain valuable insights into the intricate realm of the quantum world. Investigating a single quantum field interacting with background classical fields appears, perhaps, as a relatively simpler problem. However, even in this semiclassical case, the available toolkit is relatively limited, often resorting to methods like the WKB approach <cit.>, the computation of instantons <cit.>, etc.

In this semiclassical scenario, the classical fields are assumed fixed by sources that can be experimentally tuned at will. Notably, certain regimes provide an ideal testbed to probe quantum effects, as seen, for instance, in the context of QED in the presence of intense background fields <cit.>, or in settings involving static <cit.> and dynamical Casimir effects <cit.>. Another significant example is the case of gravitational background fields, with applications to cosmology, astrophysics and black hole evaporation <cit.>.

Within the framework of single quantum fields interacting with classical backgrounds, we will explore a scenario involving a quantum, complex scalar field ϕ minimally coupled to an electromagnetic field 𝒜_μ and interacting with a scalar field H through a Yukawa term,

S := 1/2 ∫ d^dx [ |(∇-𝒜(x))ϕ(x)|^2 + H^2(x)|ϕ(x)|^2 ].

Here, X^2 = X_μ X^μ and we assume 𝒜 to be anti-selfadjoint. While the fields H and 𝒜_μ could in principle be quantized, our primary focus is on the semiclassical theory. Therefore we replace them with classical fields √(V) and A_μ (their vacuum expectation values), assuming only as much regularity as needed. A few further comments are in order. First, V and A_μ are not going to be considered on-shell, and Bianchi identities will be employed only to compare our results with the existing literature. Additionally, V may include a mass term for the quantum field ϕ (or, looking from the opposite perspective, if there were a mass term, it could be absorbed into V).
Finally, it is worth stressing that we will work in flat space, leaving potential generalizations to curved space for future discussions.

In addressing the current issue, standard quantization procedures can be applied. This leads us to the formulation of the one-loop effective action Γ, which can be straightforwardly connected to the quantum fluctuations operator 𝒬,

Γ = LogDet[𝒬], 𝒬 := -(∇-A)^2 + V(x).

Using the Schwinger–DeWitt (SDW) techniques <cit.>, we can recast the effective action in terms of the trace of the related heat kernel operator[This is an extension of Frullani's representation, which is valid for the logarithm of a quotient <cit.>. To be more precise, one could take the quotient with the quantum fluctuations operator corresponding to the background-free case.] K; in a d-dimensional Euclidean space we obtain:

Γ = -∫_0^∞ dτ/τ ∫ d^dx K(x,x;τ), K(x,x';τ) = e^{-τ𝒬}(x,x').

These expressions enable us to employ the full range of mathematical tools available in the study of the heat kernel and derive the consequences at the level of the effective action. Our attention is particularly directed towards situations where the standard proper-time expansion of the heat kernel in terms of the Gilkey–Seeley–DeWitt (GSDW) coefficients turns out to be inadequate, so that one is forced to resort to a resummed version, as a first approach to analyze various nonperturbative aspects of complicated quantum field theories.

A few results have already been established in this direction. For instance, in the realm of first quantization, the heat kernel can be understood as a propagator in imaginary time, for which a resummation in the potential holds <cit.>. In quantum field theory, on the one hand, the covariant perturbation theory outlined in <cit.> is applicable in cases involving rapidly oscillating curvatures or potentials and allows one, for example, to obtain beta functions in a momentum-like scheme <cit.>. On the other hand, the resummation of the Ricci scalar conjectured in Ref. <cit.> (and later proved in <cit.>) has been used to establish several properties of fermionic condensates in curved space <cit.>. Similar resummation techniques have also been employed in the investigation of Casimir self-interactions under spacetime-dependent boundary conditions <cit.> and, more generally, for background fields that are simple enough to yield a closed expression either for the effective action <cit.> or for the corresponding heat kernel <cit.>. More recently, it has been conjectured that for large field strengths, the fundamental invariants ℱ=F_μνF^μν and 𝒢=F_μνF̃^μν (with F̃^μν the dual field strength) appearing in the proper-time expansion of the (S)QED one-loop effective action can be resummed <cit.>, resulting in a local version of the Euler–Heisenberg prefactor in the effective Lagrangian, originally derived in Refs. <cit.>.

The first main result in this letter is the statement and the proof of a generalized conjecture for Yukawa interactions, working in arbitrary dimensions and at the local level of the heat kernel. We will be able to perform a resummation of terms involving powers of the potential, together with its first and second derivatives. Our results for the Yukawa interaction will then provide a shortcut to our second important achievement, the proof of the generalized conjecture for SQED, again valid in arbitrary dimensions and for the diagonal of the heat kernel.
Finally, the previous results are discussed, and several applications and possible future lines of study are also mentioned. Regarding notation, we will use Planck units, for which ħ=1=c.

§ RESUMMATION OF THE HEAT KERNEL FOR A YUKAWA BACKGROUND

For a Yukawa coupling, the heat kernel K(x,x';τ) defined in Eq. (<ref>) satisfies the heat equation

[∂_τ - ∇_x^2 + V(x)] K_Y(x,x';τ) = 0,

together with the following initial condition in the propertime τ:

K_Y(x,x';0^+) = δ(x-x').

These equations are of course analogous to those satisfied by a Schrödinger propagator in imaginary time, and the results we are going to obtain could thus also be employed in first quantization. Nevertheless, we will keep in mind the picture that V in Eq. (<ref>) is to be regarded as the background value of the field H, which interacts with ϕ through the Yukawa coupling. Note that we are employing a covariant notation even if we will be working in flat space.

As already stated above, we are interested in the resummation of contributions involving V, as well as those containing first and second derivatives of it, in a precise way that we are going to state shortly. First, let us recall some well-known facts. In the standard approach, when the potential includes a mass term,

V(x) = M^2 + φ^2(x),

the heat kernel can be written as

K(x,x';τ) = 1/(4π)^{d/2} e^{-τ M^2 - σ(x,x')/2τ} Ω(x,x';τ),

where σ(x,x') = (x-x')^2/2 denotes Synge's function <cit.>. The function Ω_Y will in general depend on the background field φ(x) and its derivatives. As noted some time ago in Refs. <cit.>, the powers of the background field can be resummed by replacing M^2 with V(x) in the exponent of the above equation[The replacement of M^2 by V(x'), or by (V(x)+V(x'))/2, also works.]. After this replacement, Ω_Y will have a small propertime expansion with coefficients whose coincidence limits depend only on derivatives of the potential. Incidentally, the resummation of the Ricci scalar mentioned before can be achieved, in the case of spin-0 fields in curved spacetimes, by the replacement M^2 → M^2 + (ξ-1/6)R, see Refs. <cit.>.

The resummation of terms containing first and second derivatives of the potential is a more complex task. It is natural to use as inspiration the simplest possible case for which these contributions do not vanish, i.e. the case of a harmonic potential. Considering the results in Ref. <cit.>, we are led to the Ansatz

K_Y(x,x';τ) := 1/(4π)^{d/2} · e^{-τ V(x') - (1/4) σ̃^μ A^{-1}_μν(x';τ) σ̃^ν - C(x';τ)} / det^{1/2}(τ^{-1} A(x';τ)) · Ω_Y(x,x';τ),

where we have defined the following quantities:

σ̃_μ(x,x') := ∇_μσ(x,x') + B_μ(x';τ),
A_μν(x;τ) := [ γ^{-1} tanh(γτ) ]_μν,
B_μ(x;τ) := 2∇^νV [ γ^{-2}( 1 - sech(γτ) ) ]_νμ,
C(x;τ) := ∇^μV [ -τγ^{-2} + γ^{-3} tanh(γτ) ]_μν ∇^νV + (1/2)[ log(cosh(γτ)) ]^μ_μ,
γ^2_μν := 2∇_μνV.

As a short-hand notation, we write ∇^{α_1⋯α_n} X := ∇^{α_1}⋯∇^{α_n} X, which is symmetric in the present case since we are working in flat space. Furthermore, we will denote quantities evaluated at x' with a prime; for example, (γ')^2_μν := 2∇_μνV(x'). Finally, to avoid ambiguities, any time it is not clear from the context, we will explicitly provide the point (x or x') at which the quantities are evaluated.

Several comments are in order with respect to Eq. (<ref>). On the one hand, when the third- and higher-order derivatives of the potential vanish, Ω_Y(x,x';τ) = τ^{-d/2} gives the exact solution for the heat kernel <cit.>.
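This exactness is easy to check numerically in one dimension; in the following sketch we compare the Ansatz (with Ω_Y = τ^{-1/2}) against the diagonal of a matrix exponential for a quadratic-plus-linear potential. The grid parameters are illustrative, and the agreement is limited by the second-order discretization of the Laplacian and by the Dirichlet walls of the finite box:

```python
import numpy as np
from scipy.linalg import expm

L, N = 24.0, 480
x = np.linspace(-L / 2, L / 2, N); dx = x[1] - x[0]
m2, w, g = 1.0, 0.8, 0.3
V = m2 + w**2 * x**2 + g * x
Vp = 2 * w**2 * x + g
gam = 2.0 * w                      # gamma^2 = 2 d^2V/dx^2 = 4 w^2

# numerical heat kernel: diagonal of expm(-tau H) on the grid
lap = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap + np.diag(V)
tau = 0.7
diag_num = np.diag(expm(-tau * H)) / dx

# resummed Ansatz at coincidence (sigma-tilde reduces to B there)
A = np.tanh(gam * tau) / gam
B = 2 * Vp * (1 - 1 / np.cosh(gam * tau)) / gam**2
C = Vp**2 * (-tau / gam**2 + np.tanh(gam * tau) / gam**3) \
    + 0.5 * np.log(np.cosh(gam * tau))
diag_ans = np.exp(-tau * V - B**2 / (4 * A) - C) / np.sqrt(4 * np.pi * A)

interior = slice(N // 4, 3 * N // 4)    # stay away from the box walls
err = np.max(np.abs(diag_num[interior] / diag_ans[interior] - 1))
print(f"max relative deviation in the interior: {err:.2e}")
```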
On the other hand, the Ansatz clearly goes beyond the SDW proposal: one immediately realises that the exponential is already resumming contributions that would involve an infinite number of terms in the usual GSDW coefficients, e.g. the e^{-τV'} term. More importantly, its intrinsic structure is essentially different from the usual e^{-σ/2τ} prefactor, which is independent of the background potential and whose dependence on τ is rather trivial. This will prove crucial in our proof below, and it shows that our resummation is not just an exponentiation of GSDW coefficients, as considered in <cit.>.

Replacing the Ansatz (<ref>) in the heat kernel equation, we get

{ -∇^2 + ( γ' coth(γ'τ) - 1/τ )_αβ ∇^ασ(x,x') ∇^β + 2∇^βV(x') ( tanh(γ'τ/2)/γ' )_βα ∇^α + 𝔖(x,x';τ) } Ω_Y(x,x';τ) = { -d/2τ - ∂_τ - (1/τ) ∇^ασ(x,x') ∇_α } Ω_Y(x,x';τ),

where for powers of matrices we are using the notation (γ')^2_αβ := γ'_αμ γ'_νβ δ^μν, which is understood to be extended to arbitrary functions through a series expansion; we have also defined the effective potential

𝔖(x,x';τ) := V(x) - V(x') - ∇^ασ(x,x') ∇_αV(x') - (1/4) ∇^ασ(x,x') ∇^βσ(x,x') (γ')^2_αβ.

Not so unexpectedly, expanding Eq. (<ref>) in τ one obtains only even powers of γ, which enforces the GSDW coefficients to depend only on natural powers of the derivatives of the potential. At this point it is opportune to state our claim more precisely: we will prove that, after expanding Ω_Y in powers of the propertime,

Ω_Y =: ∑_{j=0}^∞ a_j(x,x') τ^{j-d/2},

none of the coincidence limits

[a_j] := [a_j(x,x')] := lim_{x'→x} a_j(x,x'), j ≥ 0,

will depend on the geometric invariants contained in the set

𝒦_Y = { V, δ^αβ γ^j_αβ, ∇^αV γ^j_αβ ∇^βV, j ≥ 0 }.

To simplify the discussion, we will refer to the elements of 𝒦_Y as chains. Of course, our assertion does not preclude the appearance of the first or second derivatives of the potential in an invariant in which a higher derivative is present.

The proof goes as follows. First, using Eq. (<ref>), we expand the expression (<ref>) in powers of the propertime τ. Equating the coefficients with equal powers of τ, we obtain a relation between a_{j+1} and the (derivatives of the) previous coefficients,

-( j+1 + ∇_ασ∇^α ) a_{j+1}(x,x') = ( -∇^2 + 𝔖 ) a_j(x,x') + ∑_{n=1}^{⌊ j/2 ⌋} B_{2n}/(2n)! ( 4(2^{2n}-1) ∇^αV' (γ')^{2(n-1)}_αβ + 2^{2n} ∇^ασ (γ')^{2n}_αβ ) ∇^β a_{j+1-2n}(x,x'),

where B_n is the nth Bernoulli number and ⌊·⌋ is the floor function. The expression (<ref>) enables one to recursively compute the coincidence limits of the heat kernel coefficients a_j, using the fact that the initial condition (<ref>) implies a_0(x,x') = 1. In performing the calculation, one also needs the derivatives of the previous coefficients, for which a recursive formula can be obtained by differentiating Eq. (<ref>). In this way, an evidently well-defined ordering is created, which starts as [a_0], [a_1], [∇_μ a_1], [∇_μν a_1], [a_2], ⋯. It is worth noting that the calculation proceeds in a way analogous to the usual SDW approach; the difference resides in the appearance of the 𝔖 term and of the second line in Eq. (<ref>), involving derivatives of possibly all the previous coefficients.

Coming back to the proof, after taking the coincidence limit in Eq. (<ref>) one notices that none of the chains appears explicitly in the recurrence. Moreover, one can also take derivatives of Eq. (<ref>) and express ∇^{α_1⋯α_n} a_j in terms of derivatives of the previous coefficients; after taking the coincidence limit, no chain will explicitly appear.
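The key property of 𝔖 that drives the argument below (the vanishing of its coincidence limit and of those of its first two derivatives) can be verified symbolically; a one-dimensional check might read:

```python
import sympy as sp

x, xp = sp.symbols('x xp')
V = sp.Function('V')
dsigma = x - xp                    # grad_x sigma for sigma = (x - x')^2/2
S = (V(x) - V(xp) - dsigma * sp.diff(V(xp), xp)
     - sp.Rational(1, 4) * dsigma**2 * 2 * sp.diff(V(xp), xp, 2))
for n in range(4):
    print(n, sp.simplify(sp.diff(S, x, n).subs(x, xp)))
# n = 0, 1, 2 give 0; n = 3 gives V'''(x'), a genuinely higher derivative
```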
Indeed, the only possible source of these invariants is the contributions produced by 𝔖; the structure (<ref>) of 𝔖, however, is such that not only its coincidence limit but also those of its first and second derivatives vanish,

[𝔖] = [∇_μ𝔖] = [∇_μν𝔖] = 0.

Only third and higher-order derivatives of 𝔖 have nontrivial coincidence limits, containing at most third or higher-order derivatives of V. This is a crucial advantage of the Ansatz (<ref>) in comparison with the usual SDW expansion.

Still, chains could arise implicitly, generated by the structure of the (derivatives of the) previous coefficients. By inspection of Eq. (<ref>), we see that this is possible only if derivatives of previous coefficients, or contractions thereof, contain “half-chains” of the invariants in 𝒦_Y, to wit γ^{2l}_αβ or γ^{2l}_αβ∇^βV with l = 0,1,⋯. By induction, taking as ordering the one dictated by the recursion in Eq. (<ref>), one can prove that this is not going to be the case. In effect, for the first element in the order, i.e. [a_0], this is true. Afterwards, taking the derivatives ∇^{α_1⋯α_n} on both sides of Eq. (<ref>) and the coincidence limit, on the LHS we will obtain the quantity to be computed at this order, namely ∇^{α_1⋯α_n} a_{j+1}, times a numerical factor. In the first term of the RHS, we will obtain ∇^μ_μ{}^{α_1⋯α_n} a_j, which is a previous element in the inductive order and therefore, upon contraction, cannot contribute a half-chain because of the inductive assumption. The other contribution in the first line, i.e. the term proportional to 𝔖, will not contribute because of the inductive assumption and the comments in the previous paragraphs. After derivatives and the coincidence limit are taken, the second line in Eq. (<ref>) will contain contributions which are of the schematic form (γ^{2m})^{α_1}_β ∇^{βα_2⋯α_n} a_l for some m,l,n ≥ 0. If these were to contain a half-chain, then the half-chain could either involve the (γ^{2m})^{α_1}_β prefactor or not. If it does not, then the half-chain is completely contained in ∇^{βα_2⋯α_n} a_l, which conflicts with the inductive assumption. If it does, then we again have a contradiction, since to form a half-chain from (γ^{2m})^{α_1}_β we necessarily need another half-chain. This finishes the perturbative proof of the resummation for the Yukawa case.

The validity of our Ansatz in Eq. (<ref>) can be checked by computing the first coefficients a_j; after expanding the whole expression (<ref>) in the propertime and taking the coincidence limit, one should obtain the usual GSDW coefficients. Our results are in agreement with Ref. <cit.> up to the third-order coefficients (the highest order it provides). Using the results in Ref. <cit.>, we can compute the coefficients up to the fifth order, which are also in agreement with our resummed expressions (see the comments in <cit.>). Note that no integration by parts is used in these comparisons.

§ RESUMMATION OF THE HEAT KERNEL FOR AN ELECTROMAGNETIC BACKGROUND

Consider now the heat kernel of a Laplace-type operator as in Eq. (<ref>), in this case involving just an electromagnetic field. Upon expansion of the covariant derivative, the heat kernel equation can be recast in this case as

[∂_τ - ∇^2 + 2A_μ(x)∇^μ + V_EM(x)] K_EM(x,x';τ) = 0,

where the effective scalar potential is given by

V_EM(x) := ∇_μA^μ - A_μA^μ.

We choose the Fock–Schwinger gauge for the electromagnetic potential, i.e.

(x-x')^μ A_μ(x) = 0,

which will prove convenient for the following reasons. Using the results in Ref.
<cit.>, we can show that the Fock–Schwinger condition is equivalent to the following series expansion of the electromagnetic potential around the point x': A_μ(x'+x) = ∑_n=0^∞ 1/(n!(n+2)) x^ρ x^μ_1⋯x^μ_n ∇_μ_1⋯μ_n F_ρμ(x'). This formula provides an expansion of the potential in terms of the field strength; in particular, the derivatives of A_μ in the coincidence limit take the simple form [A^ν_;μ_1⋯μ_n(x)] = n/(n+1) F_μ_1^ν_;μ_2⋯μ_n(x). Let us state our claim for this case: we will show that there exists a resummation such that no geometric invariant contained in 𝒦_EM = { (F^n)^μ_μ, n ≥ 0 } appears in the coefficients [a_j] of the heat kernel. Since γ^2_μν = (F^2)_μν/4 + (derivatives of F), by resumming the contributions of V_EM to the heat kernel [including up to its second derivatives, as done in Eq. (<ref>)], we will be resumming the desired contributions. In effect, although contributions could arise from the term in Eq. (<ref>) which is linear in A_μ, we will now show that this is not the case. Consider thus the Ansatz in Eq. (<ref>), substituting V with V_EM. Inserting this Ansatz in Eq. (<ref>), we obtain an expression which contains all the terms in Eq. (<ref>); in addition, it also possesses the following contributions, which come from the term in Eq. (<ref>) proportional to A_μ (we are neglecting the unnecessary overall factors): 2A_μ(x)∇^μ K_EM(x,x';τ) ∼ { 2A^α(x)∇_α - A^α(x) (γ' coth(γ' τ))_αβ ∇^βσ(x,x') - 2A^α(x) (γ'^-1 tanh(γ' τ/2))_αβ ∇^β V_EM(x') } Ω_EM(x,x';τ). This expression can be analyzed term by term, as has been done for the Yukawa potential. Beginning with the third term on the RHS of Eq. (<ref>), a simple computation shows that, because of ∇^β V_EM, it contributes an invariant containing derivatives of the field strength, no matter whether we consider Eq. (<ref>) or its derivatives. To deal with the second term on the RHS of Eq. (<ref>), it is enough to consider just the contributions in γ_αβ which contain no derivatives of the field strength; taking an arbitrary number of derivatives we find[The subindex (n) labels to which term in Eq. (<ref>) we are referring.] ∇^α_1⋯α_n ( A_μ(x)∇^μ K_EM )_(2) → F^βα_1 (F^2n)_β^α_2 ∇^α_3⋯α_n Ω_EM, n = 0,1,⋯. A chain could arise explicitly by contracting with δ_α_1α_2. However, this term is antisymmetric in (α_1,α_2), since it contains an odd power of the antisymmetric tensor F; because the derivatives are symmetric in the same indices, it must vanish. If instead the chain arises implicitly from ∇^α_3⋯α_n Ω_EM, then it would be in contradiction with the inductive assumption. The last possible source of chains is the first term on the RHS of Eq. (<ref>). In this case we appeal once more to a proof by induction. Following the ideas in the previous section, assume that half-chains of the form (F^n)_αβ do not appear in contractions of the derivatives of the coefficients up to the point at which we compute (derivatives of) the coefficient a_j. Then, (the derivatives of) the first term on the RHS of Eq.
(<ref>) could contribute only in the form ∇^α_1⋯α_n ( A_μ(x)∇^μ K_EM )_(1) → F_α^α_1 ∇^αα_2⋯α_n a_j-1. But then the only way to form a half-chain upon contractions is that it is already present in contractions of ∇^αα_2⋯α_n a_j-1 itself, which is impossible by assumption. Adding the fact that also in the electromagnetic case the initial condition (<ref>) forces a_0(x,x') = 1, the proof is complete. We have checked the validity of our results by computing the first heat-kernel coefficients. Without employing integration by parts but using the Bianchi identity, our results agree with Ref. <cit.> up to third order in the propertime, which is the highest order it provides. We can also compute the standard GSDW coefficients using the standard methods described in Ref. <cit.>; up to the fifth order, and taking into account the comments in Ref. <cit.>, they agree with our results. Alternatively, one can use integration by parts to successfully compare with Refs. <cit.>, again up to and including the fifth-order coefficient. To successfully compare with Ref. <cit.>, it is important to note that the basis of invariants that they consider is overcomplete; indeed, using the Bianchi identities we can prove that two of them are related to the remaining ones: F^κλ F^μν ∇_ρF_λν ∇^ρF_κμ = 2 F^κλ F^μν ∇_νF_λρ ∇^ρF_κμ, F^κλ F^μν ∇_νF_λρ ∇^ρF_κμ = -F^κλ F^μν ∇_λF_νρ ∇_μF_κ^ρ + F^κλ F^μν ∇_μF_κ^ρ ∇_νF_λρ. Even after taking these equalities (or integration by parts) into account, our result still does not agree with Ref. <cit.>. As far as we can see, the fact that they employ a derivative expansion should not be an obstacle, and our results should coincide in the appropriate range of validity. Given our agreement with the other references, it seems probable that a numerical error occurred in some intermediate step of Ref. <cit.>, or that a subtlety in the application of the Worldline Formalism has passed unnoticed.

§ DISCUSSION AND CONCLUSIONS

Summarizing, we have proved that the diagonal of the heat kernel, both for a Yukawa and for an electromagnetic background, satisfies the resummation formula K(x,x;τ) = 1/(4π)^d/2 × e^{-τ V + ∇^αV [γ^-3(γτ - 2 tanh(γτ/2))]_αβ ∇^βV} / det^1/2((γτ)^-1 sinh(γτ)) × Ω(x,x;τ), where the explicit forms of V, γ and Ω depend on the case under consideration. The prefactor in Eq. (<ref>) should be understood as a resummation of an infinite number of invariants. In the Yukawa case, it comprises all the invariants that can be built using solely the potential and its first or second derivatives, cf. Eq. (<ref>). If there exist physical reasons to dismiss second derivatives, one can take the smooth limit of small γ in Eq. (<ref>) to obtain K(x,x;τ) = 1/(4π)^d/2 e^{-τ V + τ^3/12 ∇^αV∇_αV} Ω'(x,x;τ), where now Ω'(x,x;τ) will depend in general on all the invariants built from ∇_μνV. In this case, the form of the prefactor could have been guessed from the expression of a_3 in the simpler SDW expansion. In the electromagnetic case, all the chains (F^n)^μ_μ, n = 1,2,⋯ have been resummed. An immediate consequence is the proof of the conjecture of Ref. <cit.>, since all the invariants in Ω contain at least one derivative of the field strength, while ℱ and 𝒢 contain none. In four dimensions, one can reobtain the usual Euler–Heisenberg Lagrangian using the fact that invariants built only from the field strength can be expressed in terms of just ℱ and 𝒢 (see Ref. <cit.>).
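As a small consistency check of the electromagnetic ingredients entering these formulas, one can verify symbolically that the leading (n = 0) term of the Fock–Schwinger expansion, A_μ = x^ρ F_ρμ/2 with constant field strength, satisfies the gauge condition (here with x' = 0) and reproduces the n = 1 coincidence limit [∂_μ A_ν] = F_μν/2. A minimal sympy sketch in two Euclidean dimensions:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
x = sp.Matrix([x0, x1])
B = sp.symbols('B')
F = sp.Matrix([[0, B], [-B, 0]])   # constant antisymmetric field strength

# n = 0 term of the Fock-Schwinger expansion: A_mu = x^rho F_{rho mu} / 2
A = sp.Matrix([sum(x[r]*F[r, m] for r in range(2))/2 for m in range(2)])

print(sp.simplify(x.dot(A)))       # 0: the gauge condition x^mu A_mu = 0
# dA[m, n] = d_mu A_nu - F_{mu nu}/2 should vanish identically
dA = sp.Matrix(2, 2, lambda m, n: sp.diff(A[n], x[m]) - F[m, n]/2)
print(dA.applyfunc(sp.simplify))   # zero matrix: [d_mu A_nu] = F_{mu nu}/2
```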
Coming back to generalities, it is important to emphasize that the result in Eq. (<ref>) is local, implying that it can be used to compute propagators in the coincidence limit or smeared traces Tr(f(x) K(x,x';τ)), with f(x) a function decaying sufficiently fast at infinity. Furthermore, it drastically reduces the number of invariants involved in an expansion of Ω in the propertime. For instance, consider the Yukawa case: the third-order contribution contains four terms in the usual SDW expansion <cit.>, which reduce to just one in our setup. This reduction is more substantial for higher-order coefficients, for which it is known that the number of contributing invariants rapidly increases; for instance, [a_5] reduces from sixteen to five terms. For small propertime τ, the leading behaviour of Eq. (<ref>) is as usual independent of the background potential and dictated by the dimension of the manifold, cf. expression (<ref>). Note that, as already mentioned, the prefactor in Eq. (<ref>) is smooth in the limit where γ (or γτ) tends to zero. Additionally, no singularities arise as long as γ has positive eigenvalues (which will be true if the Hessian of V is positive; this is rather intuitive if we think of the spectral decomposition of the heat kernel). Instead, if the Hessian of V possesses at least one negative eigenvalue λ_-, the determinant in Eq. (<ref>) develops an infinite number of poles in the propertime, determined by the condition sin(√(|λ_-|)τ) = 0. As we will shortly see, this is the quintessence of the Schwinger effect. To this end, compute the effective action as discussed in Sec. <ref>; in the Euclidean setup we have Γ_E = -∫ d^dx ∫_0^∞ (dτ/τ) K(x,x;τ). If we want to analyze the Minkowskian situation, it is customary to perform a Wick rotation, which basically consists in substituting t → it and simultaneously introducing a factor of i for every zeroth component of all the involved tensors. This could give rise to negative eigenvalues in some region of the spacetime, thus creating instabilities in the Hessian of V, even if it was positive definite in Euclidean space. Hence, we have Γ_M ∼ ∫ d^dx ∫_0^∞ (dτ/τ) det^-1/2[(γ̃_ϵτ)^-1 sinh(γ̃_ϵτ)] × “regular”, where “regular” denotes terms that we assume to be regular and nonvanishing at the poles of the determinant with τ ≠ 0 (and whose real nature cannot be unveiled unless we have a resummed/exact result), while γ̃_ϵ is related to the Wick-rotated second derivative of the potential, keeping a small parameter ϵ to circumvent the poles (it is natural to simply understand ϵ as stopping the Wick rotation just before reaching the axis of imaginary time). As a consequence, Γ_M acquires an imaginary contribution, which signals an instability of the vacuum and is associated with the creation of pairs; in effect, in the limit of weak pair creation, the total pair creation probability reads P = 2 Im Γ_M. In the electromagnetic case, since essentially γ_μν ∼ (F)^2_μν, cf. Eq. (<ref>), this clearly happens when the electric field dominates over the magnetic field: this is the well-known Schwinger effect. However, the mechanism also gives rise to a scalar Schwinger effect, i.e. one in which the background is scalar. From our perspective, the unified origin of the scalar and electromagnetic Schwinger effects is patent. A simple example within our scalar approach is provided by inflaton models in which the scalar field slowly rolls down with a temporal profile that could be approximated as linear. More complex models are also within the range of applicability of our results; for instance, the study of ultralight dark matter creation through a Higgs portal <cit.>.
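To illustrate how the poles of the propertime integrand translate into a pair-creation probability, the following numeric sketch evaluates the textbook constant-field scalar-QED result, in which the sum over n is precisely a sum over the poles selected by the sin condition above; the values of eE and m are purely illustrative.

```python
import numpy as np

def scalar_schwinger_rate(eE, m, nmax=200):
    """2 Im(L) per unit 4-volume for scalar QED in a constant electric field:
    (eE)^2/(8 pi^3) * sum_n (-1)^(n+1) n^-2 exp(-n pi m^2 / eE).
    Each term n corresponds to one pole of the propertime integrand."""
    n = np.arange(1, nmax + 1)
    return (eE**2/(8*np.pi**3))*np.sum((-1.0)**(n + 1)*np.exp(-n*np.pi*m**2/eE)/n**2)

# weak-field regime (eE << m^2): the n = 1 pole dominates and P ~ 2 Im(Gamma_M)
for eE in (0.1, 0.2, 0.5):
    print(eE, scalar_schwinger_rate(eE, m=1.0))
```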
Another possibility, though technically inaccessible in laboratories with present technology, is to consider pulses of Higgs particles, consisting of a large, coherent number thereof, which would then create a suitable scalar background for pair creation. Coming to future work, it would be beneficial to explore how the structure of the Worldline's Green function, as discussed in Ref. <cit.>, precludes the formation of chains at higher orders in the derivative expansion. In our case, the key ingredient in performing the proof was the use of the Ansatz (<ref>), which filters into the perturbative computation through the effective potential 𝔖 in Eq. (<ref>). Further developments could involve complementing the ideas in this article with numerical implementations of the Worldline Formalism, offering a semi-analytical (or semi-numerical) approach to obtain nonperturbative results; by this, we mean that a large piece of information can be added analytically before applying the numerical methods, as in the renormalization process to obtain a two-point function <cit.>. The factorization that we have shown to be valid in this letter could be employed at least to reduce the computation time, similarly to what happens in the quantum mechanical situation presented in Ref. <cit.>. Extending the framework to Yang–Mills and Proca-like backgrounds seems possible under reasonable assumptions. Another promising idea regards higher-derivative operators: it is known that for these operators, which appear in theories of supergravity and super Yang–Mills <cit.>, the small propertime asymptotics is intrinsically modified. It would be interesting to see how these modifications affect the possible resummations. From an experimental perspective, our resummed version of the propertime expansions paves the way for an enhanced analysis of several observationally accessible physical phenomena: on the one hand, given the local nature of the expansions, they could provide an alternative formulation to the locally-constant field approximation frequently employed in strong-field QED for the computation of the pair creation rate <cit.>, offering a valuable prospect given the pressing need in the field for improved approximations <cit.>. On the other hand, they also explicitly harbour electromagnetic nonlinearities, and hence they can be straightforwardly associated with testable nonlinear QED processes, such as light-by-light scattering <cit.> or vacuum birefringence <cit.>.

§ ACKNOWLEDGMENTS

SAF acknowledges the support from Helmholtz-Zentrum Dresden-Rossendorf. SAF and FDM acknowledge the support from Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) through Project PIP 11220200101426CO. SAF and UWH acknowledge the support of UNLP through Project 11/X748. The work of VV has been partially funded by Next Generation EU through the project “Geometrical and Topological effects on Quantum Matter (GeTOnQuaM)”. The research activities of CGP and VV have been carried out in the framework of the INFN Research Project QGSKY. The Authors extend their appreciation to the Italian National Group of Mathematical Physics (GNFM, INdAM) and to the COST Action CA18108 “Quantum gravity phenomenology in the multi-messenger approach” for their support.
http://arxiv.org/abs/2312.16303v1
{ "authors": [ "S. A. Franchino-Viñas", "C. García-Pérez", "F. D. Mazzitelli", "V. Vitagliano", "U. Wainstein Haimovichi" ], "categories": [ "hep-th", "math-ph", "math.MP" ], "primary_category": "hep-th", "published": "20231226191607", "title": "Resummed heat kernel and effective action for Yukawa and QED" }
Gabriel A. Oio ([email protected]; ORCID 0000-0002-7938-6107), Chinese Academy of Sciences South America Center for Astronomy (CASSACA), National Astronomical Observatories of China (NAOC), CAS, 20A Datun Road, Beijing 100012, China

Y. Sophia Dai (戴昱) ([email protected]; ORCID 0000-0002-7928-416X), Chinese Academy of Sciences South America Center for Astronomy (CASSACA), National Astronomical Observatories of China (NAOC), CAS, 20A Datun Road, Beijing 100012, China

C. G. Bornancini (ORCID 0000-0001-6800-3329), Instituto de Astronomía Teórica y Experimental (IATE, CONICET-UNC), Córdoba, Argentina; Observatorio Astronómico de Córdoba, Universidad Nacional de Córdoba, Laprida 854, X5000BGR, Córdoba, Argentina

Zi-Jian Li (ORCID 0000-0001-7634-1547), Chinese Academy of Sciences South America Center for Astronomy (CASSACA), National Astronomical Observatories of China (NAOC), CAS, 20A Datun Road, Beijing 100012, China; National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, 100012 Beijing, China; School of Astronomy and Space Sciences, University of Chinese Academy of Sciences, Beijing 100049, China

Active galactic nucleus (AGN) driven outflows can have a significant impact on the evolution of the host galaxy. In this work, we compare the properties of galaxies that host AGNs with and without outflows. Our sample consists of 103 AGNs identified by mid-IR color-color selection and confirmed with optical spectroscopy, in the redshift range 0.3 ≲ z ≲ 0.9. We fit the [O III] line using spectra from the zCOSMOS survey to identify and study the occurrence of outflows. We find that ionized outflows are present in ∼25% of our sample, with the largest incidence at the highest [O III] and X-ray luminosity bins. The fastest outflows are found in the more extended and massive galaxies. We do not observe a difference in the star formation rate of AGNs with outflows compared to AGNs without outflows. From visual inspection and non-parametric morphological studies, we obtain that outflows are preferentially observed in galaxies with disk-type and elliptical morphologies.

§ INTRODUCTION

Active galactic nuclei (AGN) are intrinsically linked with the origin and evolution of galaxies. The relationships between black holes and various parameters of their host galaxies are well known, even though these components differ by orders of magnitude in physical scale. Among them, we can name the relationship between black hole mass and bulge mass <cit.>, between black hole mass and velocity dispersion <cit.>, and even with the total mass of the host galaxy <cit.>. The co-evolution of these different components is believed to be driven by feedback <cit.>. This feedback regulates the accumulation of mass in galaxies by heating and/or removing the gas that would otherwise be used to form stars, and by ejecting matter as mass is added to the central black hole. This process, where warm ionized and cold molecular outflows are driven by an AGN accreting near the Eddington limit, is usually referred to as the “quasar mode” <cit.>. There are numerous approaches in the literature to define outflows from emission lines. One way that has been used to find outflowing gas emission in AGNs is by considering the line-of-sight velocity offset (v_off) of narrow emission lines with respect to the systemic velocity of their host galaxies <cit.>.
This approach is based on the idea that there is a stratified narrow-line region (NLR) where, in its innermost regions, high-ionization lines such as [O III] are blueshifted and more turbulent (broader) than the low-ionization lines produced in the external emitting regions, and also with respect to the stellar absorption lines that are indicative of the systemic velocity of their host galaxies <cit.>. Given that the forbidden [O III] emission lines cannot be produced in the high-density, sub-parsec scale broad-line regions (BLR) of AGNs, they are usually considered to be a good tracer of the extended ionized gas. Particularly in AGNs, the [O III] line profile can show a broad, blue-wing asymmetry, which is usually attributed to outflowing gas. Therefore, some works focus on the multi-component fitting of the [O III] line and characterize the outflow by its blue-wing properties <cit.>. In contrast, some authors have chosen to characterize the outflow from its full line profile, using so-called non-parametric definitions <cit.>. <cit.> measured the [O III] line width containing the central 80 per cent of the flux (W_80) in a sample of nearby, luminous, obscured, radio-quiet quasars. In their work, they used this line width as an estimate of the outflow velocity, and found that most objects showed a blue excess in their line profiles, with a median outflow velocity of 760 km s^-1. In a similar manner, <cit.> employed the same approach to measure the line widths of the Hβ and [O III] emission lines. Their objective was to demonstrate the prevalence of outflows in a sample of 16 luminous type II AGNs. Here, they found extended [O III] emission, with W_80 ranging from 600 to 1500 km s^-1. Some results obtained in numerical simulations suggest that there is a stage in the evolution of galaxies where violent interactions and mergers are more frequent. In these evolutionary scenarios, the first instances of mergers would show galaxies whose nuclei are obscured by large amounts of gas and dust. A phase would then follow where the AGN drives outflows that expel the surrounding material and reveal an unobscured AGN <cit.>. The relationship between gas outflows or inflows in AGNs and the obscuration measured by optical spectral line widths or hard/soft X-ray emission is still not well understood. We might expect obscured AGNs selected in the X-rays, according to high neutral hydrogen column densities or high hardness ratio values, calculated as HR = (H-S)/(H+S), where H and S are the count rates in the hard and soft bands, to also show outflow signatures <cit.>. <cit.>, using the column density (N_H) estimated from X-ray spectral analysis, showed opposite results. In the same way, <cit.> found no evidence that the highest ionized gas velocities are preferentially associated with X-ray obscured AGNs. <cit.>, in a study of X-ray selected (z∼0.05) AGNs taken from the BAT AGN Spectroscopic Survey (BASS), found that the occurrence of outflow detections in unobscured type 1 and type 1.9 (with narrow lines except for a broad component seen in Hα) AGNs is twice that of obscured (type 2) AGNs. Mergers and interactions have long been considered a triggering mechanism for AGN <cit.>.
Because of the increase of nuclear star formation and nuclear activity in mergers <cit.>, it is reasonable to consider that the incidence and properties of outflows may depend on the morphology and/or environment of the galaxy. However, major mergers alone are not sufficient to account for the entire AGN population <cit.>, as they appear to be associated mainly with ultra-luminous infrared galaxies (ULIRGs) <cit.>. That is, only the most luminous AGN phases are associated with major mergers, while less luminous AGNs appear to be driven by secular processes <cit.>. Our goal is to test whether the host galaxies of AGNs behave differently in the cases with and without outflow signatures. Therefore, in this work, we will discuss and compare the properties of galaxies hosting mid-IR selected AGNs with and without outflow signatures. This paper is structured as follows: Section <ref> summarizes the selection criteria for the sample used in this work and how we select and measure outflows. In Section <ref> we describe some of the AGN and host galaxy properties, such as [O III] luminosity, X-ray emission, stellar mass, star formation rate (SFR), rest-frame colors and morphological properties. We discuss our results in Section <ref>, and in Section <ref> we summarize our results. Throughout this work we will use the AB magnitude system <cit.> and we will assume a ΛCDM cosmology with H_0=70 km s^-1 Mpc^-1, Ω_M=0.3, Ω_Λ=0.7.

§ SAMPLE SELECTION

§.§ Parent Sample

Due to the diversity of the AGN population, no single AGN selection technique is complete. A specific technique may correctly identify one population of AGNs while missing others. For example, a selection based solely on X-ray or optical emission may overlook AGNs that are obscured by interstellar gas or dust. In contrast, MIR selection methods are more effective at detecting this population because they are less affected by extinction <cit.>. Thus we have chosen to compare the host galaxies of AGNs selected in the IR to avoid other selection biases. We derived a sample of mid-IR (MIR) selected and optically confirmed AGNs from the work of <cit.>. Briefly, in that study, they investigated the properties of host galaxies of AGNs selected based on near-IR emission and MIR criteria, which were subsequently confirmed spectroscopically. The different selection techniques have varying completeness and reliability at selecting AGN <cit.>. In this work, we will focus on the AGN sample selected using the criteria of <cit.>, which presents a high level of completeness <cit.>, and confirmed through optical spectroscopy using the mass-excitation (MEx) diagram <cit.>, in order to obtain more confidence in the sample. <cit.> chose the AGN sample from the COSMOS field <cit.> using the COSMOS2015 catalog[The catalog can be downloaded from <ftp://ftp.iap.fr/pub/from_users/hjmcc/COSMOS2015/>] <cit.> and the zCOSMOS redshift survey <cit.>. The COSMOS2015 catalog provides photometric data across multiple wavelengths, ranging from UV to mid-IR, as well as estimates of stellar mass obtained using LePhare <cit.>, following the method described in <cit.>. For their analysis, <cit.> utilized the Spitzer IRAC and MIPS bands: 3.6, 4.5, 5.8, 8.0, 24, and 70 µm <cit.>. The authors selected sources within the spectroscopic redshift range 0.3 ≤ z_sp ≤ 0.9 from the zCOSMOS DR3-bright catalog, ensuring reliable redshift estimates.
This catalog provides a confidence parameter that ranges over insecure and probable redshifts (Classes 1 and 2, respectively), one broad AGN redshift (Class 18), one-line redshifts (Class 9), and secure and very secure redshifts (Classes 3 and 4, respectively). Each confidence class is also assigned a confidence decimal, which is derived from repeat observations and from the consistency or otherwise with photometric redshifts. The confidence decimal ranges from .1 (spectroscopic and photometric redshifts not consistent at the level of 0.04(1+z)), through .3 (special case for Classes 18 and 9, consistent with the photo-z only after the redshift is changed to the alternate redshift) and .4 (no photometric redshift available), to .5 (spectroscopic redshift consistent within 0.04(1+z) of the photometric redshift). In the work of <cit.>, they selected all sources with classes 3.x and 4.x, where x can take the values 1, 3, 4, and 5. Using the photometric data, they performed a preselection of sources that satisfied the <cit.> color-color criteria, employing the bands [4.5]-[8.0] vs [3.6]-[5.8]. <cit.> identified the positions of QSOs detected in the Sloan Digital Sky Survey (SDSS) on a log(S8.0/S4.5) vs. log(S5.8/S3.6) color-color diagram and proposed the following relation to select AGN samples based on their MIR colors: log(S_5.8/S_3.6) > -0.1 ∧ log(S_8.0/S_4.5) > -0.2 ∧ log(S_8.0/S_4.5) ≤ 0.8 × log(S_5.8/S_3.6) + 0.5. Here, ∧ represents the logical AND operator (refer to Figure 1 of <cit.>). A total of 490 AGN candidates with good redshift estimates fall within the Lacy wedge and constitute the parent sample for this study. As noted by several authors <cit.>, AGN IR-selection techniques can suffer from contamination by galaxies classified as star-forming (SF) or quiescent. Hence, it is advisable to apply a second selection criterion to ensure that the chosen sample predominantly consists of AGNs. Optical emission-line ratios can be used to diagnose the ionization mechanism, for example using the BPT diagram <cit.>. However, within the redshift range of the parent sample, only the Hβ and [O III] lines fall within the wavelength coverage of the zCOSMOS spectra. As an alternative to the BPT diagram, the MEx diagram utilizes the host galaxy mass and the ratio [O III]λ5007/Hβ, making it useful for selecting AGNs at intermediate redshifts. To measure the signal-to-noise ratio (S/N) of these emission lines, a single Gaussian profile was fitted to the Hβ and [O III] lines. Only spectra with both Hβ and [O III] S/N ratios greater than 3 were considered. After applying the IR selection criteria of <cit.>, 490 AGN candidates were selected. The selection criterion according to the MEx diagram yields 122 AGNs, and the restrictions on the S/N of both the Hβ and [O III] spectral lines result in a final sample of 103 AGNs. We then select possible outflow candidates following the methods described in Sec <ref>.
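For concreteness, the MIR wedge can be encoded as a small boolean function; this is a sketch under the assumption that the four IRAC fluxes are given in a common unit (the function name is ours).

```python
import numpy as np

def lacy_wedge(S36, S45, S58, S80):
    """Lacy et al. (2004) MIR AGN wedge from IRAC fluxes (any common unit).
    Returns True for sources falling inside the AGN selection region."""
    c1 = np.log10(S58 / S36)   # log(S5.8/S3.6)
    c2 = np.log10(S80 / S45)   # log(S8.0/S4.5)
    return (c1 > -0.1) & (c2 > -0.2) & (c2 <= 0.8 * c1 + 0.5)

# example: a red, power-law-like source sits inside the wedge
print(lacy_wedge(S36=100.0, S45=150.0, S58=220.0, S80=330.0))   # True
```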
§.§ Outflow Candidates

§.§.§ Emission line measurements

We are interested in measuring [O III] as a tracer of possible outflows in our galaxy sample. In AGNs, this line is produced in the low-density narrow-line region, and any asymmetries are often attributed to outflowing (or inflowing) gas, with parts of this wind obscured by surrounding dust. To measure the emission lines, we followed a two-step procedure. First, we shifted the spectra to the rest frame using the spectroscopic redshift values tabulated in the zCOSMOS catalog <cit.> and fitted the continuum using the Penalized Pixel-Fitting code (pPXF). Since the power-law component and Fe II emission are produced in the innermost regions of the nuclei, in type II AGNs this contamination will be enshrouded by the surrounding obscuring dust. Therefore, we chose not to include these components in the continuum fitting <cit.>. Next, the emission lines were fitted using the ifscube package[https://github.com/danielrd6/ifscube/] <cit.>. ifscube is a Python-based software package primarily developed to perform analysis tasks on integral field spectroscopy data cubes, but it has great flexibility to work with single-slit or fiber spectra. This package subtracts the stellar continuum obtained in the previous step and performs a constrained multiple-component fit on the pure emission-line spectrum. In the first iteration, the oxygen lines (λλ4959,5007) and Hβ were fitted with a single Gaussian component. The velocity from the Gaussian fit of [O III] was assumed as the systemic velocity, and this value was reintroduced into ifscube to bring each spectrum in our sample to a frame of reference based on the observed wavelength of the peak of its [O III] line. Then, a second fit to the emission lines was performed, this time allowing up to two Gaussian components for the oxygen lines, while a single Gaussian component was sufficient for Hβ in all cases. We present example fits for different cases of emission-line profiles in Figure <ref>. We show two prominent cases of asymmetric [O III] emission, one with a red wing and one with a blue wing, as well as a typical example of a spectrum with a symmetrical line profile (i.e., a single Gaussian). To analyze the kinematics of the emission lines we employed a non-parametric scheme similar to the one utilized by <cit.> and described in detail in <cit.>. In this approach, we solely consider the cumulative function of the synthetic line profile. Like <cit.>, we do not attribute any physical significance to the individual Gaussian profiles. The non-parametric velocity definitions employed in this study are the line width at 80% of the flux (W_80) and the velocity offset (Δv), which is measured from the velocities corresponding to the 5th and 95th percentiles of the overall emission-line profile. Specifically, Δv is calculated as Δv = (v_05 + v_95)/2. We chose to work with these values because W_80 and Δv are more sensitive to asymmetries in the base of the line profile compared to, for instance, the full width at half maximum (FWHM). Both W_80 and Δv were measured from the fitted line profile of [O III].

§.§.§ Line constraints: outflow selection

As mentioned in Section <ref>, for our selection criteria we fitted the [O III] lines with up to two Gaussian components, namely a “core” component and a broader “asymmetric” component. As summarized in Section <ref>, there are several somewhat arbitrary ways to select optical emission-line outflows. To decide whether a second component was necessary or not, we imposed the following criteria:

* The flux of both Gaussian components must be at least 1σ greater than the noise measured in the vicinity of [O III] (5030-5080 Å).

* The velocity difference between the two components should be greater than the sum of their respective velocity errors.

* The line width W_80 must be greater than 300 km s^-1.

That is, we require both Gaussian profiles to be detectable over the noise of the spectra, and the asymmetric component to be distinguishable from the core component in terms of velocity shift.
The third criterion is introduced to ensure that we are in the presence of a broad asymmetrical profile, the imposed velocity cut being approximately twice the instrumental velocity dispersion. Lastly, we required the flux of both the Hβ and [O III] lines to be at least 1σ above the noise in the vicinity of each line. By applying these criteria, we retained a total of 91 AGNs, 26% (24) of which exhibited an asymmetric profile. Out of the 24 AGNs, 21 displayed a blue asymmetry, while 3 objects showed a red wing (Δv > 0 km s^-1), indicating possible outflows and inflows, respectively. Although we decided not to fit a possible Fe II emission component, we took special care with the objects exhibiting a red asymmetry. For these three objects, we re-fitted the continuum including an Fe II template, but in no case was this component necessary according to our fitting model. If we apply stricter selection criteria, for instance requiring that the flux of each Gaussian component of [O III] be greater than 2σ and 3σ, we are left with 18 and 15 asymmetric profiles, respectively. The different choices of S/N requirement on the Gaussian profiles do not significantly change our sample or the following conclusions. At 3σ, the galaxy COSMOS ID = 810540, which is the object with the largest red asymmetry (Δv = 118 km s^-1, W_80 = 573 km s^-1), is dropped from the asymmetry sample. The most significant change in our sample arises from disregarding the velocity difference between the two components. If we were to disregard this criterion, the final sample of potential outflows would consist of 46 objects (instead of the 24 selected), representing 50% of the total. We estimated the uncertainties of the fitted line parameters by performing 1000 Monte Carlo iterations for each spectrum. The errors for each Gaussian parameter were then determined as the 1σ dispersion obtained from the 1000 Monte Carlo runs. In Figure <ref>, we present the relation between W_80 and Δv. The majority of objects with asymmetry exhibit negative values of the velocity difference, indicating potential outflows. The distribution of velocity dispersions and velocity differences shows a wide range of values for objects with asymmetric profiles, while it is narrower for objects with no discernible asymmetries. In the following we will characterize the distributions by their median values and adopt as the error the standard deviation from the mean. The distribution of Δv, shown in the top panel of Figure <ref>, has a median value of -89 ± 100 km s^-1 for AGNs with asymmetry, and a median value of 2 ± 14 km s^-1 for galaxies with no asymmetry. The median value of W_80 is 634 ± 190 km s^-1 for galaxies with asymmetric profiles, and 567 ± 132 km s^-1 for objects with no asymmetry. Our estimated outflow velocities align with those found in previous studies. For example, <cit.>, who examined a large sample of galaxy pairs involving AGNs and SF galaxies, used W_80 as a measure of the outflow velocity, taking it as 1.088×FWHM. They reported a mean value of approximately 700 km s^-1 for all AGN subsamples. <cit.> measured the [O III] and Hβ lines in a sample of obscured QSOs and found a median value for W_80 of 752 km s^-1. From a sample of 16 type 2 AGNs observed with an Integral Field Unit (IFU), <cit.> found W_80 values ranging between 600 and 1500 km s^-1. In the case of highly luminous type II QSOs (log(L_[OIII]) > 42.5), the velocity widths are considerably higher, with W_80 ranging from approximately 1000 to 5500 km s^-1 <cit.>.
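The non-parametric quantities defined above can be computed directly from the cumulative flux of the modeled profile. Below is a minimal sketch (the function name and the illustrative two-Gaussian profile are ours), taking W_80 = v_90 - v_10 and Δv = (v_05 + v_95)/2:

```python
import numpy as np

def nonparametric_velocities(v, flux):
    """W80 and Delta v from a model emission-line profile sampled on a
    velocity grid v (km/s): velocities at 5/10/90/95% of the cumulative flux."""
    cdf = np.cumsum(flux)
    cdf /= cdf[-1]
    v05, v10, v90, v95 = np.interp([0.05, 0.10, 0.90, 0.95], cdf, v)
    return v90 - v10, 0.5*(v05 + v95)     # W80, Delta v

# illustrative profile: narrow core plus a blueshifted broad component
v = np.linspace(-3000.0, 3000.0, 6000)
gauss = lambda v, A, mu, sig: A*np.exp(-0.5*((v - mu)/sig)**2)
profile = gauss(v, 1.0, 0.0, 150.0) + gauss(v, 0.35, -350.0, 450.0)
w80, dv = nonparametric_velocities(v, profile)
print(f"W80 = {w80:.0f} km/s, Delta v = {dv:.0f} km/s")   # blue asymmetry: dv < 0
```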
As depicted in Figure <ref>, there is a tendency for galaxies with larger blue asymmetries to exhibit higher velocity dispersions of the ionized gas. Henceforth, we disregard galaxies with red asymmetries and refer to AGNs with outflows as those objects with Δv < 0 km s^-1.

§ AGN AND HOST GALAXY PROPERTIES

§.§ [O III] luminosity

We calculate the AGN luminosity from [O III] to test whether the AGNs with and without outflow features have intrinsically different luminosities. The luminosity of the [O III] emission line was computed with the standard formula: L_[OIII] = 4π d_L^2 f_[OIII]/(1+z), where d_L is the luminosity distance and f_[OIII] is the [O III] line flux. As in <cit.>, we have not corrected the [O III] line flux for extinction. The distribution of the [O III] luminosity (L_[OIII]) for the sample of 91 galaxies is shown in Figure <ref>. There is a tendency for galaxies with asymmetric profiles toward higher [O III] luminosities, with a median value of log(L_[OIII]) = 41.9 (erg s^-1) against a median value of log(L_[OIII]) = 41.1 (erg s^-1) for the galaxies with no asymmetric profile. The larger fraction of AGNs with asymmetrical profiles as a function of L_[OIII] is evident in the top panel of Figure <ref>, where we show the incidence of each type of galaxy per [O III]λ5007 luminosity bin. In the Figure, uncertainties on the incidence fraction are given by the binomial beta distribution quantile technique with a 1σ (68.3%) confidence interval <cit.>; the error in the luminosity value of each bin is given by the standard deviation of the mean. Some authors find a good and even strong correlation between W_80 and L_[OIII] <cit.>; however, we found no significant correlation between these parameters. Considering the relation between W_80 and L_[OIII], and without making any distinction between AGNs with and without signs of an outflow, we find a Spearman correlation coefficient S_p = 0.1 with a p-value = 0.3; if we consider the sample with asymmetrical profiles, S_p = 0.01 with p-value = 0.9, and for the objects with symmetrical profiles, S_p = -0.08 and p-value = 0.5.

§.§ X-ray properties

We investigate whether there is a relationship between the incidence of AGNs with and without outflows and the X-ray luminosity and hardness ratio. The COSMOS2015 catalog <cit.> also provides information about X-ray photometry. In their catalog they include fluxes and flux errors from the previous Chandra COSMOS Survey (C-COSMOS) <cit.>, X-ray detected sources from XMM-COSMOS <cit.>, the matches with the NuSTAR Extragalactic Survey <cit.>, and also those with the Chandra COSMOS-Legacy Survey <cit.>. The latter catalog is a 4.6 Ms Chandra program over 2.2 deg^2 of the COSMOS field, containing 4016 X-ray sources down to a flux limit of 2×10^-16 erg s^-1 cm^-2 in the 0.5-2 keV band. For our sample of 91 AGNs, we found 34 matches with the XMM-COSMOS catalog, 29 matches with C-COSMOS, 5 matches with the NuSTAR COSMOS survey, and 42 with the Chandra COSMOS-Legacy Survey. The catalog of <cit.> encompasses all of the previous X-ray matches, so we decided to use the fluxes and hardness ratios provided by this catalog. In Figure <ref>, we present the distribution of the X-ray luminosity for the AGN-selected galaxies with and without outflow signatures, calculated as: L_X = 4π d_L^2 f_x (1+z)^Γ-2 erg s^-1. Here, d_L is the luminosity distance in cm, f_x is the X-ray flux in units of erg s^-1 cm^-2 in the hard band, and the photon index was assumed to be Γ = 1.8 <cit.>.
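Both luminosities follow from the adopted cosmology in a few lines; this sketch (our function names) evaluates the two equations above with astropy:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this work

def oiii_luminosity(flux_cgs, z):
    """[O III] luminosity (erg/s) from a line flux in erg/s/cm^2,
    following L = 4 pi d_L^2 f / (1+z); no extinction correction."""
    dL = cosmo.luminosity_distance(z).to(u.cm).value
    return 4*np.pi*dL**2*flux_cgs/(1 + z)

def xray_luminosity(flux_cgs, z, gamma=1.8):
    """Hard-band X-ray luminosity with the (1+z)^(Gamma-2) K-correction."""
    dL = cosmo.luminosity_distance(z).to(u.cm).value
    return 4*np.pi*dL**2*flux_cgs*(1 + z)**(gamma - 2)

# illustrative flux at the mean sample redshift: log L_[OIII] ~ 41
print(np.log10(oiii_luminosity(1e-16, 0.59)))
```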
In the top panel of Figure <ref> we plot the fraction of AGNs with (closed diamonds) and without (open circles) asymmetric profiles as a function of the X-ray luminosity. We can see that the incidence is practically constant, except for the bins of highest L_X luminosity, where we see an increase in the fraction of AGNs with outflows. This result is in agreement with the work of <cit.> where, studying a sample of 89 AGNs at z ≳ 0.6, they find a larger fraction of AGNs with high wind velocities (W_80 > 600 km s^-1) among the objects with L_X > 10^43 (erg s^-1). Later, <cit.> confirmed these results in a larger sample of X-ray AGNs (∼500 SDSS/X-ray objects) with outflow signatures in their optical spectra. They also found a higher fraction of AGNs with outflows in the highest X-ray luminosity bins. As in the case of the [O III] luminosity, we found no significant correlation between the kinematic parameters and the X-ray luminosity. We also want to test whether AGNs with outflows are preferentially found to be X-ray obscured (i.e., with high neutral hydrogen column densities or high hardness ratio values) relative to the AGNs without optical outflow signatures. The hardness ratio (HR) is defined as: HR = (H - S)/(H + S), where S refers to the net count rate in the soft band (0.5-2 keV) and H is the count rate in the hard band (2-7 keV). So stated, HR provides an indication of the flatness of an X-ray spectrum. In Figure <ref> we plot the HR as a function of the 2-10 keV luminosity for the objects in our sample with X-ray emission. Several authors have made use of the HR to set apart obscured from unobscured sources in the X-rays at all redshifts <cit.>. This is due to the fact that the soft X-ray emission of obscured AGNs tends to be absorbed, while hard X-rays are able to escape. We have taken as the limiting value HR = -0.2 (dashed horizontal line in the plot), which corresponds to a source with a neutral hydrogen column density N_H > 10^21.6 cm^-2 <cit.>. The vertical dashed line indicates a typical limit used in the X-rays to separate AGNs from QSOs <cit.>. On the right panel we show the distribution of HR for objects with detected outflows (blue) and without outflows (red). As can be seen in the figure, the distribution of HR is skewed towards softer spectra. Taking into account the low number of X-ray detections among AGNs with outflows, their distribution seems to be bimodal, with a mean value of HR = -0.09 ± 0.28 and a slightly higher number of obscured sources (55%). Conversely, 43% of the AGNs without outflows have HR ≥ -0.2 (i.e., are obscured). While the obscured sources with outflows seem to be evenly distributed between the areas corresponding to AGNs and QSOs, there seems to be a preference for the unobscured sources with optical outflows to be located in the AGN region, with 37% of them found in this area. In contrast, we can also see in that figure that, in general, the most luminous sources with outflows show harder X-ray spectra than the most luminous sources without outflows.

§.§ Stellar mass

Feedback is thought to be the main mechanism behind the co-evolution of the AGN host galaxy and its supermassive black hole. It would be responsible for regulating star formation and galaxy growth in AGNs. One of the main methodologies to determine the stellar mass and the SFR of galaxies is through modeling the ultraviolet (UV) to infrared (IR) spectral energy distributions (SEDs) of galaxies. Because the beam sizes of FIR/(sub)mm detectors are very large, the fluxes from individual galaxies are sometimes difficult to measure if they are close to other sources. This introduces heavy source confusion (blending), which makes it difficult to correctly measure the SED and determine the SFR.
<cit.> developed a new method, the “super-deblending” approach, for obtaining prior-fitting multiband photometry for FIR/(sub)mm data sets in the GOODS-North field. For this reason, in this work we use the stellar mass and SFR determinations obtained in the super-deblended catalog of the COSMOS field by <cit.>. In the work of <cit.>, they applied this method to 194,428 galaxies in the COSMOS field, covering data from Spitzer, Herschel, SCUBA2, AzTEC, MAMBO and NSF's Karl G. Jansky VLA at 3 GHz and 1.4 GHz. They use SED fitting techniques following the same approach as the one presented in <cit.>; namely, they include four SED components in the fitting procedure: 1) a stellar component <cit.> with a Small Magellanic Cloud attenuation law; 2) a mid-infrared AGN torus component <cit.>; 3) dust continuum emission from the <cit.> library; 4) a power-law radio continuum with an evolving q_IR = 2.35×(1+z)^-0.12 + log(1.91) <cit.>. In this section we make use of their compiled values for the stellar mass <cit.> and SFR_IR, computed from the integrated 8-1000 µm infrared luminosities resulting from the FIR+mm SED fitting, assuming a Chabrier IMF <cit.> and excluding any AGN contamination as derived from the SED fitting. Figure <ref> shows the median values of the outflow velocity in bins of stellar mass in the left panel, and the median star formation rate as a function of W_80 on the right. We observe a clear trend of higher velocities in the [O III] line toward higher galaxy masses, with outflows being detected in galaxies with stellar masses higher than log(M_star (M_⊙)) = 9.4. AGN-driven outflows are often invoked to explain the quenching of star formation by ejecting the interstellar medium (ISM) gas and preventing the cooling and infall of intracluster medium (ICM) gas on larger scales <cit.>. We find a median star formation rate of log(SFR_IR (M_⊙ yr^-1)) = 1.3 ± 0.3 for AGNs with outflows, and log(SFR_IR (M_⊙ yr^-1)) = 1.2 ± 0.3 for AGNs without outflows. We see in the right panel of Figure <ref> that there is no impact on the star formation rate throughout the velocity range studied. To continue investigating any possible difference in the star formation between AGNs with and without outflows, we now turn our attention to the specific star formation rate (sSFR = SFR/M_*). We compare the distributions of the sSFR relative to the mean sSFR of the whole sample in Figure <ref>. We have divided the sample taking as the limiting value W_80 = 750 km s^-1, which is the median W_80 for AGNs with outflows. We can see that in both cases the relative sSFR of AGNs with outflows is lower than that of AGNs without outflows. The biggest difference appears at the higher velocities, where the median offsets from the mean sSFR are -0.3 ± 0.2 and -0.02 ± 0.3, respectively. This result, together with the mass scaling relation, could hint towards a feedback impact in the most massive galaxies. However, this difference is not significant enough to claim a quenching of the star formation due to the outflow. Our results agree with those of <cit.>, who studied three cosmological simulations with AGN feedback and found that AGNs are preferentially found in galaxies with high gas fractions and sSFR. According to this, the outflows observed in AGNs do not necessarily imply the quenching of star formation, even if this negative feedback occurs over long timescales.
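The relative-sSFR comparison reduces to a simple offset statistic; a minimal sketch (our naming, with illustrative inputs) of the quantities entering the comparison above:

```python
import numpy as np

def relative_log_ssfr(log_sfr, log_mstar):
    """Offset of log(sSFR) = log(SFR) - log(M*) from the sample mean."""
    log_ssfr = np.asarray(log_sfr) - np.asarray(log_mstar)
    return log_ssfr - log_ssfr.mean()

def split_by_w80(w80, values, cut=750.0):
    """Split a per-object quantity at the median W80 of the outflow sample."""
    w80 = np.asarray(w80)
    values = np.asarray(values)
    return values[w80 >= cut], values[w80 < cut]

# illustrative usage with mock values
rel = relative_log_ssfr(log_sfr=[1.3, 1.1, 1.4], log_mstar=[10.2, 9.8, 10.9])
print(np.median(rel))
```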
§.§ Quiescent and star-forming galaxies

With the help of color-color diagrams, we can study whether AGNs with and without outflows are found in galaxies with different stellar populations. Rest-frame color-color diagrams have been widely used to separate populations of quiescent and star-forming galaxies <cit.>. <cit.> introduced a selection method using dust-corrected optical colors and considering the optical and near-IR emission of galaxies at redshifts z ≤ 2: the rest-frame U-V versus V-J color-color (hereafter UVJ) diagram. The rest-frame NUV - r^+ versus r^+ - J diagram introduced by <cit.> was presented as an alternative to the UVJ diagram of <cit.>, as NUV - r^+ is more sensitive to the history of star formation activity <cit.>, while the r band is more sensitive to the amount of stellar mass formed over the course of a galaxy's history. Figure <ref> shows the rest-frame M_NUV - M_r vs M_r - M_J color-color diagram for the AGNs with and without an asymmetric component (blue and red symbols, respectively). The solid line separates the regions occupied by quiescent and star-forming galaxies as defined by <cit.>, where galaxies with M_NUV - M_r > 3(M_r - M_J) + 1 and M_NUV - M_r > 3.1 are considered quiescent. The blending between dusty star-forming galaxies and quiescent galaxies is avoided with this selection, given that dust absorption would shift star-forming galaxies along a diagonal axis from the bottom left to the top right of Fig. <ref>. With the increased fraction of red galaxies observed across time <cit.>, an evolutionary path for galaxy populations from blue to red has been postulated. In the color-magnitude diagram there is a region in between the blue and red galaxy populations, called the Green Valley (GV), which would be inhabited by a transition population of green valley galaxies <cit.>. The boundaries used to define this region vary among authors, with limiting values of e.g. 4 < M_NUV - M_r < 4.5 <cit.>, 3.5 < M_NUV - M_r < 4.5 <cit.>, 3.2 < M_NUV - M_r < 5 <cit.>, or a linear relation considering other bands, as in <cit.>, with 2(M_V - M_J) + 1.1 ≤ (M_NUV - M_V) ≤ 2(M_V - M_J) + 1.6. In this work, as in <cit.>, we consider a ±0.5 mag shift from the limit defined by <cit.>, shown in Fig. <ref> with dashed green lines. In the top and right panels we also include the corresponding color distributions for each sample. We find that 59% of the AGNs with outflows and 74% of the AGNs with symmetrical [O III] profiles are located in the star-forming region, while 41% of the galaxies with outflows and 24% of those without outflows reside in the green valley region, and only two objects can be found in the quiescent zone (both without outflows). As can be seen, we find an excess of 15% of star-forming galaxies with no evidence of outflows in their optical spectra. On the other hand, galaxies with outflows are 17% more likely to be found in the GV than AGNs with no outflows. Regardless, the color distributions of AGNs with and without outflows are indistinguishable. For the M_r - M_J colors of AGNs with and without outflows we obtain median values of 0.8 ± 0.2 and 0.8 ± 0.3, while for the M_NUV - M_r colors we get medians of 2.6 ± 0.7 and 2.6 ± 0.9, respectively. In Fig. <ref> we also show the objects with detected X-ray emission as filled points and dashed histograms. For the AGNs with no outflows and X-ray emission we find the following percentages: (SF, GV, quiescent) = (77, 19, 4)%. For the sample of AGNs with outflows we find (SF, GV, quiescent) = (58, 42, 0)%. When constraining our sample to objects with X-ray emission, we obtain more than twice as many AGNs with outflows in the green valley region with respect to AGNs without outflows, while the large majority of AGNs without outflows and with X-ray emission are found in the SF region.
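The color-based classification reduces to a function of the three rest-frame magnitudes; the sketch below (our naming) encodes the quiescent cut and the ±0.5 mag green-valley band adopted above:

```python
def nuvrj_class(M_nuv, M_r, M_j, gv_halfwidth=0.5):
    """Quiescent / green-valley / star-forming classification in the
    rest-frame (M_NUV - M_r) vs (M_r - M_J) plane, with the +/-0.5 mag band
    around the quiescent boundary treated as the green valley."""
    c1 = M_nuv - M_r
    c2 = M_r - M_j
    boundary = max(3.0 * c2 + 1.0, 3.1)   # quiescent limit quoted in the text
    if c1 > boundary + gv_halfwidth:
        return "quiescent"
    if c1 > boundary - gv_halfwidth:
        return "green valley"
    return "star-forming"

# a galaxy with the sample's median colors (c1 = 2.6, c2 = 0.6) is star-forming
print(nuvrj_class(M_nuv=-20.0, M_r=-22.6, M_j=-23.2))
```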
§.§ Morphology

§.§.§ Sérsic profiles

In order to address whether the occurrence of outflows is related to host galaxy morphology, we perform three independent analyses based on observer-frame optical data. First, we study the host galaxy morphological properties as given by the Sérsic index, making use of the data provided by the Advanced Camera for Surveys General Catalog (ACS-GC) <cit.>. This catalog used publicly available data obtained with the Advanced Camera for Surveys (ACS) instrument on the Hubble Space Telescope to construct a photometric and morphological database. The imaging data used to construct the ACS-GC were collected from four surveys covering 469,501 galaxies: the All-wavelength Extended Groth Strip International Survey <cit.>, the Great Observatories Origins Deep Survey <cit.>, the Cosmological Evolutionary Survey <cit.>, and the Galaxy Evolution from Morphologies and SEDs <cit.>. The data used in this work correspond to the COSMOS survey, which covers an area of 1.8 deg^2 in the F814W filter, with a limiting AB magnitude of 26.0. At the mean redshift of our sample (z = 0.59), this broadband filter covers the spectral region corresponding to Hβ + [O III]λλ4959,5007, which includes the feature we used to identify outflows. In <cit.> they employed an automated fitting method called GALAPAGOS <cit.> to measure structural parameters such as the Sérsic index <cit.>. With a search radius of 5 arcsec, we find a total of 100 objects from our initial sample (of 103 objects) present in this catalog. As in <cit.>, we also exclude objects with n = 0.2 and n = 8, which are likely to correspond to erroneous fits or systematics; therefore, we constrain the Sérsic index to the range 0.2 < n < 8. With this restriction, and after applying the line signal cut described in Section <ref>, we are left with 75 objects. We separate the galaxies into two main classes: (1) late-type galaxies, spirals or disc-dominated, with 0.2 < n < 2.5; (2) early-type or elliptical galaxies, with 2.5 < n < 8 <cit.>. According to their Sérsic index values, and without making any distinction regarding their [O III] profiles, we find a percentage of 45 ± 5% disc-dominated or late-type galaxies and 55 ± 5% early-type galaxies. For the 13 galaxies with asymmetrical [O III] profiles and valid Sérsic indices, we obtain that 23_-7^+14% of them correspond to late-type/spiral galaxies and 77_-14^+7% to early-type or elliptical galaxies. Among the 62 galaxies with symmetrical line profiles, we find an even distribution of late-type or disc-dominated galaxies (50 ± 6%) and early-type or bulge-dominated galaxies (50 ± 6%). We estimated the uncertainties of the computed fractions (percentages) according to a Bayesian approach using a 68.3% (1σ) confidence interval, as explained in <cit.>.

§.§.§ Non-parametric morphology

Continuing with our morphological analysis, we use, in second place, quantitative measures of the distribution of light. By doing so, we can avoid a priori assumptions about the distribution of the light.
Here we use three morphological parameters that are commonly used in non-parametric methods of galaxy classification: the Asymmetry index (A) <cit.>, the Gini coefficient (G) <cit.>, and the second-order moment of the brightest 20% of the galaxy flux (M20) <cit.>. The Asymmetry index of a galaxy is determined by rotating the image by 180°, subtracting it from the original image, and then adding up the absolute values of the differences in intensity at each pixel location; this total is then compared to the original flux of the galaxy. The Gini coefficient, which was initially proposed as an economic indicator to assess the wealth distribution within a population, has also been employed in astrophysics to quantify the inequality in the distribution of light across the pixels of a galaxy. A value of Gini = 1 would imply that all the light is concentrated in a single pixel, whereas a Gini coefficient of 0 indicates that the light is uniformly distributed across all pixels. M20 is a measure of the concentration of light in a galaxy, defined as the second-order moment of the brightest 20% of a galaxy's pixels. It is a useful tool to distinguish between normal galaxies and non-symmetric objects, and to identify galaxies that have recently undergone a merger. We use the morphological parameters from the catalog presented by <cit.>. This catalog provides information on non-parametric diagnostics of galaxy structure derived from Hubble Space Telescope ACS images, for 232,022 galaxies down to a limiting magnitude I_AB = 25. In the top panel of Figure <ref>, we plot the Asymmetry versus the Gini coefficient for the sample of AGNs with symmetrical [O III] profiles (red circles) and for the AGNs with outflows (blue diamonds). We include dividing lines between the regions of predominantly irregular, spiral, and elliptical morphological types taken from <cit.>. The solid line between irregular and spiral galaxies is defined as log_10(Asymmetry) = 2.353×log_10(Gini) + 0.353, while the solid line between spiral and elliptical galaxies is defined as log_10(Asymmetry) = 5.5×log_10(Gini) + 0.825. We also include a horizontal dashed line to separate the region where major mergers are expected, defined by log(Asymmetry) > -0.46 <cit.>, and a vertical dashed line at log(Gini) = -0.3, which separates late-type from early-type galaxies according to <cit.>. In the bottom panel of Fig. <ref>, we show the Gini parameter against M20. Here, we include dividing lines between the regions of mergers, Sb/Sc/Irr and E/S0/Sa galaxy types taken from <cit.>, according to the following definitions. Mergers: G > -0.14×M20 + 0.33; Early (E/S0/Sa): G ≤ -0.14×M20 + 0.33 and G > 0.14×M20 + 0.80; Late (Sb/Ir): G ≤ -0.14×M20 + 0.33 and G ≤ 0.14×M20 + 0.80. We note in both panels of Fig. <ref> that AGNs with outflow signatures are preferentially located in the region corresponding to early-type/elliptical galaxies, with 95_-9^+1% of them found in the area defined by <cit.> and 86_-11^+4% inside the areas defined by <cit.> and <cit.>. For the AGNs with no asymmetric component in [O III], 55_-4^+7% are early-type galaxies according to <cit.> and <cit.>, and 41_-5^+6% according to the limits of <cit.>. We do not obtain any galaxies with outflows in the region corresponding to mergers, while 14_-2^+5%, 7_-1^+4%, and 4_-1^+4% of the AGNs without outflows are mergers according to <cit.>, <cit.> and <cit.>, respectively.
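These dividing lines translate directly into classification rules; below is a sketch (our naming), under the assumption that at fixed Gini a higher Asymmetry corresponds to a later/more irregular type:

```python
import numpy as np

def gini_m20_class(G, M20):
    """Divisions in the Gini-M20 plane quoted in the text (Lotz et al.)."""
    if G > -0.14*M20 + 0.33:
        return "merger"
    if G > 0.14*M20 + 0.80:
        return "early (E/S0/Sa)"
    return "late (Sb/Ir)"

def gini_asymmetry_class(G, A):
    """Divisions in the log(Gini)-log(Asymmetry) plane quoted in the text."""
    lg, la = np.log10(G), np.log10(A)
    if la > 2.353*lg + 0.353:
        return "irregular"
    if la > 5.5*lg + 0.825:
        return "spiral"
    return "elliptical"

# illustrative values: a concentrated, symmetric source
print(gini_m20_class(G=0.6, M20=-2.0), gini_asymmetry_class(G=0.6, A=0.05))
```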
§.§.§ Visual Classification

In this section we apply a visual classification criterion to estimate the morphology of the AGN host galaxies. We classify the galaxies in our sample into three visual classes: elliptical (bulge-dominated), spiral (disk-dominated), and irregular/merger. This classification aims at describing the dominant morphology of the host galaxy. The objects were examined independently by the authors using the HST ACS F814W band images. To assign a galaxy to a category, we required a simple majority of votes. In Figure <ref>, we show the fraction of galaxies with and without outflows (blue and red symbols, respectively) in each morphological type. We compute the error in each class according to a Bayesian approach using a 68.3% (1σ) confidence interval, as explained in <cit.>. We obtain that AGNs with outflows are found preferentially in galaxies with elliptical (38_-9^+11%) and disk (43_-9^+10%) morphology, with a slightly higher incidence than AGNs with no outflows in the elliptical class. Conversely, the fraction of AGNs in merger (or irregular) galaxies with outflows (19_-5^+11%) is lower than that of AGNs in irregular host galaxies with no outflows (27_-4^+6%). Comparing with the non-parametric morphological classification, we find that the separation limit of <cit.> recovers 97% of the elliptical galaxies, while the separation of log(Gini) > -0.3 of <cit.> recovers 90% of them. In the case of visually classified spiral (late-type) galaxies, we see that they present a large scatter in their Gini index values, with ∼60% of them found in the region for early types. This result is in agreement with the works of <cit.> and <cit.>. A possible explanation is that spiral galaxies with a prominent bulge or a significant AGN contribution will have a greater Gini concentration parameter and a lower M20, thus leading to a misclassification.
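The asymmetric error bars quoted for these fractions follow from the beta-distribution quantile technique; a minimal sketch of that estimate (our naming; the example counts are purely illustrative):

```python
from scipy.stats import beta

def binomial_fraction_ci(k, n, conf=0.683):
    """Bayesian binomial confidence interval (Cameron 2011): quantiles of
    the Beta(k+1, n-k+1) posterior for the population fraction."""
    lo = beta.ppf((1 - conf)/2, k + 1, n - k + 1)
    hi = beta.ppf(1 - (1 - conf)/2, k + 1, n - k + 1)
    return k/n, lo, hi

# e.g. 9 disk-dominated hosts among 21 outflow AGNs (illustrative numbers)
frac, lo, hi = binomial_fraction_ci(9, 21)
print(f"{frac:.2f} (-{frac - lo:.2f}, +{hi - frac:.2f})")
```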
§.§ Compaction related to outflows

We revisit the M_NUV - M_r vs. M_r - M_J color-color diagram as a function of the host-galaxy morphology given by the visual classification in Figure <ref>. Galaxies classified as irregular/mergers show bluer colors, with median values of (M_r - M_J) = 0.72 and (M_NUV - M_r) = 2.44, and most galaxies with outflow and merger signatures are found in the SF region (80%). Spiral galaxies are mostly located in the zone demarcated for SF, with 18% of them in the Green Valley. AGNs with outflows and early-type morphology are evenly distributed between the GV and SF regions. Notably, we find 12 (41%) early-type objects with "blue" colors populating the area demarcated for star-forming galaxies. These objects might belong to the population of galaxies known as blue ellipticals, where the star formation is driven by secular gas accretion processes <cit.>.

The gas compaction in massive galaxies could also trigger the AGN activity <cit.>. In the compaction scenario <cit.>, galaxies live through one or more blue-nugget phases in which a minimum in gas depletion time and a maximum in gas fraction are reached. In Figure <ref> we plot the relation between the outflow velocity (W_80) and the half-light radius (R_50). We see a clear positive trend of faster outflow velocities at larger R_50, meaning that the more extended (and more massive) galaxies display the strongest outflows. From the distribution of R_50 in the top panel of Fig. <ref>, we find that outflows are preferentially hosted by the AGNs with the most compact light distributions. The median half-light radius for galaxies with outflows is R_50 = 1.8 ± 1.1 kpc, while for galaxies with no outflows it is R_50 = 2.7 ± 1.3 kpc. When segregating according to morphology, this effect is more noticeable in elliptical galaxies with outflows, whose median R_50 is 1.5 ± 0.6 kpc, while elliptical galaxies with no outflows have a median R_50 = 2.0 ± 0.6 kpc. That is, elliptical galaxies with outflows have a half-light radius of ∼75% the size of AGN hosts of the same morphological class with no outflows. This is consistent with the scenario proposed by <cit.>, where obscured AGNs are most likely found in star-forming galaxies that have undergone a process of dynamical contraction. In this scenario, a galaxy's core becomes more compact due to an episode of intense gas inflow, thereby forming a massive bulge with a high gas fraction and star formation <cit.>. The inflow of gas can also sustain accretion onto the central supermassive black hole, triggering an active galactic nucleus (AGN) with moderately sub-Eddington luminosities. This, in turn, would favor the formation of the observed AGN-driven outflows.

To further explore what drives the link between outflows and AGNs, and their impact on the host galaxy, a larger sample of AGNs will be needed. This will be achieved thanks to wider AGN surveys, such as those to be performed by the James Webb Space Telescope (JWST).

§ SUMMARY

In this paper, we have studied the host-galaxy properties of a sample of AGNs with and without outflow signatures, selected on the basis of the MIR color-color diagram proposed by <cit.> and the line-diagnostic diagram relating the [O III]/Hβ line ratio to the stellar mass, known as the mass-excitation diagram <cit.>.
We summarize the main results as follows:

* The outflow incidence increases with line luminosity. Despite this, we do not see a significant trend relating the outflow velocity to the line luminosity.
* We also observe a slight trend of higher outflow incidence towards higher X-ray luminosity, with the caveat of low statistical significance. The hardness-ratio distributions are practically identical for AGNs with and without outflows; together with the previous point, we cannot claim an influence of the inner disc or corona in triggering the outflow.
* We do not see a significant difference in the host stellar-mass distributions or in the star-formation rates between AGNs with outflow signatures and those without them. There is a trend of increasing gas velocities with higher host-galaxy mass.
* By inspecting the M_NUV - M_r vs. M_r - M_J color-color diagram, we find a higher fraction of AGNs with outflows in the Green Valley region. The majority of AGNs without outflows (∼75%) are found in the star-forming region.
* Morphological analysis based on the Sérsic index and non-parametric measurements shows that the majority of AGNs with outflows are found in galaxies with early-type, bulge-dominated morphology. From visual inspection we obtain similar fractions (∼40%) of AGNs with outflows residing in galaxies with disk and elliptical morphological types. We conclude that automated processes to classify the morphological type of a galaxy fail to correctly identify almost half of the late-type galaxies.
* Our results favor a scenario of dynamical compaction that brings gas into the central part of the galaxy. This would supply the gas necessary for accretion onto the central black hole, triggering the AGN and the observed outflows.

From our results we cannot infer a significant impact of the outflow on the host galaxy. The feedback claimed by theoretical works is not evident in our studied sample; AGN feedback, if and when present, is most likely a local phenomenon rather than a galaxy-wide one. On the other hand, large-scale properties of the galaxy, such as its mass and morphology, might contribute to the likelihood of observing an AGN-driven outflow.

§ ACKNOWLEDGMENTS

This work is sponsored by the National Key R&D Program of China grant No. 2022YFA1605300 and the National Natural Science Foundation of China (NSFC) grants No. 12273051 and 11933003. This work was partially supported by the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) and the Secretaría de Ciencia y Tecnología de la Universidad Nacional de Córdoba (SeCyT). Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. Based on zCOSMOS observations carried out using the Very Large Telescope at the ESO Paranal Observatory under Programme ID: LP175.A-0839 (zCOSMOS). Software: Astropy <cit.>, Matplotlib <cit.>, ifscube <cit.>.

§ INDIVIDUAL SPECTRAL FITS

In this section, we present the spectra with multiple-component Gaussian fits for the sample of 25 galaxies with asymmetric line profiles selected by the criteria defined in Section <ref>.
http://arxiv.org/abs/2312.15957v1
{ "authors": [ "Gabriel A. Oio", "Y. Sophia Dai", "C. G. Bornancini", "Zi-Jian Li" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231226084443", "title": "Host galaxy and nuclear properties of IR-selected AGNs with and without outflow signatures" }
YANG Zijie [1,2,4,5,6], YIN Yongjing [3,6], KONG Chaojun [2,4,6], CHI Tiange [2,4,6], TAO Wufan [2,4,6], ZHANG Yue [3,5,6], XU Tian [2,4,5,6,7]
[1] Fudan University, Shanghai, China
[2] Key Laboratory of Growth Regulation and Translation Research of Zhejiang Province, School of Life Sciences, Westlake University, Hangzhou, Zhejiang, China
[3] School of Engineering, Westlake University, Hangzhou, Zhejiang, China
[4] Westlake Laboratory of Life Sciences and Biomedicine, Hangzhou, Zhejiang, China
[5] Research Center for Industries of the Future, Westlake University, Hangzhou, Zhejiang, China
[6] Shennong Program, Westlake University, Hangzhou, Zhejiang, China
[7] Changping Laboratory, Beijing, China
Corresponding authors: {yangzijie, taowufan, zhangyue, xutian}@westlake.edu.cn

PART: Supplementary information
Supplementary references (numbered bibliography)
http://arxiv.org/abs/2401.00020v1
{ "authors": [ "Zijie Yang", "Yongjing Yin", "Chaojun Kong", "Tiange Chi", "Wufan Tao", "Yue Zhang", "Tian Xu" ], "categories": [ "cs.AI", "cs.DB", "cs.IR" ], "primary_category": "cs.AI", "published": "20231227184827", "title": "AI-driven platform for systematic nomenclature and intelligent knowledge acquisition of natural medicinal materials" }
[E-mail: ][email protected]
Dpto. Física Aplicada, Campus Muralla del Mar, Universidad Politécnica de Cartagena, 30202 Cartagena, Murcia (Spain)

In this paper, we use differential forms to prove a number of theorems of integral vector calculus that are rarely found in textbooks. Two of them, as far as the author knows, have not been published before. Some possible applications to problems in physics are shared, including a general approach for computing net forces and torques on current-carrying loops that yields insights that are not evident from the standard approach.

Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism

Antonio Pérez-Garrido
January 14, 2024
===========================================================================================================

§ INTRODUCTION

Integral vector calculus theorems provide powerful tools for solving problems involving electric and magnetic fields. These theorems, such as Gauss's theorem
∮_∂ V𝐀· d𝐬=∫_V∇·𝐀 dv,
where 𝐀 is a vector field and ∂ V is the boundary surface of the volume V, and Stokes's theorem
∮_∂ S𝐀· d𝐫 =∫_S (∇×𝐀)· d𝐬,
where ∂ S is the bounding curve of the surface S, allow us, for example, to switch between the integral and differential versions of Maxwell's equations. Other vector integral theorems are much less known, likely due to their lower relevance. Even so, they can be useful in understanding electromagnetic fields. Three examples are the integral identities
∮_∂ S𝐀× d𝐫=∫_S[(∇·𝐀)d𝐬-∇(𝐀· d𝐬)]=-∫_S (d𝐬×∇)×𝐀,
∮_∂ S fd𝐫=-∫_S∇f × d𝐬,
and
∮_∂ S(𝐂· d𝐫) 𝐀=∫_S d𝐬·(∇×𝐂-𝐂×∇)𝐀,
for any scalar field f and vector fields 𝐀 and 𝐂. These theorems are rarely found in textbooks on vector calculus. Furthermore, using (<ref>) and (<ref>) and the expression for the vector triple product, as detailed later, we also derive a sort of corollary:
∮_∂ S𝐂×(𝐀× d 𝐫) =∫_S∇(𝐂·𝐀)× d𝐬 +∫_S[d𝐬·(∇×𝐂-𝐂×∇)]𝐀.

In all these identities, the orientation of surfaces and curves is crucial. This is a standard issue in vector integral calculus, so in subsequent discussions we will not mention this subject further, but it is implicit that one must be cautious when doing calculations.

This work has two purposes. The first is to show how differential forms can be used to easily demonstrate the identities in Eqs. (<ref>-<ref>). This is done in Sect. II, which can be skipped by readers who are not already familiar with differential forms. The second purpose is to show how these identities can be applied to problems in electromagnetism to reveal new insights that can be beneficial for teaching physics to advanced undergraduate students, which is done in Sect. III. Before proceeding, a quick numerical sanity check of the gradient identity ∮_∂ S fd𝐫=-∫_S∇f × d𝐬 is sketched below.
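As a plausibility check (ours, not part of the original derivation), both sides of ∮_∂ S f d𝐫 = -∫_S ∇f × d𝐬 can be compared numerically for a concrete scalar field on the unit disk in the z = 0 plane, where d𝐬 = 𝐤 dxdy and hence -∇f × d𝐬 = (-∂ f/∂ y, ∂ f/∂ x, 0) dxdy. A minimal Python sketch:

```python
import numpy as np

f  = lambda x, y: x**2 * y      # test scalar field restricted to z = 0
fx = lambda x, y: 2.0 * x * y   # df/dx
fy = lambda x, y: x**2          # df/dy

# Left side: line integral of f dr around the unit circle (x, y components).
t = np.linspace(0.0, 2.0 * np.pi, 20_001)
x, y = np.cos(t), np.sin(t)
lhs = np.array([np.trapz(f(x, y) * -np.sin(t), t),   # f * dx/dt
                np.trapz(f(x, y) * np.cos(t), t)])   # f * dy/dt

# Right side: integral of (-df/dy, df/dx) dx dy over the unit disk,
# on a polar grid with area element r dr dtheta.
rr, th = np.meshgrid(np.linspace(0.0, 1.0, 600),
                     np.linspace(0.0, 2.0 * np.pi, 600))
xs, ys = rr * np.cos(th), rr * np.sin(th)
rhs = np.array([np.trapz(np.trapz(-fy(xs, ys) * rr, rr[0], axis=1), th[:, 0]),
                np.trapz(np.trapz( fx(xs, ys) * rr, rr[0], axis=1), th[:, 0])])

print(lhs, rhs)   # both sides approach (-pi/4, 0)
```

The z components vanish trivially here, since d𝐫 has no dz component on the curve and -∇f × 𝐤 has no 𝐤 component.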
§ DEMONSTRATION OF IDENTITIES

Differential forms are a powerful framework in theoretical physics, as demonstrated, for instance, in their application to quantum mechanics in a paper by Hoshino <cit.>. Additionally, differential forms demonstrate their robustness for vector analysis and for teaching and understanding Maxwell's equations and the principles of electromagnetism <cit.>. In this section, we employ differential forms as tools for demonstrating identities (<ref>-<ref>). Readers who are inspired to learn more about differential forms in order to understand these demonstrations are urged to consult several excellent introductions to the subject <cit.>. In this paper, we restrict ourselves to ℝ^3.

Let A be a 1-form, with dx, dy, dz and 𝐢, 𝐣, 𝐤 orthonormal bases for 1-forms and vectors, respectively, so that a 1-form A can be written as a linear combination of the 1-form basis
A=A_x dx+A_y dy+ A_z dz=𝐀· d𝐫,
where
𝐀 =A_x𝐢 + A_y𝐣 +A_z𝐤
and
d𝐫=dx 𝐢 +dy 𝐣 +dz 𝐤.
Henceforth, the symbol ∧ is not written between elements of the basis of 1-forms; we will write, for example, dxdy as shorthand for dx∧ dy, so that dxdy=-dydx and dxdx=dydy=dzdz=0.

We are going to use Stokes's theorem, which formally says: let M be an oriented n-dimensional smooth manifold with boundary ∂ M, and suppose we have an (n-1)-form α defined on M; then the integral of its exterior derivative, dα, over M is equal to the integral of α over the boundary ∂ M of M:
∫_∂ Mα=∫_M dα.

The steps to follow for all the demonstrations are very simple: compute the exterior derivative of the integrands on the left-hand side of Eqs. (<ref>-<ref>) and apply (<ref>); in other words, we are going to show that those integrands are the potentials for the integrands on the right-hand side of their respective equations.

§.§ Identity (<ref>)

We can write 𝐀× d𝐫 as a differential form,
𝐀× d𝐫=(A_y dz-A_z dy)𝐢 +(A_z dx-A_x dz)𝐣+ (A_x dy-A_y dx)𝐤.
Actually, (<ref>) is a vector-valued 1-form. We only have to calculate the exterior derivative of (<ref>) to complete the proof:
d(𝐀× d𝐫)= [(∂ A_y/∂ x dx+ ∂ A_y/∂ y dy)dz-(∂ A_z/∂ x dx+∂ A_z/∂ z dz)dy]𝐢+ [(∂ A_z/∂ y dy+ ∂ A_z/∂ z dz)dx-(∂ A_x/∂ x dx+∂ A_x/∂ y dy)dz]𝐣+[(∂ A_x/∂ x dx+ ∂ A_x/∂ z dz)dy-(∂ A_y/∂ y dy+∂ A_y/∂ z dz)dx]𝐤.
We are using the already mentioned fact that, after taking the exterior derivative, the product between differential forms is the antisymmetric exterior product, so dxdx=dydy=dzdz=0. Regrouping terms in (<ref>) we have
d(𝐀× d𝐫)= [(∂ A_x/∂ x+∂ A_y/∂ y+∂ A_z/∂ z)dydz-∂/∂ x(A_x dydz+A_y dzdx+A_z dxdy)]𝐢 +[(∂ A_x/∂ x+∂ A_y/∂ y+∂ A_z/∂ z)dzdx-∂/∂ y(A_x dydz+A_y dzdx+A_z dxdy)]𝐣+[(∂ A_x/∂ x+∂ A_y/∂ y+∂ A_z/∂ z)dxdy-∂/∂ z(A_x dydz+A_y dzdx+A_z dxdy)]𝐤,
which can be written in the more compact form
d(𝐀× d𝐫)=(∇·𝐀)d𝐬-∇(𝐀· d𝐬),
where d𝐬 is a vector-valued two-form representing the differential surface element, given by
d𝐬=dydz 𝐢 +dzdx 𝐣 +dxdy 𝐤.
If we use the vector triple product identity, relation (<ref>) can also be written as
d(𝐀× d𝐫)= -(d𝐬×∇)×𝐀.
Using the generalized Stokes's theorem (<ref>) we find
∮_∂ S𝐀× d𝐫=∫_S d(𝐀× d𝐫),
or
∮_∂ S𝐀× d𝐫=-∫_S (d𝐬×∇)×𝐀,
i.e., equation (<ref>). This theorem can also be demonstrated by applying Stokes's theorem (<ref>) to the vector field 𝐂=𝐁×𝐀 for a constant vector 𝐁 and using a number of vector identities <cit.>.

§.§ Identity (<ref>)

This identity is the most widely known of (<ref>-<ref>) and can be found in some textbooks. Following the same procedure as before, we express f d𝐫 as
fd𝐫=f dx 𝐢+f dy 𝐣+f dz 𝐤.
Subsequently, we calculate the exterior derivative of (<ref>):
d(fd𝐫)= (∂ f/∂ y dy+ ∂ f/∂ z dz)dx 𝐢+ (∂ f/∂ x dx+ ∂ f/∂ z dz)dy 𝐣+(∂ f/∂ x dx+ ∂ f/∂ y dy)dz 𝐤= (∂ f/∂ z dzdx- ∂ f/∂ y dxdy)𝐢+ (∂ f/∂ x dxdy- ∂ f/∂ z dydz)𝐣+ (∂ f/∂ y dydz- ∂ f/∂ x dzdx)𝐤= -∇f× d𝐬,
where the differential area element is evaluated, again, as in expression (<ref>). Thus, we have that d(f d𝐫) = -∇f × d𝐬. This outcome leads us to Eq. (<ref>) upon applying the generalized Stokes's theorem (<ref>):
∮_∂ S fd𝐫=∫_S d(fd𝐫) ⟹∮_∂ S fd𝐫=-∫_S∇f × d𝐬.

As already stated <cit.>, the theorems (<ref>–<ref>) can be written in a unique and compact expression
∮_∂ S d𝐫*𝐠= ∫_S(d𝐬×∇)* 𝐠,
where the asterisk (*) denotes the dot, cross, and ordinary products, respectively. In the latter case 𝐠 is, actually, a scalar g.
§.§ Identity (<ref>)

As before, we calculate the exterior derivative of (𝐂· d𝐫)𝐀 and apply (<ref>). First, we develop the expression as a vector-valued differential form:
(𝐂· d𝐫)𝐀=(C_x dx+C_y dy+C_z dz)(A_x𝐢+A_y𝐣+A_z𝐤).
For our purposes it is enough to take the exterior derivative of the x component and extend the result to the other two components,
d[(C_x dx+C_y dy+C_z dz)A_x]= C_x(∂ A_x/∂ y dydx+∂ A_x/∂ z dzdx)+A_x(∂ C_x/∂ y dydx+∂ C_x/∂ z dzdx)+C_y(∂ A_x/∂ x dxdy+∂ A_x/∂ z dzdy)+A_x(∂ C_y/∂ x dxdy+∂ C_y/∂ z dzdy)+C_z(∂ A_x/∂ x dxdz+∂ A_x/∂ y dydz)+A_x(∂ C_z/∂ x dxdz+∂ C_z/∂ y dydz),
and, after regrouping terms, the right-hand side of (<ref>) reduces to
[(C_y∂/∂ x-C_x∂/∂ y)dxdy+(C_z∂/∂ y-C_y∂/∂ z) dydz +(C_x∂/∂ z-C_z∂/∂ x) dzdx + (∂ C_y/∂ x-∂ C_x/∂ y)dxdy+(∂ C_z/∂ y-∂ C_y/∂ z) dydz +(∂ C_x/∂ z-∂ C_z/∂ x)dzdx]A_x.
For the other two components we only have to replace A_x in (<ref>) with the corresponding component A_y or A_z. Thus, we can check that
d((𝐂· d𝐫)𝐀)= (d𝐬·(∇×𝐂-𝐂×∇)) 𝐀,
where d𝐬 is given by (<ref>) and d𝐬·(𝐂×∇) is the 2-form scalar operator
d𝐬·(𝐂×∇)=(C_x∂/∂ y-C_y∂/∂ x)dxdy +(C_y∂/∂ z-C_z∂/∂ y) dydz +(C_z∂/∂ x-C_x∂/∂ z) dzdx.

Thus, appealing again to Stokes's theorem (<ref>),
∮_∂ S(𝐂· d𝐫) 𝐀=∫_S d((𝐂· d𝐫)𝐀),
and, after substitution, we arrive at
∮_∂ S(𝐂· d𝐫) 𝐀 =∫_S (d𝐬·(∇×𝐂-𝐂×∇)) 𝐀.

§.§ Corollary (<ref>)

Using the vector triple product expression we expand the integrand on the left-hand side of (<ref>) as
𝐂×(𝐀× d𝐫)=(𝐂· d𝐫)𝐀-(𝐂·𝐀)d𝐫.
For the first term on the right-hand side of (<ref>) we employ (<ref>) and, for the second term, (<ref>), thus completing the demonstration of corollary (<ref>).

The author has been unable to find either (<ref>) or (<ref>) in any published document, paper, or textbook.

§ APPLICATION OF THEOREMS TO PROBLEMS IN ELECTROMAGNETISM

In this section, we apply the established identities (<ref>–<ref>) to problems in electromagnetism. By applying these identities, we aim to unravel aspects of the physics that may remain obscured when employing conventional approaches.

§.§ Calculation of the magnetic force on a closed current

The magnetic force d𝐅 acting on an infinitesimal segment d𝐫, given by (<ref>), of a loop carrying a current I can be expressed as
d𝐅 = Id𝐫×𝐁.
For a macroscopic current, the net force is determined by summing up all the infinitesimal forces,
𝐅=I∮_∂ S d𝐫×𝐁,
where ∂ S is the closed curve defining the current loop and the boundary of the surface S. Applying the first equality in Eq. (<ref>) yields
𝐅 =I∫_S ∇(𝐁· d𝐬),
where the term involving ∇·𝐁 has been omitted because the magnetic field is divergence-free and d𝐬 is given by (<ref>). We can expand (<ref>) to
𝐅=I∫_S[∂/∂ x(B_x dydz+B_y dzdx+B_z dxdy)𝐢+ ∂/∂ y(B_x dydz+B_y dzdx+B_z dxdy)𝐣+ ∂/∂ z(B_x dydz+B_y dzdx+B_z dxdy)𝐤].
In a uniform magnetic field, all the derivatives of 𝐁 are zero, and therefore the net force 𝐅 on the loop is zero. We also see that the net force has a non-zero component along an axis only if some component of the magnetic field is non-uniform along that direction. Let us see some particular cases of application of (<ref>).

§.§.§ Planar loop

In this case, we can select a frame of reference with its z axis orthogonal to the plane of the loop, as depicted in Fig. <ref>. Expression (<ref>) reduces to
𝐅=I∫_S(∂ B_z/∂ x𝐢+∂ B_z/∂ y𝐣+∂ B_z/∂ z𝐤)dxdy,
since dydz=dzdx=0 on the loop plane. Consequently, B_x and B_y do not affect the net force acting on the current loop. A numerical check of this planar-loop expression against the direct Lorentz-force line integral is sketched below.
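As a consistency check (ours, not part of the original paper), the planar-loop expression above can be compared numerically with the direct line integral I∮ d𝐫×𝐁 for a concrete divergence-free field. The particular choice 𝐁 = (-b_z x, 0, b_x x + b_y y + b_z z), built so that ∇·𝐁 = 0, is ours:

```python
import numpy as np

I, R = 2.0, 0.5                     # current and loop radius (arbitrary)
bx, by, bz = 0.3, -0.7, 1.1         # constant gradient of B_z

def B(x, y, z):
    # divergence-free: dBx/dx + dBy/dy + dBz/dz = -bz + 0 + bz = 0
    return np.stack([-bz * x, 0.0 * x, bx * x + by * y + bz * z], axis=-1)

# Direct Lorentz force: F = I * closed integral of dr x B (circle in z = 0).
t = np.linspace(0.0, 2.0 * np.pi, 100_001)
r = np.stack([R * np.cos(t), R * np.sin(t), 0.0 * t], axis=-1)
drdt = np.stack([-R * np.sin(t), R * np.cos(t), 0.0 * t], axis=-1)
F_line = I * np.trapz(np.cross(drdt, B(*r.T)), t, axis=0)

# Planar-loop formula: F = I * surface integral of grad(B_z) dx dy.
# Here grad(B_z) = (bx, by, bz) is constant, so the integral is
# simply the gradient times the disk area.
F_formula = I * np.pi * R**2 * np.array([bx, by, bz])

print(F_line, F_formula)            # the two results agree
```

Note that the in-plane force components come from the x and y gradients of B_z, while the z component enters through ∂ B_z/∂ z, supplied here by the divergence-free B_x.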
In the particular case that B_z depends only on z,
∂ B_z/∂ x=∂ B_z/∂ y=0, ∂ B_z/∂ z≠ 0,
we find the counterintuitive expression
𝐅=I∫_S∂ B_z/∂ z𝐤 dxdy,
indicating the emergence of a net magnetic force parallel to the z axis generated by non-uniformities of B_z along the z axis. This seems to contradict (<ref>), where variations along the z axis are not taken into account, since the integral, in this case, is carried out at fixed z. This apparent contradiction is resolved once we use the fact that the magnetic field is divergence-free, thus
∂ B_z/∂ z=-(∂ B_x/∂ x+∂ B_y/∂ y),
and (<ref>) is now given by
𝐅=-I∫_S (∂ B_x/∂ x+∂ B_y/∂ y)𝐤 dxdy.

Now consider the case of a planar loop placed in the xy plane, as before, but with B_z depending linearly on the coordinates:
B_z=b_x x+b_y y+b_z z.
After applying (<ref>) we have a magnetic force given by
𝐅=IS(b_x𝐢+b_y𝐣+b_z𝐤),
where S is the area of the surface enclosed by the loop.

§.§.§ Magnetic field with a fixed direction

If the magnetic field points only along the z direction, see Fig. <ref>, we obtain once again expression (<ref>), which is now valid even for a non-planar current loop. Since B_x=B_y=0, the terms dydz and dzdx are discarded, although they can take non-zero values in the general case, as the loop may not be flat. In this particular case we have
∂ B_z/∂ z=0,
since the magnetic field is divergence-free. We are left with the expression
𝐅=I∫_S(∂ B_z/∂ x𝐢+∂ B_z/∂ y𝐣)dxdy,
where we can observe that the magnetic force is perpendicular to the magnetic field and points in the direction of the magnetic field non-uniformities. In the particular case that the magnetic field has a linear dependence on the coordinates,
B_z=b_x x+b_y y,
the magnetic force has the form
𝐅=IS_net(b_x𝐢+b_y𝐣),
where S_net is the projected area of the surface enclosed by the loop onto the xy plane. In general, S_net is given by
S_net=∫_S𝐤· d𝐬.
When the enclosed surface is flat, S_net=𝐤·𝐒, where 𝐒 is the vector representing the surface enclosed by the loop, i.e., a vector orthogonal to the surface with a magnitude equal to the area of the surface.

§.§ Torque on a current-carrying loop

In textbooks, the torque on a flat rectangular electric current loop under the influence of a uniform magnetic field is usually calculated and then generalized so that it can be applied to any loop. This generalization consists of decomposing the current loop into a sum of infinitesimal current loops, so that the rectangular-loop formula is valid for any of those infinitesimal loops. Here, instead, we derive the general form of this expression for a uniform magnetic field 𝐁, regardless of the shape of the loop. We start by integrating the torque about the origin experienced by an infinitesimal segment of the loop. The force on an infinitesimal current element is given by (<ref>), and its torque, or moment of the magnetic force, d𝐌 about the origin of the frame of reference is
d𝐌 = 𝐫×(Id𝐫×𝐁) = I𝐫×(d𝐫×𝐁),
where 𝐫 is the position vector of the current segment. Using the corollary (<ref>), in the case of a uniform magnetic field the total magnetic torque on an arbitrary current loop can be written
𝐌 =I∮_∂ S𝐫×(d𝐫×𝐁)=-I∮_∂ S𝐫×(𝐁× d𝐫) =-I∫_S∇(𝐫·𝐁)× d𝐬- I∫_S[d𝐬·(∇×𝐫-𝐫×∇)]𝐁.
Taking into account that the magnetic field is uniform and that ∇×𝐫=0, the torque reduces to
𝐌 =-I∫_S∇(𝐫·𝐁)× d𝐬=-I∫_S ∇(xB_x+yB_y+zB_z)× d𝐬= I∫_S d𝐬×(B_x𝐢+B_y𝐣+B_z𝐤)=I(∫_S d𝐬)×𝐁.
For a planar loop this becomes
𝐌=I𝐒×𝐁,
where 𝐒 is the vector representing the surface enclosed by the loop. Equation (<ref>) is usually written as 𝐌=𝐦×𝐁, where 𝐦=I𝐒 receives the name of magnetic moment. A numerical verification of this planar result is sketched below.
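The planar result 𝐌 = I𝐒×𝐁 can be checked numerically (our sketch, not part of the original paper) against the defining line integral I∮𝐫×(d𝐫×𝐁) for a tilted circular loop in a uniform field:

```python
import numpy as np

I, R, alpha = 1.5, 0.8, np.deg2rad(35.0)   # current, radius, tilt angle
Bu = np.array([0.2, -0.4, 0.9])            # uniform magnetic field

# Circle of radius R whose normal is tilted by alpha about the x axis.
t = np.linspace(0.0, 2.0 * np.pi, 100_001)
r = np.stack([R * np.cos(t),
              R * np.sin(t) * np.cos(alpha),
              R * np.sin(t) * np.sin(alpha)], axis=-1)
drdt = np.stack([-R * np.sin(t),
                 R * np.cos(t) * np.cos(alpha),
                 R * np.cos(t) * np.sin(alpha)], axis=-1)

# Torque from the line integral: M = I * closed integral of r x (dr x B).
M_line = I * np.trapz(np.cross(r, np.cross(drdt, Bu)), t, axis=0)

# Planar-loop formula M = I S x B, with S along the tilted normal.
S = np.pi * R**2 * np.array([0.0, -np.sin(alpha), np.cos(alpha)])
M_formula = I * np.cross(S, Bu)

print(M_line, M_formula)                   # the two torques agree
```

Because the net force vanishes in a uniform field, the torque is independent of the reference point, so the comparison does not depend on the loop being centered at the origin.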
An interesting case occurs when considering a planar loop exposed to a non-uniform magnetic field of fixed direction, Fig. <ref>. In this situation, we can select a frame of reference with the z axis aligned with the magnetic field direction and with the xz plane perpendicular to the loop. The angle between the normal to the loop and the direction of 𝐁 is denoted α. Similar to the previous case, the magnetic field depends solely on the variables x and y:
𝐁=B_z(x,y)𝐤.
Using (<ref>) and after performing some algebraic manipulations, we derive the expression for the magnetic torque about the origin of the reference frame as follows:
𝐌=I∫_S { z∂ B_z/∂ ycos(α)𝐢 +[B_zsin(α)-z∂ B_z/∂ xcos(α)]𝐣+ [ (x∂ B_z/∂ y-y∂ B_z/∂ x)cos(α)-2z∂ B_z/∂ ysin(α) ]𝐤} ds,
where dxdy=cos(α) ds and dydz=sin(α) ds. It is worth noting that, in this case, the torque about a point is no longer independent of the position of that point.

When the loop is orthogonal to the magnetic field, α=0, the torque is given by
𝐌=I∫_S[ z∂ B_z/∂ y𝐢 -z∂ B_z/∂ x𝐣 + (x∂ B_z/∂ y-y∂ B_z/∂ x) 𝐤] dxdy.
Eq. (<ref>) indicates that the x and y components of the torque depend on the non-uniformities of the magnetic field in the y and x directions, respectively, while the z component depends on both.

On the other hand, when the loop is parallel to the magnetic field, α=90^∘, expression (<ref>) becomes
𝐌=I∫_S (B_z 𝐣 +2z∂ B_z/∂ y𝐤)dydz,
demonstrating that the torque now lacks a component in the x direction, and the y component remains independent of the location chosen as the point about which the torque is calculated.

Eqs. (<ref>) and (<ref>) provide good qualitative insight into how the shape of the magnetic field can influence the magnetic torque on the current loop, and they give hints about how these identities can offer a comprehensive framework for analyzing the behavior of current loops in different situations.

§ CONCLUSIONS

We have presented and demonstrated several theorems, (<ref>–<ref>), which are not commonly found in standard textbooks on the subject. Notably, a couple of these theorems, namely (<ref>) and (<ref>), have not been found by the author in existing publications. Hence, it is plausible that this paper marks the first publication of these particular identities. The practical utility of these theorems has been demonstrated through their application to problems in electromagnetism, where it is shown how their use can offer a more profound understanding of the physics, shedding light on aspects that may remain hidden when employing more conventional approaches.

§ ACKNOWLEDGEMENTS

I would like to express my gratitude to Antonio Arocas for his meticulous review of the manuscript and his invaluable assistance in researching prior publications of these theorems in the literature on the subject. I also acknowledge support from the grant PID2020-120052GB-I00 financed by MCIN/AEI/10.13039/501100011033.
http://arxiv.org/abs/2312.17268v1
{ "authors": [ "Antonio Perez-Garrido" ], "categories": [ "physics.class-ph" ], "primary_category": "physics.class-ph", "published": "20231226202603", "title": "Recovering seldom-used theorems of vector calculus and their application to problems of electromagnetism" }
Autonomous Docking Method via Non-linear Model Predictive Control

Roni Permana Saputra, Research Center for Smart Mechatronics, National Research and Innovation Agency, Bandung, Indonesia. Email: [email protected]. ORCID: https://orcid.org/0000-0001-6989-8830
Eko Joni Pristianto, Research Center for Telecommunication, National Research and Innovation Agency, Bandung, Indonesia. Email: [email protected]
Midriem Mirdanies, Research Center for Smart Mechatronics, National Research and Innovation Agency, Bandung, Indonesia. Email: [email protected]
Dayat Kurniawan, Research Center for Telecommunication, National Research and Innovation Agency, Bandung, Indonesia. Email: [email protected]
January 14, 2024
==================================================================================================================================

The recycling of waste electrical and electronic equipment is an essential tool in allowing for a circular economy, presenting the potential for significant environmental and economic gain. However, traditional material separation techniques, based on physical and chemical processes, require substantial investment and do not apply to all cases. In this work, we investigate using an image classification neural network as a potential means to control an automated material separation process in treating smartphone waste, acting as a more efficient, less costly, and more widely applicable alternative to existing tools. We produced a dataset with 1,127 images of pyrolyzed smartphone components, which was then used to train and assess a VGG-16 image classification model. The model achieved 83.33% accuracy, lending credence to the viability of using such a neural network in material separation.
§ INTRODUCTION

In a report released by the United Nations University (UNU) in 2020, the global generation of waste electrical and electronic equipment (WEEE) was estimated at 53.6 million tons annually, or 7.3 kg per capita, with WEEE being the fastest-growing solid waste stream in recent years (up 9.2 million tons since 2014, and projected to reach 74.7 million tons annually by 2030) <cit.>. The context of WEEE generation also includes a high degree of informality in end-of-life management, with only 17.4% being properly documented and disposed of through formal means, primarily due to the technological challenges in collection and recycling faced by the actors involved in this process <cit.>.

Based on this scenario, the report emphasizes that recycling is a fundamental strategy for minimizing the environmental and societal impacts of WEEE generation, as it is an essential component of the 2030 Agenda for Sustainable Development under the following United Nations Sustainable Development Goals: Goal 3 (Good Health and Well-being), Goal 6 (Clean Water and Sanitation), Goal 8 (Decent Work and Economic Growth), Goal 11 (Sustainable Cities and Communities), Goal 12 (Responsible Consumption and Production), and Goal 14 (Life Below Water).

Over the past decade, scientific efforts have concentrated on finding recycling solutions for WEEE. Typically, methods established in the metallurgical industry are adapted for WEEE processing. This is the case of the company Umicore, considered a global benchmark in the field, whose processes are based on copper and lead metallurgy, adding only 15% of WEEE to the primary ores and recovering only the most precious metals, such as gold and silver <cit.>. At the base of the recycling chain, the collection and handling of WEEE are still carried out with inefficient methods, with a predominance of manual labor and massive wastage of valuable components <cit.>.

The ongoing work presented here stems from research efforts to increase the efficiency of processes and the valorization of WEEE (in this case, applied to smartphones) in the early stages of recycling, where waste is managed by small and medium-sized recyclers lacking appropriate technology for these tasks. One of the main objectives is to minimize the need for manual handling of WEEE by automating the processes. By improving the efficiency of the early beneficiation stages (collection, sorting, and pretreatments), the downstream metallurgical processes are expected to be positively impacted in terms of material recovery. The project begins with processing whole smartphones in pyrolysis furnaces (degradation of polymers in the absence of oxygen, generating byproducts with high energy value) to open the devices and release the components that will subsequently be separated. This approach considers all the components in the recycling chain, and no material is wasted. Fig. <ref> shows degraded smartphones resulting from pyrolysis.

Following the pyrolysis process, a critical step is separating batteries from the other electronic components (screens, printed circuit boards (PCBs), and metal parts), because the methods for recovering materials from batteries differ from those used for the other components. Smartphone batteries contain high concentrations of lithium, which is considered a critical and strategic material for many nations and companies due to the limited availability of primary ores and the limited international supply <cit.>.
The recycling of batteries has emerged as a strategy to mitigate this problem, but specific recycling methods are needed. If the batteries are not separated from the other WEEE, the lithium is lost in the slag of typical pyrometallurgical processes, or diluted among other elements, hindering hydrometallurgical approaches <cit.> <cit.>. Here, the challenge of separating batteries is addressed through a machine learning strategy for image-based component separation, which has the potential to achieve efficient separation without hindering the subsequent metallurgical recycling processes. The general idea is to allow the industry to perform these activities automatically, in which a detector and a mechanical sorting device could be coupled to a conveyor belt to separate the components.

The rest of this paper is organized as follows. Section <ref> reviews the few related works that aim to sort waste using automated learning-based solutions. The proposed methodology is described in Section <ref>. The results are presented and discussed in Section <ref>. Section <ref> concludes this paper and indicates future work.

§ RELATED WORKS

Image-based waste separation has gained traction recently <cit.>, with predominant applications in separating urban waste such as paper, plastic, glass, and metals. Lu and Chen <cit.>, in a literature review from 2022, found eight studies applying artificial neural networks to waste separation. In most of the works found, traditional backbones are trained and applied to the waste separation task. Bobulski and Kubanek <cit.> developed a convolutional neural network (CNN) to segregate different types of plastics for recycling, achieving an accuracy of over 99%. Zhang et al. <cit.> used a DenseNet169 <cit.> to segregate different kinds of household waste (glass, paper, textiles, metals, and plastics) and achieved 82% accuracy in their tests.

For WEEE applications, some field-specific studies were published between 2022 and 2023. Yang et al. <cit.> applied the YOLOv4 network <cit.> to classify WEEE that potentially have internal batteries, such as laptops and printers, apart from those that do not, achieving an accuracy of 90.1%. Lu et al. <cit.> used the YOLOv3 network <cit.> to detect and separate previously disconnected electronic components from PCBs, such as capacitors and transistors, with accuracies exceeding 90%.

To the best of our knowledge, the present work is the first to use CNNs to classify dismantled WEEE components with a focus on recycling. We note that sorting pyrolyzed components turns out to be a challenging task, since the individual components lack characteristic texture and shape. Separating these components (screens, PCBs, metals, and batteries) is a critical step in the recycling routes of most WEEE, and automating this step, besides being an operational and academic innovation, can generate significant increases in profitability and material-recovery efficiency. Specifically, we aim to achieve high accuracy in battery separation, creating a material stream concentrated in batteries; in other words, we want to prevent other components from being misclassified as batteries and to prevent batteries from being misclassified as other components.

§ METHODOLOGY

To carry out the project associated with this paper, 123 smartphones were gathered through collection campaigns and with the support of partner companies and research projects. In addition, 27 detached smartphone batteries were gathered with the support of partners.
No restrictions were imposed on any characteristics of the devices, except for the requirement that they adhere to the smartphone form factor. The following sections provide a detailed overview of the methodological steps applied here, which can be observed in Fig. <ref>.

§.§ Smartphone pyrolysis

The smartphones were processed in a batch electric resistance furnace. The process conditions were as follows: nitrogen atmosphere, temperature of 600°C, ambient pressure, heating rate of 300°C/h, and residence time of 1 hour. Fig. <ref> provides examples of the materials that result from the process. The material was also submitted to screening with a 2 cm opening to remove small particles. After the battery separation, these small particles can be sent back to the flow of the other components (PCBs, glass, and screens).

The technique of pyrolysis, applied to WEEE, generates various benefits for the downstream recycling chain. Through the degradation of polymers, encapsulated metals are liberated and the mass concentration of valuable materials increases, facilitating the subsequent chemical attacks. In addition, magnets are demagnetized, the total mass to be processed decreases, metals are kept in their reduced form (preventing oxidation), and liquids and gases with energy value are generated. Pyrolysis is a hotspot in recent research, being applied to many WEEE streams as the first step of recycling routes. Pyrolysis furnaces are commonly used in the treatment of waste, especially organic waste, and are expected to conquer more space in the WEEE recycling industry in the medium term.

§.§ Image capture

The image capture took place with the material randomly selected. The decision was made to position various components in a single image to facilitate the future study of the image-based detection task; the present article, however, tackles the multiclass classification problem. In total, 300 colored images of 3072 × 3072 pixels were captured. Each image contains at least one battery, one piece of metal or PCB, and another random piece. The pieces were randomly placed on a background prepared in gray, black, or white (in equal proportions) under variable ambient lighting (no artificial illumination was used). The images were captured perpendicularly, roughly 50 cm from the background. Fig. <ref> shows an example image captured on a white background with one battery, one PCB, a small piece of glass, and another small metal piece. After capturing the images, the components were flipped to display the reverse side, their positions were swapped, and another shot was captured. Thus, two images were captured for each component, one per face, without duplication. After capturing both images, the components were stored separately to prevent repetition.

§.§ Image annotation and pre-processing

The images were annotated using the Roboflow platform <cit.>. All components in each image were manually outlined with a polygonal drawing tool. The components were then assigned one of four classes: Metal Piece, Battery, Glass, or Printed Circuit Board (PCB). These four classes are omnipresent in every smartphone; indeed, after pyrolysis, they are visually the only ones that remain at coarse granulometry, together with the batteries. Very small components in some pictures were not annotated, as they were considered to contribute little, or even negatively, to learning. They were instead left as part of the background, acting as visual noise elements that will also be present in real-life applications. The annotated images were exported in an oriented bounding box (OBB) format <cit.>. Then, for each component, a square, horizontal bounding box was circumscribed about the component's original OBB, and the contents of these square BBs were exported and resized to 500×500 pixels, composing a dataset of individual component images. Fig. <ref> illustrates this process for the components shown in Fig. <ref>; a minimal sketch of the cropping step is given below.
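The square-crop step can be sketched as follows (our reconstruction, not the authors' code; the OBB is assumed to be given as its four corner points, and PIL pads crops that fall outside the image with black):

```python
import numpy as np
from PIL import Image

def crop_square(image, obb_points, out_size=500):
    """Crop the square horizontal box circumscribed about an OBB and resize.

    image:      PIL.Image of the full scene.
    obb_points: (4, 2) array with the (x, y) corners of the component's OBB.
    """
    pts = np.asarray(obb_points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    side = int(np.ceil(max(x_max - x_min, y_max - y_min)))  # square side
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0   # box center
    left, top = int(round(cx - side / 2.0)), int(round(cy - side / 2.0))
    crop = image.crop((left, top, left + side, top + side))
    return crop.resize((out_size, out_size), Image.BILINEAR)
```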
In total, the dataset contained 1,127 images. The images were split in a 70:20:10 ratio between training, validation, and test sets, respectively, with the split happening after the cropping process, given the uneven distribution of different components within the original images. The distribution of these images among the different classes and subsets is shown in Table <ref>. Note that, due to the small size of the dataset, we avoided discarding any component images, despite the uneven quantities of the different materials present in the images. Because of this, the entire dataset is slightly unbalanced, presenting 19.25%, 21.83%, 26.62%, and 32.30% of metal piece, PCB, battery, and glass instances, respectively.

§.§ Neural network training

For the desired image classification model, the VGG-16 <cit.> architecture was used as a backbone, chosen because its shallow depth is appropriate for the small dataset used <cit.>. We only adjust the last VGG-16 layer to the desired number of classes. The network we used was pre-trained on the ImageNet dataset <cit.>. Our ablation study assessed the impact of pre-training, which significantly improves the results, as discussed later.

To account for the small dataset, data augmentation was used. A random combination of transformations was applied to each image at each training epoch. More precisely, we considered:

* Rotation, within ±45°;
* Shear, within ±5°;
* Zoom, up to 20%;
* Channel shifts, within ±10;
* Horizontal flips;
* Vertical flips.

The shear, zoom, and channel shift transformations were constrained within narrow value ranges so as not to deform the images to the point of impairing learning. The training was conducted on an NVIDIA 4070 Ti GPU. Training was set to run for a maximum of 100 epochs, with early stopping enabled with a patience of 10 epochs. This condition monitored the validation accuracy so as to export the model from the epoch where it achieved its highest value within that time frame. A batch size of 32 was used, with categorical cross-entropy as the loss function and the Adam optimizer with a learning rate of 10^-3. A sketch of this training configuration is given below.
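For concreteness, the configuration above can be reconstructed roughly as follows (our sketch, not the authors' code; the directory names are placeholders, and attaching a fresh dense layer to a headless VGG-16 is one way to realize the "replace the last layer" step):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 4  # Metal Piece, Battery, Glass, PCB

# Augmentations listed in the text, with the stated ranges.
train_gen = ImageDataGenerator(
    rotation_range=45, shear_range=5, zoom_range=0.2,
    channel_shift_range=10, horizontal_flip=True, vertical_flip=True,
    rescale=1.0 / 255)

# VGG-16 backbone pre-trained on ImageNet, followed by a new output layer.
base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(500, 500, 3))
model = models.Sequential([base,
                           layers.Dense(NUM_CLASSES, activation="softmax")])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Early stopping on validation accuracy with a patience of 10 epochs,
# keeping the weights of the best epoch.
early = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=10, restore_best_weights=True)

model.fit(train_gen.flow_from_directory("train/", target_size=(500, 500),
                                        batch_size=32,
                                        class_mode="categorical"),
          validation_data=ImageDataGenerator(rescale=1.0 / 255)
              .flow_from_directory("val/", target_size=(500, 500),
                                   batch_size=32, class_mode="categorical"),
          epochs=100, callbacks=[early])
```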
§ RESULTS AND DISCUSSION

§.§ Training results

In this paper, we consider the overall accuracy and the per-class precision and recall as figures of merit. The training accuracy is calculated as
A = N_True/N_Total,
where N_True is the number of correct predictions by the model and N_Total is the total number of predictions made by the model. The precision of a given class is calculated as
P = N_TP/N_TP + N_FP
and the recall as
R = N_TP/N_TP + N_FN,
where N_TP is the number of true positives guessed by the model (that is, elements of the class in question which were correctly labeled as such), N_FP is the number of false positives (elements incorrectly labeled as belonging to that class), and N_FN is the number of false negatives (elements belonging to that class which were labeled as something else).

The resulting model was trained for 20 epochs (due to the early stopping), with epoch 10 yielding the best results. It achieved a training accuracy of 82.49% and a loss of 1.3541, while the validation accuracy reached 79.28%, with a validation loss of 1.9309. Fig. <ref> shows the accuracy and loss curves over time for the training and validation metrics.

Running an inference test with the model on the test dataset resulted in a mean precision of 77.29% and a mean recall of 77.35%. The confusion matrix of this test is given in Table <ref>, while the precision and recall metrics for each class are shown in Table <ref>. These results show that the network did gain the ability to generalize what it learned, and they show promise for even more robust results from this classification network if trained on a larger dataset, given the small size of the one used. It should also be noted that the high contrast in precision and recall values between classes is likely attributable to the unbalanced nature of the dataset, seeing as the lowest values are associated with the two least represented classes (Metal Piece and PCB). Expanding the dataset in subsequent works would also make it possible to discard images so as to train the model on a balanced dataset.

Regarding the materials flow for recycling, the model's performance was surprisingly successful, even more so considering that many opportunities for improvement and alternative tests are still to be considered. It is essential to highlight that the study's primary goal is to separate batteries; thus, the precision (90.32%) and recall (93.33%) for the battery class determine the study's success. Of the thirty tested batteries, twenty-eight were correctly identified as batteries and two were classified as PCBs (93.33% recall). As stated, batteries must be separated to recover lithium through specific metallurgical processes <cit.>; batteries falling into the flow of PCBs preclude the recovery of this element. In the flow of the battery class, there were 28 batteries, two metal pieces, and one PCB (precision of 90.32%). The two metal pieces in the flow of batteries do not hinder the recyclability of either of the two materials: indeed, batteries contain a metal casing <cit.>, easily treated in the typical battery recycling route. The one PCB in the flow of batteries means that the valuable materials present in PCBs (gold, silver, and others) would not be recovered in their typical recycling route. Finally, the model efficiently removed the glass from the other material flows (83.33% recall). This is an important outcome, considering that glass is not a valuable material and only increases the mass of the materials needing treatment <cit.>. The per-class figures follow directly from the confusion matrix, as sketched below.
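For reference, per-class precision and recall can be computed from any confusion matrix with a few lines of NumPy. In the hypothetical matrix below (not the paper's actual table), the battery row and column are chosen to match the battery counts quoted above, while the remaining entries are made up:

```python
import numpy as np

# Rows = true class, columns = predicted class.
# Class order: Metal Piece, Battery, Glass, PCB.
cm = np.array([[15, 2, 3, 2],
               [0, 28, 0, 2],     # 28/30 batteries correct, 2 -> PCB
               [4, 0, 30, 2],
               [3, 1, 2, 19]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)    # TP / (TP + FP), per predicted column
recall = tp / cm.sum(axis=1)       # TP / (TP + FN), per true row
accuracy = tp.sum() / cm.sum()

for name, p, r in zip(["Metal Piece", "Battery", "Glass", "PCB"],
                      precision, recall):
    print(f"{name:12s}  precision={p:6.2%}  recall={r:6.2%}")
print(f"overall accuracy = {accuracy:.2%}")
# Battery: precision = 28/31 = 90.32%, recall = 28/30 = 93.33%
```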
§.§ Ablation studies

§.§.§ Pre-training

Because the background of the images in our dataset is quite simple (a nearly smooth white, gray, or black surface) and the pyrolyzed materials do not present color variation, we questioned the necessity of pre-training. It is reasonable to assume that pre-training the model on large datasets such as ImageNet <cit.> might be excessive for this problem setup. As such, to verify the validity of using a pre-trained network, a version of the model was trained with randomly initialized weights to compare its performance with the pre-trained model. All the training conditions were identical between models other than the initial weights. The randomly initialized model trained for 32 epochs, achieving its best results at epoch 22, with a training accuracy of 46.98% and loss of 1.2000, and a validation accuracy of 45.95% and loss of 1.2115. The training graphs are shown in Fig. <ref>, with the accuracy graph showing that its learning process was inconsistent.

In the inference evaluation with the test dataset, the model had a mean precision of 38.71% and a mean recall of 40.49%. In fact, it did not label a single metal piece component correctly, as can be seen in Table <ref>. The precision and recall for each class are shown in Table <ref>. Thus, the results with the randomly initialized network are drastically worse than those of the pre-trained network, justifying the choice to use a pre-trained model.

§.§.§ Binary classification

Because the work mainly emphasizes separating batteries from other materials, we also questioned whether learning could be optimized for detecting only what is and is not a battery, especially considering that the battery class has the most visually consistent appearance. For that reason, we trained a version of the model with only two classes: Battery and Other (including PCBs, glass, and metal pieces). This model was also pre-trained on ImageNet, and all training conditions except for the classes were equal to those of the first model. The resulting model trained for 20 epochs, with the best one being epoch 10. It had a training accuracy of 94.71% and loss of 0.2481, with a validation accuracy of 91.89% and loss of 0.2816. The confusion matrix is shown in Table <ref>, and the class-wise precision and recall are shown in Table <ref>. The training graphs are shown in Fig. <ref>.

The drawback of the binary approach is that it is not possible to understand which component is contaminating the flow of batteries and, therefore, to improve the performance of the model based on this information. As discussed in Section <ref>, if the only class being incorrectly sorted alongside batteries is metal pieces, then there is no impairment to the recycling process. However, pieces of glass would have the potential to hinder the recyclability of the batteries, and PCBs would mean the loss of valuable elements.

§ CONCLUSION

In this work, we present the possibility of using an image classification neural network as a cheaper and more efficient alternative for separating materials in the recycling of WEEE. We created an image dataset with 300 pictures of assorted components from pyrolyzed smartphones, along with an adapted version of this dataset for image classification containing 1,127 individual images of these components. We then trained an image classification model on this dataset to differentiate between metal piece, battery, PCB, and glass components, achieving an overall accuracy of 82.49%.

The experimental results show promise for using a neural network to separate WEEE, especially considering the small dataset used. Future works could expand the goal of this study to a detection problem, creating a model capable of both classifying and locating multiple components at once in real time, which could further allow a compact and efficient automated separation system. The relatively high accuracies obtained with a lightweight network could also feasibly allow low-cost yet accurate embedded systems to be developed for this purpose.
Future works could also look into testing other models, both alternative architectures and possibly detection models, as well as investigating the impact of the unbalanced dataset and ways to mitigate it, such as class-specific augmentations or weighted loss functions.

Even considering the study's limitations, the model achieved its purpose (the separation of batteries) with high accuracy, showing significant potential to be implemented in recycling routes and to increase their effectiveness in the valorization of the residues. This approach can be implemented in a recycling line coupled with a mechanical sorting system, in which a camera pointing perpendicularly at a conveyor belt feeds a detection system that instructs the mechanical devices to separate the target components. It is important to highlight that most electronics usually contain the same components as smartphones (metal pieces, glass, PCBs, and batteries); thus, this approach has the potential to be applied to other types of devices, such as laptops and tablets. In addition to the simple separation of batteries, further studies can be carried out to refine the separation of the other classes of materials with high accuracy. Finally, the study produced a dataset that could serve as a basis for the future development of models for similar cases. The fact that the dataset was annotated with polygonal masks also allows the annotations to be exported in different formats for different purposes, such as fine-grained semantic segmentation. We also intend to add more images to the dataset and, ultimately, make it public.

§ ACKNOWLEDGMENTS

We acknowledge the financial support from the Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul (FAPERGS - Brazil), the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq - Brazil) under the project UFRGS 43545, and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES - Brazil) under the project PROEX 88887.501183/2020-00.
http://arxiv.org/abs/2312.16626v1
{ "authors": [ "Álvaro G. Becker", "Marcelo P. Cenci", "Thiago L. T. da Silveira", "Hugo M. Veit" ], "categories": [ "cs.CV", "cs.AI", "cs.LG" ], "primary_category": "cs.CV", "published": "20231227161615", "title": "Sorting of Smartphone Components for Recycling Through Convolutional Neural Networks" }
Data privacy and data silos pose nontrivial and greatly challenging problems in many real-world applications. Federated learning is a decentralized approach to training models across multiple local clients without the exchange of raw data from client devices to global servers. However, existing works focus on a static data environment and ignore continual learning from streaming data with incremental tasks. Federated Continual Learning (FCL) is an emerging paradigm to address model learning in both federated and continual learning environments. The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain the knowledge of previous tasks while learning on new ones. In this work, we first delineate federated learning and continual learning and then discuss their integration, i.e., FCL, and in particular FCL via knowledge fusion. In summary, our motivations are four-fold: we (1) raise a fundamental problem called "spatial-temporal catastrophic forgetting" and evaluate its impact on performance using a well-known method called federated averaging (FedAvg), (2) integrate most of the existing FCL methods into two generic frameworks, namely synchronous FCL and asynchronous FCL, (3) categorize a large number of methods according to the mechanism involved in knowledge fusion, and finally (4) offer an outlook on the future work of FCL.

Index Terms: Federated Learning, Continual Learning, Federated Continual Learning, Knowledge Fusion, Spatial-Temporal Catastrophic Forgetting.

Federated Continual Learning via Knowledge Fusion: A Survey

Xin Yang, Member, IEEE, Hao Yu, Xin Gao, Hao Wang, Junbo Zhang, Senior Member, IEEE, Tianrui Li, Senior Member, IEEE

Xin Yang, Hao Yu, and Xin Gao are with the School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu 611130, China. E-mail: [email protected], [email protected], [email protected]. Hao Wang is with Nanyang Technological University, Singapore. E-mail: [email protected] (primary) or [email protected]. Junbo Zhang is with JD iCity, JD Technology, Beijing, China, and JD Intelligent Cities Research & Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China. E-mail: [email protected]. Tianrui Li is with the School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China. E-mail: [email protected].
===============================================================================================================================
However, traditional deep learning algorithms often require large amounts of training data and centralized training, which are usually impractical or even impossible in practical situations. For example, in the case of mobile devices, it is difficult to collect and transfer large amounts of data to a central server with limited bandwidth <cit.>. Centralized training is also infeasible due to data privacy and data silos because of the information security, related laws and regulations, and intensive competition among technology giants <cit.>. In recent years, federated learning (FL) has emerged as a positive response to the increasing need for privacy and other concerns in machine learning applications <cit.>. This newly emerging paradigm deploys multiple devices to train a global model collaboratively by uploading their models to a server while keeping the private data locally, as opposed to the traditional centralized data storage and model training approach. FL mainly involves privacy preservation, knowledge transfer, and collaboration among data owners, which are essential for many real-world applications of AI. After several years of development, FL has produced promising results, particularly in industries that handle sensitive data such as finance, healthcare, electronic communication and government affairs <cit.>. Despite its numerous advantages, FL confronts significant challenges. The first challenge is the lack of dynamism. The setting assumptions of most existing FL methods are somewhat strong, assuming all data and classes should be known prior and will be unchanged forever. It contradicts the dynamic nature of the real world, where data are collected in streams and the data could form sequential tasks.However, traditional FL would suffer from terribly temporal catastrophic forgetting (TemporalCF) when local models learning on new tasks. TemporalCF is a major challenge in Continual Learning (CL) <cit.>, which refers to changes in crucial parameters of a single model when learning on consecutive tasks, resulting in poor performance on previous tasks <cit.>. Apparently, FL would certainly encounter TemporalCF if each client learns on a sequence of tasks from a private local data stream.Another challenge of FL is data heterogeneity <cit.>, also known as non-independent and identically distributed (Non-IID) data. It refers to the differences in data distributions, feature spaces and labels across clients participating in the federated training process, aroused by variations in the data sources, data collection methods, data quality, etc. Data heterogeneity results in diverse knowledge extracted by local models. Knowledge of a model is often represented by parameters. The more different the data, the greater the parameter divergence. Simply aggregating these local models, like averaging parameters, can lead to critical parameters for certain local tasks being overwritten. Such that the aggregated global model may perform worse than the local models on the local test set <cit.>. We call this phenomenon spatial catastrophic forgetting (SpatialCF). It is worth noting that SpatialCF is a new concept we introduce in this paper, which is analogical to the TemporalCF.Federated Continual Learning (FCL) <cit.> breaks the static limitations of traditional federated learning on each client learning a task sequence. Beyond FL and CL, FCL has a fundamental problem called spatial-temporal catastrophic forgetting. 
On the local side, clients need to overcome TemporalCF induced by learning new tasks. On the server side, the server must address SpatialCF caused by aggregating different local models. Specifically, after completing the training of a task, the server aggregates local models into a global model; the clients then use the global model as the foundation for continual learning of the next task. Therefore, SpatialCF may exacerbate TemporalCF.

In this paper, we examine several FCL algorithms in depth to explore how they overcome spatial catastrophic forgetting, temporal catastrophic forgetting, or both. We evaluate them with extensive experiments and show that the training process of FCL is essentially a form of knowledge fusion: fusing knowledge of previous tasks with that of the current task, and fusing heterogeneous knowledge from different clients. Specifically, since uploading the entire set of local parameters or gradients to the server may cause privacy leakage <cit.> and slow down aggregation due to the large communication cost <cit.>, FCL methods typically extract knowledge during local training, upload only this extracted local knowledge, and explore more efficient model aggregation strategies.

The main contributions of this paper are summarized as follows:

* We discuss FCL thoroughly in this work. We raise the problem of spatial-temporal CF. We further define two evaluation metrics and conduct extensive experiments to evaluate the effect of spatial-temporal CF on FCL.

* We propose two unified frameworks for FCL (i.e., synchronous and asynchronous FCL). The two frameworks can integrate most existing methods and lay out two pipelines for future research.

* We present a comprehensive survey of existing FCL methods by categorizing them into seven forms of knowledge fusion, namely rehearsal, clustering, all gradients/parameters, parameter isolation, dynamic architecture, prototype and knowledge distillation.

The rest of this paper is organized as follows. In Sec. <ref>, we provide an overview of FL and CL, and discuss the motivation for their integration, i.e., FCL. In Sec. <ref>, we introduce the problem of spatial-temporal catastrophic forgetting and demonstrate the weaknesses of traditional FL on this problem through experiments. In Sec. <ref>, we present two generic FCL frameworks: synchronous FCL and asynchronous FCL. In Sec. <ref>, we delve into various FCL methods and categorize them according to the forms of knowledge fusion, namely rehearsal, clustering, all gradients/parameters, parameter isolation, dynamic architecture, prototype and knowledge distillation. In Sec. <ref>, we briefly summarize our survey and highlight potential research directions for future work in FCL.

§ FEDERATED AND CONTINUAL LEARNING: AN OVERVIEW

§.§ Federated Learning

In recent years, the rapid development of network technology and AI has brought about growing concern regarding the potential risks of privacy disclosure. As citizens and companies become increasingly aware of personal privacy and data ownership, governments have taken measures to address privacy risks and security threats <cit.>. Notably, the European Union issued the General Data Protection Regulation <cit.> in 2018, the first bill on data privacy protection. In May 2019, San Francisco banned the use of face recognition by government agencies to eliminate the hidden dangers posed by the technology <cit.>.
In addition to laws and regulations, information asymmetry, an important factor in competitiveness, also works against the need of traditional deep learning for centralized data storage and processing. On the one hand, much data is of poor quality and lacks labels; on the other hand, data are scattered across various data subjects and enterprises, creating data silos that cannot be connected <cit.>. In brief, the traditional centralized machine learning paradigm faces an insoluble dilemma between protecting data privacy and breaking data silos.

FL has been proposed as a potential solution to this dilemma <cit.>. FL is a distributed learning approach in which multiple devices or nodes collaboratively train a shared model without sharing their raw data <cit.>. It originated from FedAvg <cit.>, in which each client uses local data to train its own local model; the local models are then aggregated into a global model by a weighted average based on each client's proportion of the data, and the global model is distributed back to the participating clients as the initial model for the next communication round. The core idea of FL is to break down data silos while preventing privacy breaches by fusing the extracted knowledge, instead of the raw data, of edge devices <cit.>.

Since then, various FL algorithms have been proposed. However, recent studies have shown that uploading full gradients to the server fails to protect the raw data. <cit.> proposes Deep Leakage from Gradients (DLG), an algorithm that can recover original local data from publicly shared gradients. <cit.> shows that the ground-truth labels can be extracted with 100% accuracy under a DLG attack. <cit.> demonstrates the feasibility of reconstructing images at high resolution from knowledge of the uploaded parameter gradients. Other forms of attack also threaten the security and privacy of FL. Attacks conducted during the inference phase are called evasion or exploratory attacks <cit.>. <cit.> first devised an active inference attack against deep FL models, the Generative Adversarial Network (GAN) attack, which allows an adversarial participant to train a GAN that generates prototypical samples of the targeted private training data. To further enhance privacy protection, researchers have explored alternative forms of information to exchange instead of the original gradients. We summarize such alternatives in FCL settings in a later section.

According to the distribution characteristics of the data, existing FL methods can be categorized into horizontal federated learning (HFL), vertical federated learning (VFL), and federated transfer learning (FTL) <cit.>. HFL refers to the case where clients share a similar feature space but differ in sample space, which occurs in cross-silo and cross-device scenarios. VFL refers to the case where clients share a similar sample space but differ in feature space, which typically occurs in the cross-silo scenario.
FTL refers to situations where neither the feature spaces nor the sample spaces overlap substantially, requiring transfer learning approaches to assist FL training.

It is worth mentioning that although FL has achieved remarkable success in addressing data silo issues and privacy concerns, and is recognized as a promising direction, data heterogeneity has become a major bottleneck of this paradigm. Data heterogeneity is an inherent challenge in FL, as the data distribution across different clients is typically Non-IID due to differences in data sources, data collection methods, and data quality. Local models trained with heterogeneous data are themselves heterogeneous, which leads to poor performance when these models are aggregated into a global model; after the global model is distributed to clients, its performance may not match that of some local models <cit.>. We coin the term “SpatialCF” for this phenomenon because of its intrinsic similarity to the CL scenario: only a portion of the data is available. Specifically, catastrophic forgetting in CL is due to the temporal unavailability of data, while heterogeneity in FL is caused by the spatial unavailability of data.

§.§ Continual Learning

In reality, the entire dataset is not available all at once but rather arrives continually in the form of task streams. Sequential training on new tasks often overwrites the parameters of the model, leading to a significant decline in performance on previous tasks <cit.>. The goal of CL is to alleviate this forgetting by integrating the knowledge learned from previous tasks and the current task <cit.>. Several techniques have been proposed to address the challenges of CL, including regularization, parameter separation, dynamic architecture, replay, and knowledge distillation <cit.>. Regularization techniques, such as Elastic Weight Consolidation (EWC) <cit.>, constrain changes to important parameters to prevent catastrophic forgetting. Parameter separation techniques, such as PackNet <cit.> and PathNet <cit.>, activate only the parameters relevant to a given task and keep the network the same in scale. Dynamic architecture methods, such as Progressive Neural Networks (PNNs) <cit.> and Expert Gate <cit.>, add parameters for new tasks while leaving the old parameters unchanged. Some studies <cit.> consider parameter separation and dynamic architecture techniques to belong to the same category, namely parameter isolation, as both reduce interference between new and old tasks by isolating parameters; the only difference is whether the structure of the network is dynamic or fixed. Replay-based methods, such as Experience Replay (ER) <cit.> and Generative Replay (GR) <cit.>, store and replay previously seen data to mitigate forgetting. Knowledge distillation methods, such as Knowledge Transfer via Distillation (KTD) <cit.> and Learning without Forgetting (LwF) <cit.>, periodically transfer knowledge from previously learned models to new models to mitigate forgetting.

We believe the primary challenge faced by FL is comparable to that of CL: integrating knowledge learned from multiple clients' data. The purpose of CL is to integrate knowledge from different times into a single model, carried out sequentially; FL, in contrast, aims to fuse knowledge from multiple clients into one global model, carried out in parallel. Both share the goal of fusing diverse knowledge. To some extent, FL can be seen as a form of traditional CL across different machines.
Evidently, CL methods can help resolve FL problems. Based on the analysis above, many researchers have adapted CL methods to FL settings and achieved quite good results. Shoham et al. <cit.> present an adaptation of EWC <cit.>, a classic regularization-based approach in CL, to the FL scenario and call it FedCurv. By adding a penalty term to the loss function, all local models are compelled to converge toward a shared optimum. Xin et al. propose a new framework called FedCL <cit.>. FedCL follows the parameter-regularization continual training paradigm on clients, i.e., it penalizes changes to the important parameters of the global model. Nevertheless, it differs substantially from FedCurv: it estimates the importance weights on a proxy dataset on the server and then distributes them to the clients, instead of estimating the importance weights of model parameters on the clients and exchanging them as in <cit.>, which incurs at least twice the extra communication cost.

SpatialCF in FL and the close relationship between FL and CL have also been observed and demonstrated by other researchers. Casado et al. <cit.> creatively modify the traditional FedAvg algorithm by adopting concept detection and concept drift adaptation methods from CL and show that their extended method outperforms the original. Criado et al. <cit.> dig deeply into the connection between concept drift in CL and Non-IID data in FL, and provide an enlightening survey on handling the Non-IID problem. Usmanova et al. <cit.> show that the problem of catastrophic forgetting is critical in a pervasive computing application using FL. It is evident that the close relationship between CL and FL has been confirmed.

After years of development, research on overcoming catastrophic forgetting in FL has become increasingly in-depth <cit.>. Since data do not arrive at each client all at once, TemporalCF also arises. Besides, the task sequences of clients are heterogeneous and the times at which extracted knowledge is uploaded to the server differ, making the problem even more complicated <cit.>. Therefore, both local and global models should be capable of fusing knowledge continually. FCL is proposed to address this issue. Adding CL to traditional FL can reduce the cost of data storage and model retraining when classes are added, and reduce the risk of data leakage due to model gradient updates.

§ SPATIAL-TEMPORAL CATASTROPHIC FORGETTING

§.§ Problem Definition

FCL suffers from catastrophic forgetting in terms of both time and space. On the one hand, the local model should preserve previous knowledge while learning from the newest, progressively collected data. On the other hand, the global model, obtained by aggregating local models, often performs worse than the local models on local datasets due to Non-IID data. We find that both types of catastrophic forgetting are actually caused by the unavailability of the entire dataset. In CL, the arrival of new data renders old data inaccessible, resulting in TemporalCF. In FL, the unavailability arises from the distributed storage of data <cit.>, leading to SpatialCF. The mathematical notations involved are shown in Tab. <ref>. For client c ∈ {1, …, C}, where C denotes the total number of clients in the FCL system, the local model θ_c is trained on its private task sequence

𝒯^c = {𝒯^c_1, 𝒯^c_2, …, 𝒯^c_{N_c}}.
The objective of FCL is to train a generalized global model that effectively avoids spatial-temporal forgetting of the previous tasks of all clients. Formally, we measure the forgetting of the global model at communication round s, θ_g^s, on a task t presented at round r by the cross-entropy loss

ℒ_CE(θ_g^r, θ_g^s) = -∑_{i=1}^{n_t} θ_g^r(x_i) log θ_g^s(x_i),

where x_i, i ∈ [1, n_t], denotes the samples of task t. The spatial-temporal forgetting over all seen tasks is then defined as the average loss over those tasks. The spatial-temporal forgetting at round s is

F(s, T) = -1/T ∑_{t=1}^{T} ∑_{i=1}^{n_t} θ_g^{r_t}(x_i) log θ_g^s(x_i),

where r_t is the round at which task t was presented and T denotes the total number of tasks seen by round s.

We then verify that spatial-temporal catastrophic forgetting does exist and has a great impact on the performance of both local and global models. These experiments also aim to identify the most significant factors behind the performance degradation of the models.

§.§ Performance Evaluation

§.§.§ Settings

We conduct experiments using the classical FL algorithm FedAvg in four scenarios in which the data distribution varies among clients. The aim is to explore the key factors leading to spatial-temporal catastrophic forgetting. The data distribution scenarios between clients are as follows:

* The clients share the same classes, and their data distributions are consistent. In this scenario, each client has the same set of data classes, and the number of samples per class is identical across all clients.

* The clients share the same classes, while their data distributions differ. This aligns with the previous scenario, except that the number of samples per class varies across clients.

* The clients have different sets of classes, with some overlap among them. Each client may have different classes, but some classes are common to all clients, referred to as overlapping classes.

* The clients have entirely different sets of classes. Each client has a unique set of classes, and there are no overlapping classes between clients.

In each scenario, we further divide the task sequence handled by each client into three different settings:

* IID: Each task includes all the classes, and the number of samples for each class in each task is equal.

* Non-IID: Each task includes all the classes, but the number of samples for each class in each task is not equal.

* Class-Incremental: The classes are distributed evenly over the tasks, with each class appearing in only one task. For example, if there are 30 classes in total and the client has 5 tasks, each task will contain data for only 6 classes.

Tab. <ref> illustrates our experimental settings. For our model, we set the first convolution layer of ResNet-18 <cit.> to 3×3 in the experiments.

§.§.§ Datasets

We conduct twelve experiments on CIFAR-100 <cit.> in an FL system with three clients and one server. Each client is trained on three tasks sequentially. Each task's data is split into a training set and a test set at a ratio of 7:3. Each client maintains a large test set consisting of the test data of all tasks seen so far; the evaluation of the current model and of the aggregated global model runs on this large test set. The designed data distribution differs for each scenario, as described below. In Scenarios (A) and (B), the initial step involves randomly selecting 30 classes from the pool of 100.
In Scenario (A), the data of each class is evenly distributed among the three clients, ensuring an equal quantity of data for each client. Conversely, in Scenario (B), the data of each class is randomly assigned to the clients using a Dirichlet distribution: all clients have data for every class, but the sample quantities per class are distributed randomly. In Scenario (C), each client first randomly samples 15 classes exclusive to itself; subsequently, 15 overlapping classes are selected from the remaining classes, and the data for these overlapping classes is evenly distributed among all clients. In Scenario (D), there are no overlapping classes among clients, so each client possesses 30 classes exclusively its own.

According to Sec. <ref>, there are three further settings for each scenario, detailed as follows. IID means that the sample quantity for each class is the same across all tasks, with no repeated samples. Non-IID implies that the sample quantity for each class varies across tasks, determined by a Dirichlet distribution. Class-Incremental signifies that each task contains data for only 10 classes, with no repetition of classes across tasks. The objective of our experiments is to identify the major factors causing the performance degradation of federated learning models.

§.§.§ Metrics

Since spatial-temporal catastrophic forgetting is a novel challenge that we introduce here, and no established measurements exist, we design two metrics to assess temporal knowledge retention and spatial knowledge retention.

Temporal knowledge retention. We use knowledge retention as a measure of forgetting. Temporal knowledge retention measures the extent to which local models retain knowledge of old tasks as they learn along the task sequence. Acc_i^(0,0) denotes the accuracy of client i's local model trained on the first task, evaluated on the test set of the first task; Acc_i^(r,0) denotes the accuracy of client i's local model trained up to the r-th task, evaluated on the test set of the first task. The ratio of these two values indicates how much knowledge of the first task the local model retains after completing training on the r-th task. Temporal knowledge retention is therefore expressed in Equ. <ref>:

KR_t = Acc_i^(r,0) / Acc_i^(0,0)

Spatial knowledge retention. Similarly, we can define spatial knowledge retention in Equ. <ref>. This metric measures how much local-specific knowledge is retained by the aggregated global model; a smaller value indicates that more local knowledge was overwritten during aggregation:

KR_s = Acc_g^(r,r_i) / Acc_i^(r,r)

where Acc_g^(r,r_i) is the accuracy of the global model on the r-th test set of client i, and the global model is obtained by aggregating the local models trained on the r-th task from all clients.
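To make the two metrics concrete, here is a minimal sketch in Python; the nested accuracy containers are hypothetical bookkeeping for the experiment, not part of any cited method.

def temporal_kr(acc_local: list, r: int) -> float:
    """KR_t = Acc_i^(r,0) / Acc_i^(0,0).
    acc_local[r][t] is assumed to hold client i's accuracy on the test set of
    task t after it has finished training on task r."""
    return acc_local[r][0] / acc_local[0][0]

def spatial_kr(acc_global_on_client: list, acc_local: list, r: int) -> float:
    """KR_s = Acc_g^(r, r_i) / Acc_i^(r, r).
    acc_global_on_client[r] is assumed to hold the aggregated global model's
    accuracy on client i's r-th test set at round r."""
    return acc_global_on_client[r] / acc_local[r][r]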
§.§.§ Experimental Results and Analysis

At the beginning of the experiment, each client has a task sequence of three different tasks. After completing the training for a task, each client uploads its model to the server for aggregation and then uses the received global model for continual learning on the next task. In Fig. <ref>, each row represents a type of data distribution across tasks, such as IID, while each column represents a scenario among clients, e.g., Scenario A. From the last row of Fig. <ref>, we can tell that when the task sequence is class-incremental, traditional FL methods have no ability to alleviate TemporalCF. Conversely, if the distributions among tasks are entirely identical, the local model consistently learns from similar data and continually enriches its knowledge; in this case, its performance on the initial test set may even improve significantly, as it accumulates and leverages similar patterns and features during training.

As for SpatialCF, we observe that Scenario A exhibits the highest spatial knowledge retention, especially when the distribution among tasks is class-incremental. Conversely, Scenario D has the lowest spatial knowledge retention, particularly when tasks follow an IID distribution. When clients share the same classes and each client has an equal number of samples, local models become more similar, leading to better performance after weighted averaging. The class-incremental setting reduces the complexity of each task, further minimizing the differences among local models. However, when each client has different classes, the heterogeneity among local models intensifies, crucial parameters are overwritten during aggregation, and performance on local tasks declines sharply.

Based on this analysis, we draw an important conclusion: divergence in distributions among clients leads to spatial forgetting, whereas differences in distributions among tasks result in temporal forgetting. The greater the variance in data among clients, such as differences in classes, the more crucial it becomes to design an effective knowledge fusion algorithm for amalgamating heterogeneous models rather than a straightforward aggregation approach. A good FL algorithm must effectively integrate the knowledge extracted from the various clients, especially retaining knowledge of seen classes, to prevent performance degradation caused by catastrophic forgetting.

§ FEDERATED CONTINUAL LEARNING

Following the above analysis, the objective of FCL is to address the spatial-temporal catastrophic forgetting that arises when FL is applied in the real world. The first goal is to ensure that local models do not forget the knowledge of previous tasks while learning new ones, i.e., to overcome temporal catastrophic forgetting. The second is to enable the global model to fuse all the heterogeneous knowledge across different clients without performance degradation on the local test sets, addressing spatial catastrophic forgetting.

In the traditional FL setting <cit.>, there are c clients C_1, …, C_c and one central server S, and client C_i, 1 ≤ i ≤ c, only has access to its own data D_i due to privacy concerns. There are essentially three steps in one communication round: (1) the server S distributes the initial model, or the global model from the last round, to the clients; (2) client C_i uses its private data D_i to train its local model M_i starting from the model received from the server; (3) the server collects the local models M_1, …, M_c and aggregates them to update the global model. The performance of the final global model should be very close to that of a centrally trained model.

We then extend conventional FL to federated continual learning. Given c local clients, each client in {C_1, …, C_c} trains its local model on a private task sequence T_i = {t_i^1, …, t_i^{n_i}}, where n_i represents the total number of tasks in the sequence. In other words, an FCL system needs to train c continual learning models, one per task sequence. The server collects these local CL models, fuses them into a global model, and distributes it back to the clients. Clearly, the central challenges of FCL are the temporal forgetting caused by learning on a data stream and the spatial forgetting that occurs when different local models are aggregated on the server.
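The round structure just described can be summarized in a short schematic. This is a sketch under stated assumptions: the client interface, train_continually, and the pluggable aggregate function are placeholders for whatever local CL method and fusion rule a concrete FCL system uses.

import copy

def fcl_training(global_model, clients, aggregate, num_rounds):
    """Schematic FCL loop: each round, every client continues its private
    task sequence starting from the current global model, and the server
    fuses the resulting local models into a new global model."""
    for r in range(num_rounds):
        local_models, weights = [], []
        for client in clients:
            # (1) distribute: the client starts from a copy of the global model
            local = copy.deepcopy(global_model)
            # (2) local continual learning on the client's r-th task
            local = client.train_continually(local, client.tasks[r])
            local_models.append(local)
            weights.append(client.num_samples)
        # (3) server-side knowledge fusion (e.g., FedAvg or any scheme below)
        global_model = aggregate(local_models, weights)
    return global_model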
§.§ Synchronous FCL

Based on whether all clients share the same task sequence, FCL can be preliminarily divided into two types: synchronous FCL and asynchronous FCL. Synchronous FCL refers to the setting in which all clients share a common task sequence, as illustrated in Fig. <ref>: all clients learn each task together before moving on to the next. Generally speaking, only a few approaches are designed specifically for this setting, as it requires the task sequences of different clients to be as similar as possible. Given the limitations mentioned above, we prefer to call this scenario CFL rather than FCL. The setting is idealized, since it places strict restrictions on the task sequences of different clients. We give the definition of synchronous FCL:

Definition 1. Consider a central server S and c clients, where all clients share the same task sequence 𝒯 = {𝒯_1, 𝒯_2, 𝒯_3, …, 𝒯_n} and n represents the number of tasks. The global model θ_G^i is obtained by aggregating the local models {θ_1^{i-1}, θ_2^{i-1}, …, θ_c^{i-1}} uploaded by the clients after training on task i-1.

This scenario was originally described by Bui et al. <cit.> in order to federatedly train Bayesian neural networks and continually learn Gaussian process models. Ma et al. <cit.> first gave it a clear definition as Continual Federated Learning (CFL) and proposed a framework called CFeD, which employs knowledge distillation based on surrogate datasets to mitigate catastrophic forgetting on both the server side and the client side. Most interestingly, part of the clients are used for learning the new task while the others review old tasks in order to alleviate catastrophic forgetting. Guo et al. <cit.> present a unified framework, termed Continual Federated Learning, together with a novel client and time drift modeling approach, to capture complex FL scenarios involving time-evolving heterogeneous data.

Due to its strict restriction on task sequences, synchronous FCL cannot be subdivided further, since the incremental type of the whole system is determined by any single client: if one client runs a class-incremental task sequence, the whole FL system is class-incremental. We believe, however, that the asynchronous setting deserves more attention due to its practicality.

§.§ Asynchronous FCL

§.§.§ Federated Task-Incremental Learning

Asynchronous FCL refers to the setting in which clients train their local models on distinct task sequences and then send information to the server. These sequences consist of tasks drawn from a global task set in different orders. The knowledge from the clients must then be fused into one global model, which thereby gains the ability to perform well on all tasks seen by any client. Notice that no matter how much the order or number of tasks differs across clients, from the server's perspective each client processes one unique task at a time. The definition of asynchronous FCL is as follows:

Definition 2. Given c clients {C_1, C_2, …, C_c} and one central server S.
The system owns a global task set T = {T_1, T_2, …, T_n}, where n represents the total number of tasks. Each client follows its individual order O_i, 1 ≤ i ≤ c, to process T, which means that at any given time, clients may handle different tasks. At time step r, client C_i trains its local model θ_i^r on task T_{O_i^r} based on the global model θ_G^{r-1}, where T_{O_i^r} represents the r-th task in order O_i. Then θ_i^r, which has learned the knowledge of task T_{O_i^r}, is uploaded to the server. The server S fuses the knowledge from client C_i with the other clients' knowledge and with the previous knowledge from time 0 to time r-1. After fusion, the global model θ_G^r is obtained, containing the knowledge of all clients from time 0 to time r. S then distributes θ_G^r back to the clients, who train it on their next tasks.

Here, we present an illustration of an ideal asynchronous FCL in Fig. <ref>. Please note that we call this figure an "ideal" asynchronous FCL because most existing FCL methods are primarily based on "aggregation" rather than "fusion". They are therefore more like a combination of asynchronous and synchronous approaches: handling different task sequences and then aggregating local models to obtain the global model, rather than fusing knowledge into global models.

Apparently, asynchronous FCL is much more complex than the synchronous variant. When all clients share the same task sequence, little extra care is needed when aggregating models. In asynchronous FCL, however, the task sequences processed by clients differ, which poses greater challenges for overcoming spatial catastrophic forgetting: on the one hand, knowledge needs to be layered, and on the other hand, different layers of knowledge need to be fused separately. From another perspective, asynchronous FCL tends to be more practical, because in reality it is impossible to make the task sequences of all clients identical.

The objective of asynchronous FCL is explicit: the global model must retain the knowledge of the clients as much as possible when fusing local models. In applications with higher real-time requirements, new local knowledge uploaded by clients also needs to be fused into the global model continually. Besides, the new local model initialized from the global model in the next round benefits from retaining some local parameters, in order to prevent performance drops on local tasks. To distinguish it from a specific kind of asynchronous FCL framework mentioned later, we will refer to the setting described in Definition 2 as "Federated Task-Incremental Learning".

§.§.§ Federated Class-Incremental Learning

There is a special kind of asynchronous FCL called Federated Class-Incremental Learning, which poses an even greater challenge to heterogeneous knowledge fusion. Federated class-incremental learning involves a setting in which every client is expected to learn a growing set of classes and to communicate knowledge of those classes efficiently with other clients, such that, after knowledge merging, the global model can accurately discriminate between classes in the superset of classes observed by the set of clients. That is, the local model θ_i can identify class set A_i on client C_i, and θ_j can identify class set A_j; after fusion on the server, the global model can identify A_i ∪ A_j. Here, we give a formal definition of federated class-incremental learning.

Definition 3. Consider c clients, denoted as {C_1, C_2, …, C_c}, and one central server, denoted as S.
Each client C_i, 1 ≤ i ≤ c, has its unique task sequence T_i, which can differ significantly from one client to another. A set of public classes, denoted A_public, is accessible to all clients, while each client C_i has its private class set A_i. The primary objective of the local model θ_i is to incrementally learn to discriminate classes from the set A_i ∪ A_public. The task sequence of client C_i is denoted T_i = {T_i^1, T_i^2, …, T_i^{n_i}}, where n_i represents the total number of tasks on client C_i. The k-th task of T_i contains |A_i^k| classes, where A_i = A_i^1 ∪ A_i^2 ∪ … ∪ A_i^{n_i}. At time step r, the global model θ_g^{r-1} can distinguish |A_g^{r-1}| classes. The server S distributes it back to the clients. Client C_i uses θ_g^{r-1} as the initial model to train on its r-th task T_i^r; the local model θ_i^r should perform well in classifying classes from the set A_g^{r-1} ∪ A_i^r. Finally, the server collects the local models from the clients participating in FCL and obtains a new global model θ_g^r, which can identify classes from the set A_g^r = A_g^{r-1} ∪ A_1^r ∪ A_2^r ∪ … ∪ A_c^r.

To the best of our knowledge, this area is novel and only a few papers have appeared, even though it covers the common real-world problem of incrementally learning new classes of objects. Based on our experimental analysis, we believe research in this area has enormous potential, so we will describe the relevant papers in detail later to inspire further research. To better categorize the mentioned FCL frameworks, we provide Tab. <ref> to help readers classify them more effectively.

§ KNOWLEDGE FUSION IN FCL

This section discusses the diverse forms of knowledge fusion in FCL. As discussed before, simply uploading the local model gradients to the server not only poses privacy risks but also hinders knowledge fusion. Research presented at NeurIPS 2019 <cit.> demonstrated that in just a few iterations, a malicious attacker can fully extract the training data from gradients. Yang et al. <cit.> recover images using approximate gradients in a gradient leakage attack against unbiased-sampling-based secure aggregation. Specifically, how to fuse the local knowledge uploaded by the clients on the server side (we call it collect-and-fuse on the server), and how to fuse the knowledge distributed by the server with local knowledge on the client side (we call it receive-and-fuse on clients), are the key issues in solving FCL problems. Recently, many researchers have noticed this and started to actively explore knowledge fusion forms other than the original gradients.

Knowledge can be expressed in many forms. We find that in the FCL setting, local knowledge is hidden in three places: data, models and outputs. We divide existing knowledge fusion methods into seven classes, as shown in Fig. <ref>. As FCL is an emerging field, some of the papers mentioned below belong to traditional FL, but their techniques for mitigating spatial catastrophic forgetting can easily be extended to FCL. According to the mechanism involved in knowledge fusion, we summarize existing methods in seven parts: Rehearsal (Sec. <ref>), Clustering (Sec. <ref>), All Parameters (Sec. <ref>), Parameter/Layer Isolation (Sec. <ref>), Dynamic Architecture (Sec. <ref>), Prototype (Sec. <ref>), and Knowledge Distillation (Sec. <ref>).

§.§ Rehearsal

Rehearsal-based approaches <cit.> alleviate catastrophic forgetting by storing exemplars of previous data and replaying them when training on new data.
Rehearsal-based approaches have been shown to achieve the best results in most CL settings <cit.>, since keeping exemplars ensures that the memory of old knowledge does not degrade over time.

FedPMR (Federated Probability Memory Recall) <cit.> is a rehearsal-based FCL approach proposed by Wang et al., which consolidates the memory of old probability experience and corrects the probability bias arising on previous tasks with a Probability Distribution Alignment (PDA) module. More specifically, the PDA module maintains a replay buffer holding a small number of exemplars from past tasks together with their original probability outputs. Casado et al. <cit.> extend their research in <cit.>: they divide the replay buffer on each client into a short-term memory part, which is processed to check whether a global drift occurs, and a long-term memory part, which consists of past concepts.

In previous FCL methods, the replay buffer is private to each client, and exemplars are not shared so as not to break the privacy protocol. However, due to the data heterogeneity among clients, the direct application of replay fails to achieve excellent performance. Zizzo et al. <cit.> therefore create a global replay buffer shared among all clients to mitigate forgetting more efficiently; exemplars are processed with Laplace noise to avoid privacy leakage. They also consider a challenging FCL setting in which clients participate in the FL system dynamically, and the fraction of participation varies across training rounds.

Although a differential privacy mechanism can be employed to enhance security, rehearsal methods with shared buffers remain prone to privacy breaches. Pseudo-rehearsal was thus introduced to FCL: the replayed data are generated to simulate the feature distribution of previous tasks. The advantages of this method are a reduced memory requirement and enhanced privacy protection for old tasks. FedKNOW <cit.> acts in each client and extracts compact and transferable knowledge (instead of data): the critical subset of model weights. When learning a new task, FedKNOW integrates it with the knowledge of its signature tasks (the new task's most dissimilar tasks identified from local past tasks, to prevent catastrophic forgetting) and with the updated global model representing other clients' current tasks, to prevent negative knowledge transfer. By completing knowledge integration in polynomial time, FedKNOW addresses the limitations of existing techniques, providing both high model accuracy and low communication overhead at the edge.

Unlike naive rehearsal or pseudo-rehearsal approaches, GradMA <cit.> does not directly optimize on stored or generated exemplars but treats them as inequality constraints and performs quadratic programming (QP) to iteratively correct the current gradient directions of the global and local models in the FCL setting. On the client side, GradMA uses various information, including the gradients of the local model in the previous round, the gradients of the global model, and the parameter differences between the local and global models in the current round, to adaptively correct the update direction of the local models. On the server side, GradMA uses the memorized accumulated gradients from all participants as constraints and performs QP to enhance the update direction of the global model.
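The exemplar stores underlying these rehearsal methods are not published in one canonical form; the sketch below shows a generic client-side primitive they build on, a reservoir-sampled replay buffer whose contents can be mixed into each local batch. The class is illustrative, not any specific paper's implementation.

import random

class ReplayBuffer:
    """Fixed-capacity exemplar store using reservoir sampling, so the buffer
    stays an approximately uniform sample of everything seen so far."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)  # uniform over all seen examples
            if j < self.capacity:
                self.data[j] = example       # replace a stored exemplar

    def sample(self, k: int):
        """Draw exemplars of old tasks to interleave with the new task's batch."""
        return random.sample(self.data, min(k, len(self.data)))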
§.§ Clustering

Clustering is an unsupervised learning technique that groups similar data points based on similarities or patterns within the dataset. Some FCL methods cluster the tasks of each client, or directly cluster clients according to the similarity of their task flows, to enhance the effectiveness of model aggregation. Clustering in FCL is typically used for personalization, which aims to assign greater importance to the specific information of individual clients in order to balance the generalized knowledge learned from the entire set of clients against their unique characteristics <cit.>. There are two levels of personalization in FCL: client-level and group-level. In client-level personalization, clients receive the same global model from the server and adapt it locally to better fit their own tasks, as mentioned before. However, empirical evidence suggests that in practical scenarios a single global model may not accommodate the specific requirements of all clients <cit.>. Group-level personalization addresses this issue by clustering the clients according to the similarity of their tasks and jointly training multiple global models to better accommodate clients with Non-IID datasets <cit.>. We must take into account that the same input may produce different or even opposite outputs on different clients <cit.>; it is therefore necessary to consider adopting multiple global models. Group-level personalization likely has not yet reached its full potential, as it emerged only a few years ago.

<cit.> finds that there are groups of clients with similar data distributions that can be clustered. It alleviates catastrophic forgetting by adding a clustering step that separates clients into groups according to the similarity of their tasks. The resulting global model after clustering is more effective, as it excludes knowledge uploaded by clients with low similarity. Gradient Memory-based Federated Learning (GradMFL) <cit.> clusters the entirety of the Non-IID data among clients into several IID data groups based on similarity. Additionally, it introduces a collaborative training strategy for performing a series of tasks in hierarchical clusters, incorporating gradient memory to mitigate catastrophic forgetting during the transfer of hierarchical knowledge.

It is undeniable that the personalization strategy provides a new idea for promoting knowledge integration. The knowledge learned by each client can be divided into local knowledge and global knowledge; in the aggregation stage, only the global knowledge needs to be aggregated, which reduces communication costs and preserves privacy. If the similarity between the tasks of the clients is too low, or their tasks are even completely opposed, we can group the clients and maintain more than one global model. Although this increases computational costs, it better aggregates similar knowledge and protects the system from a series of attacks <cit.>.

§.§ All Gradients/Parameters

FedAvg <cit.> introduced the concept of FL and proposes a simple weighted averaging approach to aggregate local model updates based on the amount of data. Most early works on FL <cit.> simply made minor improvements to FedAvg, following its parameter-transfer and aggregation-based paradigm. Even if the original data is not directly shared, the model parameters have the potential to reveal sensitive information about the training data <cit.>.
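For reference, FedAvg's aggregation rule amounts to the following few lines; the sketch assumes models are exposed as dicts mapping layer names to numpy arrays.

import numpy as np

def fedavg_aggregate(client_params: list, client_sizes: list) -> dict:
    """Weighted average of local parameters, weighted by local dataset size.

    client_params: one dict of layer-name -> np.ndarray per client
    client_sizes:  number of training samples held by each client
    """
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * params[name]
                  for params, n in zip(client_params, client_sizes))
        for name in client_params[0]
    }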
Furthermore, this form fails to alleviate catastrophic forgetting of previously learned knowledge during the fusion process and does not facilitate knowledge integration between local and global models, as it relies solely on weighted averaging of parameters. When confronted with class-incremental challenges, as shown in Sec. <ref>, performance decreases significantly.

§.§.§ Regularization

Regularization, a continual learning approach, aims to mitigate forgetting by adding a regularization term to the loss function to constrain the direction of parameter optimization, making past knowledge less likely to be forgotten <cit.>. Due to the intrinsic similarity between spatial and temporal forgetting in FCL, either can be alleviated with regularization-based approaches. Although this technique still aggregates all parameters, the added loss terms bias local optimization toward retaining past knowledge. In FCL, preserving the local knowledge of each client during global model aggregation requires including corresponding regularization terms for the global model; similarly, if the goal is to retain previously learned knowledge, adding regularization terms suffices.

FedCurv <cit.> first adopted this approach in FCL. It follows the method of EWC <cit.>, utilizing the diagonal of the Fisher information matrix to evaluate the importance of network weights for past tasks, and adds a penalty term to the client-side loss function to ensure that local models converge to a shared optimal solution. However, FedCurv adopts regularization only to handle statistical heterogeneity, leaving temporal catastrophic forgetting unsolved. <cit.> applies synaptic intelligence (SI) <cit.>, originally used to preserve important model weights while training one center after another, to FCL settings in order to automatically identify brain metastases (BM) as an exemplary case of multi-center collaboration. <cit.> proposes a novel federated continual learning method called FedSI, also integrating SI with FL. The authors design an optimization loss that leverages knowledge from multiple sources and guides the local training of all local models over a common parameter space by combining two terms: a cross-entropy loss computed on the local training dataset to learn the client's own data, and the proposed CL loss, which alleviates the weight divergence of local models by controlling the difference between the local model and the other local models. This knowledge fusion strategy pushes the combined global model closer to the global optimal solution.

In <cit.>, Yao et al. also adopt EWC and propose a federated learning with local continual training (FedCL) strategy to alleviate weight divergence and continually integrate the knowledge of different local clients into the global model, ensuring better generalization for the global model. In contrast to FedCurv, which estimates the importance weights of model parameters on the clients and exchanges them, incurring at least twice the extra communication cost, FedCL estimates the importance weights on the server using a proxy dataset and then distributes them to the clients. The clients subsequently use these weights to constrain local training, which enables the federated model to acquire knowledge from the clients while preserving its original performance.
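A minimal sketch of such an EWC-style regularized local objective is given below. The diagonal Fisher estimates and the coefficient lam are assumptions of this illustration; the exact penalty and where it is computed differ across FedCurv, FedSI, and FedCL.

import torch

def regularized_local_loss(model, anchor_params, fisher_diag, ce_loss, lam=0.4):
    """Cross-entropy plus a quadratic penalty that anchors parameters deemed
    important for earlier tasks (or for the global model) to their old values.

    anchor_params: detached parameter snapshot to stay close to
    fisher_diag:   per-parameter importance estimates (e.g., diagonal Fisher)
    """
    penalty = torch.zeros((), device=ce_loss.device)
    for name, p in model.named_parameters():
        penalty = penalty + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return ce_loss + lam * penalty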
Regularization methods are more efficient when dealing with similar tasks, since their main purpose is to limit the directions of parameter updates and reach a shared solution. These approaches cannot learn to discriminate classes from different tasks in the class-incremental scenario <cit.>. Therefore, to enhance knowledge fusion in both the spatial and temporal dimensions, FCL requires more sophisticated strategies.

§.§ Parameter Isolation

Uploading all parameters or gradients may expose local models to reconstruction attacks or model inversion attacks when the server is honest-but-curious <cit.>. As a result, in some methods, local parameters are partitioned according to special rules, and only a portion of them are uploaded. Intuitively, this technique can be combined with parameter isolation in CL, which dedicates different parts of the parameters to each task, to jointly avoid privacy leakage and overcome forgetting. The parameter isolation approach in FCL can be split into two steps: local isolation and global knowledge fusion. Local isolation can be achieved through attention mechanisms or parameter decomposition. Attention works by focusing on the specific parts or features of the input data deemed important for a particular task. Parameter decomposition is a CL approach proposed in <cit.>, which treats network weights as a sum of task-shared and sparse task-adaptive parameters and decomposes them to prevent forgetting and order sensitivity. Global knowledge fusion refers to the uploading and aggregation of the decomposed and selected information. Here, we describe in detail how the attention mechanism and parameter decomposition work in FCL.

§.§.§ Attention

The attention mechanism, initially popularized in the field of computer vision, directs the focus of perception. <cit.> incorporates it into recurrent neural network models for image classification. Subsequently, attention played an important role in natural language processing (NLP) <cit.>. <cit.> expands the application of attention-based RNNs by proposing two novel mechanisms, global attention and local attention; attention can hence be easily adapted to federated settings. The work of <cit.> pioneers the use of the attention mechanism in the aggregation of multiple distributed models. To achieve knowledge fusion during server optimization, the authors introduce a layer-wise soft attention mechanism to capture the "attention" among local models' parameters. This enables the automatic selection of weights for the different client models so as to minimize the expected distance between the server model and the client models. <cit.> proposes FedAtt to tackle concept drift in distributed 5G networks and achieves excellent results. FedAtt takes the server as query and the clients as keys, computes layer-wise attention scores, and minimizes the weighted distance between the global and local models, enabling the FL system to adaptively evaluate and balance the contribution of each local model.
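The following sketch captures the spirit of this layer-wise attentive aggregation. It is a simplified reading of FedAtt, assuming parameters are dicts of numpy arrays; the server stepsize eps is an assumed hyperparameter.

import numpy as np

def attentive_aggregate(server_params: dict, client_params: list, eps=1.0) -> dict:
    """Layer-wise attentive aggregation: clients whose layer lies closer to the
    server's current layer receive a larger softmax weight."""
    new_params = {}
    for name, w_s in server_params.items():
        dists = np.array([np.linalg.norm(w_s - c[name]) for c in client_params])
        scores = np.exp(dists.min() - dists)        # stable softmax over -distance
        att = scores / scores.sum()
        update = sum(a * (c[name] - w_s) for a, c in zip(att, client_params))
        new_params[name] = w_s + eps * update       # step toward attended clients
    return new_params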
Notice that unlike the original attention mechanism, which manipulates the input data flow, attentive aggregation in FCL usually operates on model weights to control the intermediate results throughout the training procedure.

Moreover, the attention mechanism is also used in class-incremental FCL. The approach proposed in <cit.> begins by randomly selecting a balanced set of samples from each client to establish a balanced pre-training dataset; federated averaging is then applied to train the model, yielding an initial global model on the server. Subsequently, the traditional FL framework is augmented with the iCaRL strategy <cit.>, and, notably, a dual attention mechanism is integrated into this framework. The authors use a Channel Attention Neural Network, which integrates the SE module <cit.> into a Graph Convolutional Network (GCN), as the FL local model. This model identifies the significance of features across each client's samples during training and effectively reduces noise interference. The authors also design a federated aggregation algorithm based on the feature attention mechanism to assign appropriate attention weights to each local model in the global model. These weights correspond to the model parameters of each layer of the neural network and act as aggregation coefficients, enhancing the global model's ability to capture essential information and features.

<cit.> further uses the attention mechanism to alleviate both local and global forgetting. Their preliminary experiment illustrates the existence and impact of local forgetting: on a single client, without intervention, previously learned knowledge deteriorates quickly when the new tasks do not overlap with the old ones. This problem is not limited to the client side; it also affects the server side, since the heterogeneity of client-side data and the aggregation step can cause some knowledge to be forgotten, and the dynamic nature of tasks on the client side exacerbates this forgetting. To address this, they propose Self-Attention (for only the current task) and Total Attention (for all observed classes).

Hard attention, also called "masking", is likewise applied to FCL. <cit.> first used hard attention in the Task-IL scenario of CL. The idea of partial initialization was first applied to FL by <cit.>. The authors identify that a significant amount of local knowledge is lost when the global model is used to initialize local models in FL; to address this, they implement a partial initialization approach that preserves some local parameters during initialization. Their method, FedMask, is built around a masking technique that allows participants to share only a small part of their model (the mask) rather than the complete model parameters. In each federated learning iteration, participants use their own data to locally update the shared mask and then send the updated mask back to the server. The server aggregates the masks of all participants and applies them to the initialization of the global model, fusing global knowledge while retaining local knowledge.
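As a toy illustration of the masking idea (not FedMask's actual protocol, which involves additional sparsification and personalization steps), a client could share only a binary top-k mask over parameter importance, and the server could average the received masks into a soft consensus.

import numpy as np

def local_topk_mask(importance: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep the top fraction of positions as a binary mask, so the client
    shares which parameters matter rather than the parameter values."""
    flat = importance.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]          # k-th largest value
    return (importance >= threshold).astype(np.float32)

def aggregate_masks(masks: list) -> np.ndarray:
    """Server-side fusion: each entry becomes the fraction of clients
    that marked the corresponding parameter as important."""
    return sum(masks) / len(masks)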
§.§.§ Parameter Decomposition

The parameter decomposition approach is based on Yosinski et al.'s research <cit.>, which concludes that the feature extraction component (the shallow layers) of a neural network is comparatively generic, as it does not acquire task-specific knowledge. Intuitively, this can be applied in FL by partitioning parameters into task-specific and global parts, separating the private knowledge of local tasks from the common knowledge shared among all clients.

In FedPER <cit.>, clients train a deep neural network (DNN) whose last few layers are not shared: each client trains those layers separately while sharing the earlier layers as the global model. The last layers act as the personalization model, enabling different participants to obtain distinct results for comparable inputs. This approach not only facilitates the integration of global knowledge but also preserves local knowledge to some extent. <cit.> introduces VIRTUAL (VarIational fedeRaTed mUltitAsk Learning) to address multi-task learning in FL settings. Every client has a task-specific model that benefits from the server model in a transfer learning fashion with lateral connections. Part of the parameters are shared between all clients, and another part is private and tuned separately; the server maintains a posterior distribution representing the plausibility of the shared parameters.

Federated Weighted Inter-client Transfer (FedWeIT), a novel framework proposed by Yoon et al. <cit.>, addresses spatial-temporal forgetting in a different way. It decomposes network weights into global federated parameters and sparse task-specific parameters. It derives from Additive Parameter Decomposition (APD) <cit.>, which alleviates forgetting by separating weights into task-shared and sparse task-adaptive parameters and keeping the task-adaptive parameters of previous tasks unaffected when training on new tasks. By taking a weighted combination of other clients' task-specific parameters, each client acquires selective knowledge from them. The sparse parameter selection technique not only reduces interference between clients but also facilitates efficient communication. FedWeIT effectively mitigates interference between incompatible tasks while facilitating positive knowledge fusion among clients during learning.

The idea of parameter decomposition has also been adapted to mobile edge computing (MEC) systems <cit.>. Cross-edge FCL <cit.> separates the knowledge of the local model into two kinds of parameters: base parameters that learn the general knowledge shared across tasks, and task-specific parameters that learn the personalized knowledge of the current local task. The cross-edge strategies handle the relationship between the local old task of a dynamic device and the global task of the new FL system. <cit.> is the first to apply FCL to NLP. The authors propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT), which efficiently selects the relevant task-adaptive parameters from the historical tasks of other clients by assessing domain overlap at the global server using encoded data representations, while preserving privacy. The FedSeIT model decomposes each client's model parameters into three distinct sets: (1) dense local base parameters, which cover and accumulate task-agnostic knowledge over the client's private task sequence; (2) sparse task-specific adaptive parameters, which capture task-specific knowledge for each task across different clients; and (3) sparse mask parameters, which allow the client model to selectively leverage global knowledge.

In summary, this family of methods divides parameters into shared and client-specific parts; knowledge fusion during the aggregation stage involves only the shared parameters, while the client-specific part remains unchanged.
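In simplified form (a single weight tensor, numpy), the FedWeIT-style composition of a client's effective weights reads roughly as follows. In the actual method, the mask and the attention coefficients alphas are learned; here they are plain inputs to the sketch.

import numpy as np

def compose_client_weights(base, mask, own_adaptive, peer_adaptives, alphas):
    """theta_i = base * mask + A_i + sum_j alpha_ij * A_j: a masked shared base,
    the client's own sparse task-adaptive part, and a weighted combination of
    other clients' task-adaptive parameters for selective transfer."""
    theta = base * mask + own_adaptive
    for alpha, a_j in zip(alphas, peer_adaptives):
        theta = theta + alpha * a_j
    return theta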
§.§ Dynamic Architecture

While the parameter decomposition approach requires a model of fixed capacity, the dynamic architecture approach allows the network size or connection paradigm to change as needed. <cit.> designs a structured FCL framework based on NetTailor <cit.>, which involves a common backbone network trained on a large dataset such as ImageNet, with task-specific blocks interspersed between the layers. It allows direct links to be created between the backbone network and specific layers, bypassing others. Mori et al. <cit.> find that most horizontal federated learning (HFL) works, such as FedProx <cit.>, use only the common feature space and leave client-specific features unutilized. Their framework, Continual Horizontal Federated Learning (CHFL), splits the network into two columns corresponding to common features and unique features, respectively. The first column is trained jointly using common features via vanilla HFL, while the second column is trained locally using unique features and lateral connections that leverage the knowledge of the first column, without disrupting the federated training process.

In traditional continual learning, the parameter isolation approach can be categorized into fixed-network-structure and dynamic-network methods. In the fixed-network method, only the relevant parameters are activated for each task, without modifying the network structure <cit.>. In the dynamic-network method, the network structure is modified to add new parameters for new tasks while keeping the old parameters unchanged <cit.>. The situation is different in FCL: the dynamic architecture method mainly focuses on changing the network connections rather than continually increasing the number of parameters, while the parameter separation method mainly focuses on separating client parameters from global parameters. Furthermore, the parameter separation approach is widely used in FCL, with many related studies on its application. For these reasons, we classify the parameter separation approach and the dynamic architecture approach as two separate categories, rather than grouping them under the broader category of parameter isolation.

§.§ Prototype

The prototypical network is a widely used approach in few-shot learning <cit.>, which learns a representation space in which samples from the same class are clustered together. Given a few examples of a new class, the prototypical network can quickly learn to classify new samples into the corresponding class based on their similarity to the prototypes of each class. <cit.> attempt to represent the knowledge of clients in FCL using this approach. Specifically, they compress concepts into relatively small vectors in a common embedding space, with each class having its own prototype. During the aggregation stage, clients upload their prototypes to the server, which merges prototypes of the same class and extends the global category set. The merged prototypes are then distributed to the clients, completing the knowledge exchange between local and global models. Moreover, for privacy preservation, prototypes can be perturbed before transmission. This form of exchange requires only a small amount of communication and reduces memory and storage costs, leading to a more efficient federated system.
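A minimal sketch of this server-side prototype fusion, followed by the resulting nearest-prototype classifier; the storage layout (per-class vector plus sample count) is an illustrative assumption.

import numpy as np

def merge_prototypes(global_protos: dict, client_protos: dict, client_counts: dict):
    """Average embeddings class-by-class, weighted by sample counts, and extend
    the global set with classes the server has not seen before."""
    for cls, proto in client_protos.items():
        if cls in global_protos:
            vec, n = global_protos[cls]
            m = client_counts[cls]
            global_protos[cls] = ((vec * n + proto * m) / (n + m), n + m)
        else:
            global_protos[cls] = (proto, client_counts[cls])
    return global_protos

def classify(embedding: np.ndarray, prototypes: dict):
    """Predict the class whose prototype is nearest in the embedding space."""
    return min(prototypes, key=lambda c: np.linalg.norm(embedding - prototypes[c][0]))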
Note that the three works introduced below all target class-incremental FCL. The first attempt to systematically learn a global class-incremental model in FL settings is <cit.>. Dong et al. propose a new approach to tackle both local and global catastrophic forgetting, known as the Global-Local Forgetting Compensation (GLFC) model. In this model, they propose a class-aware gradient compensation loss to mitigate the local forgetting caused by class imbalance at the local level, balancing the forgetting of different old classes. They also introduce a class-semantic relation distillation loss to maintain consistent inter-class relations across different incremental tasks. To address global catastrophic forgetting, they design a proxy server that selects the best old global model for class-semantic relation distillation at the local level. Finally, to ensure privacy preservation, the proxy server adopts a prototype gradient-based communication mechanism to gather perturbed prototype samples of new classes from local clients. These samples are then utilized to evaluate the performance of the global model and select the most optimal one. However, Li et al. <cit.> argue that the global-local compensation in <cit.> is idealistically assumed and ignores some key issues. The first neglected issue is task overlap: true incremental tasks and pseudo-incremental tasks should be distinguished to avoid unnecessary overhead. There are three scenarios for client-side incremental tasks: fully covered, semi-covered, and not covered. If the present classes of a specific client are fully covered by its previous classes, the client can be trained directly. The authors solve this problem with a double-ended task alignment table. Similar to a routing table, it exists on both the server and the client side. The table on the client side records all task classes covered up to the current task and is uploaded to the server along with the local model after local training. The table on the server side then participates in the knowledge fusion of local models. The second neglected issue is the aggregation of heterogeneous local models. Due to the intrinsic nature of continual learning, different clients have dimensionally identical feature extractors but structurally heterogeneous classifiers (the output layers). The authors put forward Pre-Alignment (PreA) and Post-Alignment (PostA) strategies to solve this problem. In the PreA method, the output layers of the clients are pre-defined by the server before each round of federated training; in the PostA method, the server adjusts and aggregates the output layers according to the alignment tables submitted by the clients. Another approach to aggregating heterogeneous local models is to utilize partial masks for the specific knowledge of each client and a total fusion mask for the common knowledge among clients <cit.>. Dong et al. later updated their work: the different forgetting speeds of old classes resulting from data heterogeneity among clients can be significantly alleviated by LGA (Local-Global Anti-forgetting) <cit.>. Specifically, LGA surmounts both kinds of forgetting through a category-balanced gradient-adaptive compensation loss, and ensures the consistency of intrinsic category relations across different incremental tasks with a category gradient-induced semantic distillation loss.
Besides, perturbed prototype images of new classes are transmitted from local clients to the proxy server, reconstructed by the proxy server, and processed via self-supervised prototype augmentation to pick the best old global model for semantic distillation. Global forgetting is thus overcome by improving, from a global perspective, the distillation gain of the category gradient-induced semantic distillation loss at the local side through the provision of the best global model. Hendryx et al. <cit.> refer to class-incremental federated learning as federated reconnaissance. In traditional federated learning, the set of classes learned by each client is static. Federated reconnaissance, instead, requires that each client individually learn a growing set of classes and effectively communicate knowledge of previously observed and incoming classes to other clients. <cit.> proposes prototypical networks to address federated reconnaissance, compressing concepts into relatively small vectors known as prototypes and thereby enabling efficient communication. More importantly, this enables fast knowledge transfer because it does not rely on gradients. Instead, a base model is first trained on a set of base classes to provide the foundation for basic classification. The model is then deployed to clients, which are trained on instances of new or previously seen classes using local supervision. After each client has learned new classes, it is evaluated on the combined set of previous and new classes. The clients then share their knowledge of the new classes with a central server, which merges the knowledge and distributes it back to the clients. At this point, the server and clients have prototypes for each class, which can be used for classification. The most interesting point of this paper is that it can store and load knowledge extracted from other tasks, like the Knowledge Base used in CL <cit.>.

§.§ Knowledge Distillation

Knowledge distillation is a popular technique for fusing knowledge, the core idea of which is to distill the knowledge contained in an already trained teacher model into a student model <cit.>. In essence, much like a human teacher, the teacher model teaches the student to construct the same mapping by providing the output Y corresponding to sample X. This technique is widely used in FCL <cit.>. <cit.> designs a class-semantic relation distillation loss to maintain consistent inter-class relations across tasks during the distillation process. Inspired by Learning without Forgetting (LwF) <cit.>, knowledge distillation has been implemented in class-incremental FCL <cit.>. Treating the past model of a client as the first teacher model and the current client model as the student, the final loss for each client consists of a classification loss and a distillation loss, the latter computed between the current and previous models of the same client. The global model is then treated as a second teacher, making use of the server's ability to maintain a comprehensive knowledge base of all clients. The first teacher (the past model of a client) improves the student's performance on specific tasks, helping it maintain proficiency on previously learned tasks. The second teacher (the server) enhances the general features of a client model by transferring knowledge from all other clients, mitigating the risk of overfitting on new tasks. A minimal sketch of this two-teacher loss is given below.
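The following is a small, hypothetical PyTorch sketch of such a two-teacher objective: a cross-entropy term on the labels plus temperature-scaled distillation terms against the client's past model and the global model. The weighting scheme, temperature, and names are illustrative choices, not the exact formulation of the cited works.

import torch
import torch.nn.functional as F

def two_teacher_loss(student_logits, labels, past_logits, global_logits,
                     T=2.0, alpha=0.5, beta=0.5):
    # past_logits:   outputs of the client's previous model (temporal anti-forgetting)
    # global_logits: outputs of the aggregated global model (spatial knowledge transfer)
    ce = F.cross_entropy(student_logits, labels)
    log_p = F.log_softmax(student_logits / T, dim=1)
    kd_past = F.kl_div(log_p, F.softmax(past_logits / T, dim=1),
                       reduction="batchmean") * T * T
    kd_global = F.kl_div(log_p, F.softmax(global_logits / T, dim=1),
                         reduction="batchmean") * T * T
    return ce + alpha * kd_past + beta * kd_global

# Toy usage with random logits for a batch of 4 samples and 10 classes.
s = torch.randn(4, 10)
loss = two_teacher_loss(s, torch.tensor([1, 3, 5, 7]),
                        torch.randn(4, 10), torch.randn(4, 10))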
Moreover, FedKL <cit.> decouples the training objective into a classification term and a knowledge-maintaining term. In the classification term, local training is supervised by the original labels of the data with a softmax cross-entropy loss. In the knowledge-maintaining term, the locally unavailable classes are supervised by distillation from the global model with a logit regression loss. <cit.> proposes FCCL (Federated Cross-Correlation and Continual Learning), which utilizes knowledge distillation in local updating, providing inter- and intra-domain information without leaking privacy. By utilizing unlabeled public data and self-supervised learning techniques, heterogeneous models attain a generalizable representation while ensuring efficient communication.

§.§ Summary

It is worth noting that some of the papers mentioned above use more than a single technique to realize knowledge fusion. Tab. <ref> provides a detailed description of the methods employed in each model. In summary, to overcome the problem of spatial-temporal catastrophic forgetting in FCL, knowledge fusion is necessary. Knowledge can be represented in various forms: data, gradients, parameters, models, and so on. How to fuse knowledge, and especially how to fuse relevant knowledge, is the key to addressing these challenges. Regularization seeks a common optimal solution by constraining the direction of parameter updates, but if two tasks differ greatly, or are even opposed, performance will degrade. By utilizing specific rules to divide the parameters of a model, the parameter isolation method fuses only relevant knowledge, ultimately enhancing the generalization ability of the global model. This idea is further developed by clustering, which groups clients based on similarity to generate more than one global model and prevent the integration of irrelevant knowledge. The dynamic network structure method allows for dynamic adjustment of the network structure to accommodate more knowledge, but this places higher demands on the design of the network structure. The knowledge distillation method conveys knowledge through the inputs and corresponding outputs of models: the student model adjusts its output according to the guidance of the teacher model, making its output similar to the teacher's and thus achieving knowledge transfer. Likewise, when faced with unrelated teacher models, the student's output may drift, leaving it unable to complete its tasks. Both prototype and rehearsal methods express knowledge through data. The replay method mitigates forgetting by revisiting past knowledge, but in the FCL setting, such simple replay can cause privacy breaches.
Additionally, it is unclear whether using data from other clients for replay is beneficial to one's own local training. In contrast, prototype networks use prototypes in a sample space to compress data and obtain knowledge samples. However, to apply this method in FCL, it is necessary to ensure the consistency of the sample space.

§ FUTURE WORK AND DISCUSSION

In this survey, we explore the origins of FCL and discuss why federated learning and continual learning should be combined. We then define a fundamental problem in federated continual learning called “spatial-temporal catastrophic forgetting” and conduct experiments with FedAvg to verify its severe impact on the performance of the global model. In Sec. <ref>, we summarize two generic frameworks, namely synchronous FCL and asynchronous FCL. The focus of synchronous FCL lies in the aggregation of local models, which is a primary stage of FCL, while asynchronous FCL tries to move from aggregation-based to knowledge-fusion-based designs. We believe that knowledge-based FCL is the most promising future direction, although existing methods currently mix synchronous and asynchronous elements. In Sec. <ref>, we classify the existing knowledge fusion techniques of federated learning according to the forms in which knowledge is expressed at different stages of model training. The following issues are worth exploring, and we believe they could form future avenues of research. Trustworthy FCL: One of the biggest challenges in promoting FL techniques is striking a balance between privacy protection and model utility, a challenge also confronted by FCL. Generally, enhanced privacy comes at the cost of decreased model performance. To address this issue, Yang proposes Trustworthy Federated Learning to incorporate secure and reliable mechanisms into distributed federated modeling. <cit.> first reveals the incompatibility of privacy and utility in FL, formulates it as a multi-objective optimization problem, and applies the no-free-lunch theorem in FL <cit.> to three privacy-preserving mechanisms, namely differential privacy, secure multi-party computation, and homomorphic encryption. In addition to privacy and utility, an FL method should also consider algorithmic efficiency, robustness to outliers, fairness of sampling, incentive mechanisms, and the interpretability of the whole model <cit.>. <cit.> designs measurements of privacy leakage, utility loss, and training cost for various privacy protection approaches, and finds Pareto-optimal solutions for all considered objectives. These conflicts between objectives, especially the contradiction between utility and privacy, also exist in FCL, and the dilemma between stability and plasticity makes the problem more complicated and challenging. A possible solution, worth researching, is to seek the Pareto front of the multi-objective optimization and determine the equilibrium point based on the specific requirements of users. Delayed Decision-Making: Owing to the intrinsic nature of deep learning, the performance of a model is generally weak at the beginning and gradually strengthens while learning from consecutively arriving samples <cit.>. Clearly, if uncertain samples are subjected to delayed decision-making, the average accuracy can be significantly improved, which is the underlying assumption of three-way decision (3WD) <cit.>.
3WD divides the decision space into three mutually disjoint regions, signifying positive, negative, and uncertain decisions, respectively; the decision on uncertain samples is delayed until the learner has sufficient capacity to predict with a high level of confidence <cit.>. Some scholars have already incorporated 3WD into CL, taking advantage of the forward transfer of knowledge in CL <cit.> to improve model performance. However, there is not yet any study incorporating 3WD into FCL. Rational Collaboration: Making all clients collaborate does not necessarily result in the best performance; FL is profitable only when the benefits outweigh the costs. Cui et al. <cit.> advance a rational collaboration notion called “collaboration equilibrium” and propose an approach based on Pareto optimization to identify the optimal collaborators. Besides, edge devices sometimes upload unreliable data, either intentionally (data poisoning attacks) or unintentionally (low-quality data) <cit.>, which may cause difficulties in model aggregation. <cit.> introduces “reputation” as a metric and proposes a reliable device selection scheme based on reputation to achieve rational collaboration. Another way to achieve rational collaboration is to motivate clients to use high-quality data in the training process. <cit.> implements a game-theoretic mechanism that rewards the federated participants only when they utilize high-quality data, and adopts an equilibrium strategy to ensure that the reward for each client achieves its optimal value. Due to the heterogeneity of old and new tasks in FCL, the problem of rational collaboration becomes more complex and has a greater impact on model performance. Convergence Time: Due to the need for multiple rounds of knowledge communication and model iteration, FL often implies a longer convergence time <cit.>. Some methods used for overcoming catastrophic forgetting in CL, such as rehearsal and generative replay, can result in slower convergence <cit.>. As the combination of FL and CL, FCL still has room for improvement in this area. Theoretically, reducing convergence time is crucial for efficient FCL training, since it enables quicker deployment of updated models and more efficient utilization of computational resources. Numerous rounds of communication also inevitably increase the risk of privacy breaches or security vulnerabilities, because they extend the duration of information exposure and thus weaken the privacy and security guarantees of FCL. Besides, the users of edge nodes also expect a shorter convergence time. However, most existing works on FCL focus only on accuracy; researchers need to take convergence time into account and develop new techniques to improve the efficiency of FCL algorithms. Recommendation System: <cit.> incorporates FL into a reinforcement learning model to provide more secure daily schedule recommendations. <cit.> designs a distributed recommendation system via FL to enable a group of agents to collaboratively learn the tastes of various users. However, since user preferences and item catalogs vary over time <cit.>, a federated continual recommendation system needs to be deployed to provide up-to-date recommendations that align with the current preferences of users. Large Language Model: The development of large language models (LLMs) has extensively promoted the application of artificial intelligence in various fields <cit.>. However, the risk of privacy leakage deters some companies from adopting LLMs.
To pre-train LLMs in specialized domains while protecting data privacy, researchers have integrated FL into LLMs. <cit.> proposes a communication-efficient FL framework for pre-training BERT by training the shallower layers progressively. FedBERT <cit.> allows the clients to fine-tune private NLP tasks individually after pre-training the global model. There are also some recent works exploring the use of LLMs to address the challenges in CL. <cit.> alleviates temporal catastrophic forgetting by learning to prompt the pre-trained LLM dynamically. CPT <cit.> continually post-trains LLMs with incremental unlabeled domain corpora to expand the knowledge of LLMs without forgetting; post-training <cit.> refers to the technique that reduces bias introduced by non-review data and fuses domain knowledge into LLMs. <cit.> inserts capsule layers into LLMs to encourage knowledge transfer among tasks and isolate task-specific knowledge. Generally, despite its practical significance, research on combining FCL and LLMs is still limited <cit.>. Other industries can also apply FCL techniques to their needs. <cit.> proposes a federated continual blockchain framework for physiological signal classification. <cit.> recently utilizes FCL to assist human auditors, since it can better satisfy the “real-time” assessment of digital accounting journal entries. <cit.> facilitates urban traffic forecasting with a peer-to-peer FCL framework. As stated above, FCL is still a newly emerging research area, with many promising avenues yet to be explored.

§ ACKNOWLEDGMENTS

This work was supported by the Natural Science Foundation of Sichuan Province (No. 2022NSFSC0528), the Sichuan Science and Technology Program (2022ZYD0113), and the Beijing Natural Science Foundation (4212021).

Xin Yang (Member, IEEE) received the Ph.D. degree in computer science from Southwest Jiaotong University, Chengdu. He is currently a Professor at the School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics. He has authored more than 70 research papers in refereed journals and conferences. His research interests include machine learning, continual learning, and multi-granularity learning.

Hao Yu received the B.S. degree from Southwest Petroleum University, Chengdu, China, in 2022. He is now pursuing an M.S. degree at the School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics. His main research interests include federated continual learning and continual learning.

Xin Gao received the B.S. degree from Sichuan University, Chengdu, China, in 2021. She is now pursuing an M.S. degree at the School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics. Her main research interests include federated continual learning and granular computing.

Hao Wang is a Research Fellow at Nanyang Technological University (NTU), Singapore. He received his Ph.D. in Computer Science from Southwest Jiaotong University. Before joining NTU, he was with Zhejiang Lab. He was also a Visiting Student with the University of Illinois at Chicago (UIC) for two years. His research interests include lifelong and continual learning, sentiment analysis, multi-view learning, NLP, and spatio-temporal data mining.

Junbo Zhang (Senior Member, IEEE) is a Senior Researcher of JD Intelligent Cities Research.
He is leading the Urban AI Product Department of JD iCity at JD Technology, as well as the AI Lab of JD Intelligent Cities Research. Prior to that, he was a researcher at Microsoft Research Asia (MSRA). He has published over 50 papers in spatio-temporal data mining and AI, urban computing, deep learning, and federated learning. He serves as an Associate Editor of ACM Transactions on Intelligent Systems and Technology. He has received a number of honors, including the Second Prize of the Natural Science Award of the Ministry of Education in 2021, the 22nd China Patent Excellence Award in 2021, the ACM Chengdu Doctoral Dissertation Award in 2016, and the Chinese Association for Artificial Intelligence (CAAI) Excellent Doctoral Dissertation Nomination Award in 2016. He is a senior member of CCF and IEEE, and a member of ACM.

Tianrui Li (Senior Member, IEEE) received the B.S., M.S., and Ph.D. degrees from Southwest Jiaotong University, Chengdu, China, in 1992, 1995, and 2002, respectively. He was a Post-Doctoral Researcher with the Belgian Nuclear Research Centre, Mol, Belgium, from 2005 to 2006, and a Visiting Professor with Hasselt University, Hasselt, Belgium, in 2008; the University of Technology, Sydney, Australia, in 2009; and the University of Regina, Regina, Canada, in 2014. He is currently a Professor and the Director of the Key Laboratory of Cloud Computing and Intelligent Techniques, Southwest Jiaotong University. He has authored or co-authored over 300 research papers in refereed journals and conferences. His research interests include big data, machine learning, data mining, granular computing, and rough sets.
http://arxiv.org/abs/2312.16475v1
{ "authors": [ "Xin Yang", "Hao Yu", "Xin Gao", "Hao Wang", "Junbo Zhang", "Tianrui Li" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231227084739", "title": "Federated Continual Learning via Knowledge Fusion: A Survey" }
Faculty of Physics, Urmia University of Technology, Urmia, Iran ([email protected])
Faculty of Physics, Urmia University of Technology, Urmia, Iran
School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran, Iran ([email protected])
Faculty of Physics, Semnan University, P.O. Box 35195-363, Semnan, Iran

We investigate the relationship between the quantum speed limit time and the non-Markovianity of an atom in structured environments. We show that there exists an inverse relation between them, which means that the non-Markovian feature of the quantum process leads to a speedup of the process. Our results might shed light on the relationship between the speedup of quantum evolution and the backflow of information from the environment to the system.

Quantum speed limit and non-Markovianity in structured environments
Saeed Haddadi 0000-0002-1596-0763
January 14, 2024
===================================================================

§ INTRODUCTION

How to accelerate the evolution of quantum systems is one of the most important questions in quantum mechanics. Within this theory, the quantum speed limit (QSL) quantifies the minimum evolution time for a quantum system to pass through a predetermined distance <cit.>. It refers to the shortest possible time required for a quantum system to evolve from an arbitrary initial state to a target final state <cit.>. The QSL time and its applications have been extensively studied both experimentally and theoretically <cit.>. Moreover, the QSL time plays a particularly important role in the development of quantum information protection and quantum optimal control theory. In recent years, there has been a particular focus on utilizing the advantages of the QSL time in quantum batteries <cit.>. A deeper knowledge of the QSL has led to the extension of the speed limit to classical systems <cit.>. There is a strong correlation between vanishing QSL times and emerging classical behavior due to the reduced uncertainty in quantum observables <cit.>. In Ref. <cit.>, the application of the QSL time to quantum resource theories has been investigated. Initially, only isolated systems were studied within the QSL framework <cit.>. Mandelstam and Tamm (MT) introduced the minimal time for evolution from an initial state to an orthogonal state for a closed quantum system as τ ≥ πħ/(2ΔE), where (ΔE)² = ⟨ψ|H²|ψ⟩ − (⟨ψ|H|ψ⟩)² is the energy variance of the system <cit.>. This bound is known as the MT bound. Another bound, related to the mean energy E = ⟨ψ|H|ψ⟩, was then derived by Margolus and Levitin (ML) as τ ≥ πħ/(2E), where it is assumed that the ground-state energy E_0 is equal to zero <cit.>. This bound is known as the ML bound. A unified QSL bound for the unitary evolution of closed quantum systems is obtained by combining the ML and MT bounds as <cit.> τ ≥ max{πħ/(2ΔE), πħ/(2E)}. The ML and MT bounds have been expanded and improved using a wide range of metrics, including the trace distance, the Bures angle, relative purity, and many others <cit.>. The distance between generalized Bloch vectors can also be used to obtain tight QSL bounds for both unitary and non-unitary processes <cit.>. It should be noted that in the real world, quantum systems are open and their evolution is non-unitary <cit.>. A variety of approaches can be used to analyze the behavior of open systems, including the master equation formalism and the quantum trajectory method. The master equation is a central tool for studying the dynamics of open systems.
It provides a mathematical description of how the system's density matrix changes over time. The nature of the interaction between a quantum system and its environment can influence its dynamics and evolution. The system-environment interaction is most often analyzed from the Markovian and non-Markovian perspectives <cit.>. In a Markovian regime, the evolution of the system is purely determined by its current state; the system-environment interactions occur over a much shorter timescale than the significant system evolution. However, in a non-Markovian regime, system-environment interactions last longer than the intrinsic evolution of the system <cit.>. An important feature of non-Markovian regimes is that the past interaction of the system with its environment determines its future evolution. Due to the unavoidable interaction between a quantum system and its environment, the decoherence caused by this coupling can have a significant effect on quantum statistics.

The ratio between the QSL time and the actual time quantifies the possibility of speeding up the quantum evolution. QSL bounds are saturated, with no potential for speedup, when the ratio equals one. However, the QSL bound is unsaturated if the ratio is less than one, and there exists potential for speeding up the evolution of the dynamical system. In Ref. <cit.>, it has been shown that the non-Markovian effect can accelerate quantum evolution <cit.>. Both theoretically and experimentally, non-Markovianity regulation has been proposed as a mechanism for controlling dynamical speedup. Moreover, it has been shown that non-Markovian effects induce unsaturated QSL bounds in open dynamics <cit.>. It is important to note that dynamical speedup obtained in this way depends strongly on feedback from the environment. Therefore, it is essential to investigate sources beyond non-Markovianity that can generate unsaturated QSL bounds <cit.>. Motivated by this, we investigate the possibility of quantum speedup using a convenient cavity-based engineered environment. We show how Markovian environments consisting of two cavities can accelerate the dynamics of an artificial atom interacting with a pseudomode. In the considered model, the coupling between the pseudomodes, the atom-pseudomode coupling, and the detuning between atom and pseudomode can have significant impacts on the QSL time and non-Markovianity.

The work is organized as follows. First, we present the physical model and its solution in Sec. <ref>. Then, we discuss the QSL time and non-Markovianity for the considered model in Sec. <ref>. Next, we turn to the results and discussion in Sec. <ref>. Finally, we provide a brief conclusion of the most important results in Sec. <ref>.

§ MODEL AND PSEUDOMODE METHOD

The considered model consists of a two-level atom interacting with a Lorentzian structured environment. Figure <ref> shows a schematic representation of the scheme. For this model, the Hamiltonian can be expressed as <cit.>

Ĥ = (ω_s/2) σ̂_z + ω_1 â_1^† â_1 + ω_2 â_2^† â_2 + k (â_1^† σ̂_- + â_1 σ̂_+) + J (â_1^† â_2 + â_1 â_2^†),

where σ̂_z = |e⟩⟨e| − |g⟩⟨g| is the ordinary Pauli operator in the z-direction of the atom, ω_s is the transition frequency of the atom, σ̂_+ (σ̂_-) is the raising (lowering) operator of the atom, and ω_1 and ω_2 are the frequencies of the modes inside the Markovian environments represented by cavity 1 and cavity 2, respectively.
Let us take ω_1 = ω_2 = ω and set ω_s = ω + δ, where δ is the atom-pseudomode detuning (ω_s being the atomic transition frequency defined above). Moreover, â_1 (â_1^†) and â_2 (â_2^†) are the annihilation (creation) operators. In what follows, we study the dynamics using pseudomode theory <cit.>. This approach relies on the relationship between the atom dynamics and the shape of the spectral distribution of the environment. The significance of exact approaches is of current interest due to the experimental realization of quantum systems. As shown in Fig. <ref>, the environment can be depicted by two nondegenerate pseudomodes that leak into the Markovian environments (cavities 1 and 2) with dissipation rates Γ_1 and Γ_2. Here, the two-level atom interacts only with the first pseudomode Pm_1 (with coupling strength k), which is in turn coupled to the second pseudomode Pm_2 (with coupling strength J). The dynamics of the whole system for a Lorentzian spectral distribution can be obtained from the following exact pseudomode master equation:

ρ̇(t) = −i[Ĥ, ρ(t)] − ∑_{n=1,2} (Γ_n/2) [â_n^† â_n ρ(t) − 2 â_n ρ(t) â_n^† + ρ(t) â_n^† â_n].

The above master equation represents the coherent interaction between the two-level atom and the two pseudomodes in the presence of pseudomode decay caused by the interaction with the Markovian environments. Here, it is assumed that the atom is initially in the excited state |e_a⟩ while both pseudomodes Pm_1 and Pm_2 are in their ground states |g_1 g_2⟩ (the subscripts 1 and 2 label the pseudomodes); that is, the initial state of the overall system is ρ(0) = |e_a g_1 g_2⟩⟨e_a g_1 g_2|. Considering that there is only one excitation in the whole system, including the two-level atom and the two cavities, the state of the system at time t can be written as

ρ(t) = (1 − η(t)) |ψ(t)⟩⟨ψ(t)| + η(t) |g_a g_1 g_2⟩⟨g_a g_1 g_2|,

where 0 ≤ η(t) ≤ 1 with η(0) = 0, and we have

|ψ(t)⟩ = c_1(t) |g_a e_1 g_2⟩ + c_2(t) |g_a g_1 e_2⟩ + h(t) |e_a g_1 g_2⟩,

with h(0) = 1 and c_1(0) = c_2(0) = 0. For convenience, it is appropriate to consider the following unnormalized state:

|ψ̃(t)⟩ ≡ √(1 − η(t)) |ψ(t)⟩ = c̃_1(t) |g_a e_1 g_2⟩ + c̃_2(t) |g_a g_1 e_2⟩ + h̃(t) |e_a g_1 g_2⟩,

where h̃(t) = √(1 − η(t)) h(t) is the probability amplitude for the two-level atom to be in its excited state, while c̃_n(t) ≡ √(1 − η(t)) c_n(t) (with n = 1, 2) is the probability amplitude for the corresponding pseudomode to be in its excited state. Using the above unnormalized state, the density matrix of the total system can be rewritten as

ρ(t) = |ψ̃(t)⟩⟨ψ̃(t)| + η(t) |g_a g_1 g_2⟩⟨g_a g_1 g_2|.

The time-dependent amplitudes h̃(t), c̃_1(t), and c̃_2(t) can be obtained by solving the following coupled differential equations:

i dh̃(t)/dt = (ω + δ) h̃(t) + k c̃_1(t),
i dc̃_1(t)/dt = (ω − iΓ_1/2) c̃_1(t) + k h̃(t) + J c̃_2(t),
i dc̃_2(t)/dt = (ω − iΓ_2/2) c̃_2(t) + J c̃_1(t).

The coefficients h̃(t), c̃_1(t), and c̃_2(t) can be obtained from this set of coupled differential equations using standard Laplace transforms. Knowing these, the time-evolved density matrix of the two-level atom is obtained by taking the partial trace over the two pseudomodes, which we use in the next section.
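As an illustration, the coupled amplitude equations can also be integrated numerically rather than by Laplace transforms. The following SciPy sketch, with illustrative parameter values of our own choosing (in units of Γ_1), returns the excited-state population |h̃(t)|²; setting ω = 0 amounts to working in a rotating frame, which leaves the moduli of the amplitudes unchanged.

import numpy as np
from scipy.integrate import solve_ivp

k, J, delta = 0.75, 0.5, 0.0   # couplings and atom-pseudomode detuning (assumed values)
G1, G2 = 1.0, 1.0              # pseudomode decay rates
omega = 0.0                    # common mode frequency; removed by the rotating frame

def rhs(t, y):
    h, c1, c2 = y
    dh  = -1j * ((omega + delta) * h + k * c1)
    dc1 = -1j * ((omega - 0.5j * G1) * c1 + k * h + J * c2)
    dc2 = -1j * ((omega - 0.5j * G2) * c2 + J * c1)
    return [dh, dc1, dc2]

t = np.linspace(0.0, 20.0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), y0=[1.0 + 0j, 0j, 0j],
                t_eval=t, rtol=1e-10, atol=1e-12)
pop = np.abs(sol.y[0]) ** 2    # excited-state population |h~(t)|^2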
§ QSL TIME AND NON-MARKOVIANITY

§.§ QSL time

We first deal with the QSL time using the method introduced in Ref. <cit.> for open quantum systems. A comprehensive bound on the QSL time for any arbitrary initial state can be defined as

τ_QSL = max{1/Λ_τ^op, 1/Λ_τ^tr, 1/Λ_τ^hs} sin²[Θ(ρ_0, ρ_τ)] tr[ρ_0²],

where Θ(ρ_0, ρ_t) = arccos(√(tr[ρ_0 ρ_t]/tr[ρ_0²])) is a function of the relative purity between the initial state ρ_0 and the state ρ_t at an arbitrary time, determined by the time-dependent non-unitary master equation ρ̇_t = ℒ_t(ρ_t), and

Λ_τ^op = (1/τ) ∫_0^τ dt ‖ℒ_t(ρ_t)‖_op, Λ_τ^tr = (1/τ) ∫_0^τ dt ‖ℒ_t(ρ_t)‖_tr, Λ_τ^hs = (1/τ) ∫_0^τ dt ‖ℒ_t(ρ_t)‖_hs,

where ‖ℒ_t(ρ_t)‖_op = λ_1, ‖ℒ_t(ρ_t)‖_tr = ∑_i λ_i, and ‖ℒ_t(ρ_t)‖_hs = √(∑_i λ_i²) are the operator norm, the trace norm, and the Hilbert-Schmidt norm, respectively. Here, the λ_i are the singular values of ℒ_t(ρ_t) and λ_1 is the largest of them. When the denominator is Λ_τ^op or Λ_τ^tr, we have a generalized ML-type QSL bound for open quantum systems, whereas for Λ_τ^hs we have an MT-type bound on the QSL time. According to Ref. <cit.>, the ML-type bound based on the operator norm provides the tightest QSL bound. Using this bound and a given actual driving time τ, one can estimate the intrinsic speed of a dynamical evolution. In the case τ_QSL = τ, no further speedup is possible. In contrast, τ_QSL < τ indicates the possibility of speeding up the quantum dynamics. Based on the assumption that the two-level atom is initially in the excited state, as mentioned in the previous section, the time-evolved density matrix of the atom takes the form

ρ_t = |h̃(t)|² |e⟩⟨e| + (1 − |h̃(t)|²) |g⟩⟨g|.

The QSL time based on the tightest bound is then obtained as

τ_QSL = (1 − |h̃(τ)|²) / ((1/τ) ∫_0^τ |∂_t |h̃(t)|²| dt).

§.§ Non-Markovianity

During the evolution, the backflow of information from the environment to the system is responsible for the non-Markovian feature of the process. For its quantification, the degree of non-Markovianity can be defined as <cit.>

𝒩 = max_{ρ_1(0), ρ_2(0)} ∫_{σ>0} σ[t, ρ_1(0), ρ_2(0)] dt,

in which

σ[t, ρ_1(0), ρ_2(0)] = (d/dt) D[ρ_1(t), ρ_2(t)],

where D[ρ_1(t), ρ_2(t)] = (1/2) Tr|ρ_1(t) − ρ_2(t)| is the trace distance of a pair of states, with the trace norm of an operator |Â| = √(ÂÂ^†). Note that when σ[t, ρ_1(0), ρ_2(0)] > 0, there is a backflow of information from the environment to the system, so the evolution is non-Markovian and 𝒩 ≠ 0. However, if σ[t, ρ_1(0), ρ_2(0)] < 0 at all times, then 𝒩 = 0, meaning that the evolution is Markovian: information flows irreversibly from the system to the environment and no backflow occurs. As a result, the non-Markovianity measures the total backflow of information from the environment to the system. In Ref. <cit.>, it has been proved that the optimal pair of initial states maximizing the degree of non-Markovianity 𝒩 is ρ_1(0) = |e⟩⟨e| and ρ_2(0) = |g⟩⟨g|. Using the findings of the previous section and these optimal initial states, one finds D[ρ_1(t), ρ_2(t)] = |h̃(t)|². Therefore, the degree of non-Markovianity is obtained as

𝒩 = (1/2) [∫_0^τ |∂_t |h̃(t)|²| dt + |h̃(τ)|² − 1].
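Continuing the numerical sketch above, both quantities can be evaluated directly from the simulated population pop = |h̃(t)|² by discretizing the integrals; the trapezoidal rule and the variable names are our own choices.

import numpy as np

def qsl_and_nonmarkovianity(t, pop):
    # Evaluate tau_QSL (tightest, operator-norm bound) and N from the
    # excited-state population |h~(t)|^2 on a uniform time grid,
    # following the closed-form expressions above.
    dpop = np.gradient(pop, t)               # d|h~(t)|^2/dt
    abs_int = np.trapz(np.abs(dpop), t)      # integral of |d_t |h~(t)|^2|
    tau = t[-1] - t[0]
    tau_qsl = (1.0 - pop[-1]) / (abs_int / tau)
    n_measure = 0.5 * (abs_int + pop[-1] - 1.0)
    return tau_qsl, n_measure

tau_qsl, N = qsl_and_nonmarkovianity(t, pop)
# Consistency check against the closed relation between tau_QSL and N
# given in the next section: tau * (2N/(1-|h~(tau)|^2) + 1)**(-1).
tau_check = (t[-1] - t[0]) / (2 * N / (1.0 - pop[-1]) + 1.0)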
§ RESULTS AND DISCUSSION

Now everything is ready to study the QSL time and non-Markovianity based on the analytical expressions (<ref>) and (<ref>) obtained in the previous section. In Fig. <ref>, the non-Markovianity (<ref>) is plotted as a function of the atom-pseudomode coupling k/Γ_1 and the pseudomode-pseudomode coupling J/Γ_1. From this figure, it can be seen that in the absence of pseudomode-pseudomode coupling (J=0), the evolution is Markovian in the weak coupling regime k/Γ_1 < 0.25 and non-Markovian in the strong coupling regime k/Γ_1 > 0.25. In general, the non-Markovian feature of the evolution is revealed for strong atom-pseudomode coupling at any pseudomode-pseudomode coupling strength. Therefore, Fig. <ref> clearly shows the regions in which Markovian and non-Markovian dynamics take place. Interestingly, by employing Eqs. (<ref>) and (<ref>), it is possible to connect the QSL time and the degree of non-Markovianity as

τ_QSL = τ × (2𝒩/(1 − |h̃(τ)|²) + 1)^(−1).

From Eq. (<ref>), if the evolution is Markovian (𝒩 = 0), then the actual evolution time is equal to the QSL time, so we have a saturated bound for the QSL time. On the other hand, when the evolution is non-Markovian (𝒩 ≠ 0), the QSL time is shorter than the actual evolution time. In Fig. <ref>, the QSL time and non-Markovianity are plotted as a function of the actual evolution time τ in a weak coupling regime with k/Γ_1 = 0.1 for different values of the pseudomode-pseudomode coupling J/Γ_1. From Fig. <ref>(a), one can see that the QSL time decreases with increasing J/Γ_1. Besides, Fig. <ref>(b) shows that in this weak coupling regime, the non-Markovian feature of the evolution is revealed by increasing J/Γ_1. Thus, we conclude that the QSL time decreases with the emergence of the non-Markovian feature of quantum evolution. Notably, one can also see that in Markovian evolution (red solid line), the QSL time is equal to the actual evolution time (τ_QSL = τ) because 𝒩 = 0, which follows from Eq. (<ref>). In other words, the reduction of the QSL time originates from the non-Markovian feature of the quantum evolution. Fig. <ref> shows the QSL time and non-Markovianity as a function of the actual evolution time τ in a strong coupling regime with k/Γ_1 = 0.75 for different values of J/Γ_1. The results are similar to those observed in the weak coupling regime; however, the non-Markovian feature of the evolution can be observed in this case even when J/Γ_1 is zero, in good agreement with the results of Fig. <ref>. In Fig. <ref>(a), the QSL time is plotted as a function of k/Γ_1 for both the resonant (δ/Γ_1 = 0) and off-resonant (δ/Γ_1 = 5) cases. The QSL time decreases with increasing k/Γ_1 in both cases, and it is shorter in the off-resonant case than in the resonant case. In Fig. <ref>(b), we plot the non-Markovianity versus k/Γ_1. From this plot, we observe that the degree of non-Markovianity increases as the coupling between the atom and the first pseudomode strengthens, for both the resonant and off-resonant cases. By comparing Figs. <ref>(a) and <ref>(b), it can be concluded that there exists an inverse relation between the QSL time and non-Markovianity. The QSL time and non-Markovianity are plotted as a function of J/Γ_1 for two different values of k/Γ_1 in Fig. <ref>. We see that in the strong coupling regime (k/Γ_1 = 0.75), the QSL time is shorter than in the weak coupling regime (k/Γ_1 = 0.1), while the non-Markovianity attains its maximum value in the strong coupling regime. Again, we find an inverse relation between non-Markovianity and the QSL time, in good agreement with the previous figures. Finally, the QSL time and non-Markovianity are illustrated as a function of the detuning parameter δ/Γ_1 for two different values of k/Γ_1 in Fig. <ref>.
It is worth noting that the results are similar to the findings shown in the previous plots. Remarkably, the existence of an inverse relation between non-Markovianity and the QSL time is also confirmed in this case. Note that the ratio between the QSL time, τ_QSL, and the actual time, τ, quantifies the possibility of speeding up the quantum evolution. QSL bounds are saturated, with no potential for speedup, when τ_QSL/τ = 1. However, the QSL bound is unsaturated if τ_QSL/τ < 1, and there is potential for speeding up the evolution of the dynamical system. Our results show that τ_QSL/τ ≤ 1, indicating that both of these scenarios occurred in our study.

§ CONCLUSION AND OUTLOOK

Using a cavity-based engineered environment, we explored the possibility of manipulating the QSL time. In particular, we observed that by constructing Markovian environments consisting of two cavities, one can switch between quantum speed-down and speedup regimes for the dynamics of an artificial atom interacting with a pseudomode. Based on memory effects, the dynamics of open systems are divided into two categories: memoryless (Markovian) and with-memory (non-Markovian) dynamics. Notably, we observed that there exists an inverse relation between the QSL time and non-Markovianity; in other words, the non-Markovian feature of quantum evolution can lead to a quantum speedup of the dynamics. This feature is therefore of intrinsic interest in the context of controlling the memory effects associated with open systems. The parameters used to manipulate the non-Markovianity were the atom-pseudomode coupling, the pseudomode-pseudomode coupling, and the detuning between atom and pseudomode. We found that in a strong coupling regime, for both the atom-pseudomode and pseudomode-pseudomode couplings, the evolution is non-Markovian and a quantum speedup is revealed. This study provides new insights into the control of the QSL time for open systems, opening the door to further experimental development towards quantum thermodynamic devices and quantum computers.

§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT

Maryam Hadipour: Writing–original draft, Investigation. Soroush Haseli: Writing–original draft, Writing–review & editing, Supervision, Software, Investigation, Conceptualization. Saeed Haddadi: Writing–review & editing, Investigation, Formal analysis.

§ ACKNOWLEDGEMENTS

S. Haddadi was supported by Semnan University under Contract No. 21270.

§ DISCLOSURES

The authors declare that they have no known competing financial interests.

§ DATA AVAILABILITY

No datasets were generated or analyzed during the current study.

[Deffner2017] Deffner S and Campbell S 2017 J. Phys. A: Math. Theor. 50, 453001.
[Hilgevoord2002] Hilgevoord J 2002 Am. J. Phys. 70, 301.
[Deffner2017p] Deffner S 2017 New J. Phys. 19, 103018.
[Pfeifer1993] Pfeifer P 1993 Phys. Rev. Lett. 70, 3365.
[Bukov2019] Bukov M, Sels D and Polkovnikov A 2019 Phys. Rev. X 9, 011034.
[Funo2019] Funo K, Shiraishi N and Saito K 2019 New J. Phys. 21, 013006.
[Hegerfeldt2013] Hegerfeldt G C 2013 Phys. Rev. Lett. 111, 260501.
[Garcia-Pintos2019] Garcia-Pintos L P and del Campo A 2019 New J. Phys. 21, 033012.
[Garcia-Pintos2019p] Garcia-Pintos L P and del Campo A 2019 New J. Phys. 21, 033012.
[Kobe1994] Kobe D H and Aguilera-Navarro V C 1994 Phys. Rev. A 50, 933.
[Jones2010] Jones P J and Kok P 2010 Phys. Rev. A 82, 022107.
[Xu2016] Xu Z-Y 2016 New J. Phys. 18, 073005.
[Deffner2013a] Deffner S and Lutz E 2013 J. Phys. A: Math. Theor. 46, 335302.
[Shao2020] Shao Y, Liu B, Zhang M, Yuan H and Liu J 2020 Phys. Rev. Res. 2, 023299.
[Russell2014] Russell B and Stepney S 2014 Phys. Rev. A 90, 012303.
[Hu2020] Hu X, Sun S and Zheng Y 2020 Phys. Rev. A 101, 042107.
[Cheneau2012] Cheneau M et al 2012 Nature 481, 484.
[Campo2021] del Campo A 2021 Phys. Rev. Lett. 126, 180603.
[Taddei2013] Taddei M M, Escher B M, Davidovich L and de Matos Filho R L 2013 Phys. Rev. Lett. 110, 050402.
[Escher2011] Escher B M, de Matos Filho R L and Davidovich L 2011 Nat. Phys. 7, 406.
[Lam2021] Lam M R et al 2021 Phys. Rev. X 11, 011035.
[Poggi2019] Poggi P M 2019 Phys. Rev. A 99, 042116.
[Caneva2009] Caneva T, Murphy M, Calarco T, Fazio R, Montangero S, Giovannetti V and Santoro G E 2009 Phys. Rev. Lett. 103, 240501.
[Zhang2021] Zhang Y-J, Wei H, Yan W-B, Man Z-X, Xia Y-J and Fan H 2021 New J. Phys. 23, 113004.
[Marvian2015] Marvian I and Lidar D A 2015 Phys. Rev. Lett. 115, 210402.
[Bai2020] Bai S Y and An J H 2020 Phys. Rev. A 102, 060201.
[Campaioli2017] Campaioli F, Pollock F A, Binder F C, Celeri L, Goold J, Vinjanampathy S and Modi K 2017 Phys. Rev. Lett. 118, 150601.
[Hovhannisyan2013] Hovhannisyan K V, Perarnau-Llobet M, Huber M and Acın A 2013 Phys. Rev. Lett. 111, 240401.
[Binder2015] Binder F C, Vinjanampathy S, Modi K and Goold J 2015 New J. Phys. 17, 075015.
[OkuyamaMand2018] Okuyama M and Ohzeki M 2018 Phys. Rev. Lett. 120, 070402.
[Shanahan2018] Shanahan B, Chenu A, Margolus N and del Campo A 2018 Phys. Rev. Lett. 120, 070401.
[Bolonek-Lason2021] Bolonek-Lason K, Gonera J and Kosinski P 2021 Quantum 5, 482.
[Poggi2021] Poggi P M, Campbell S and Deffner S 2021 PRX Quantum 2, 040349.
[arXiv:2004.03078] Campaioli F, Yu C-s, Pollock F A and Modi K 2022 New J. Phys. 24, 065001.
[Mandelstam1991] Mandelstam L and Tamm I 1991 The uncertainty relation between energy and time in non-relativistic quantum mechanics (Selected Papers, Berlin: Springer, pp. 115).
[Margolus1998] Margolus N and Levitin L B 1998 Physica D 120, 188.
[Levitin2009] Levitin L B and Toffoli T 2009 Phys. Rev. Lett. 103, 160502.
[Jozsa1994] Jozsa R 1994 J. Mod. Opt. 41, 2315.
[Uhlmann1976] Uhlmann A 1976 Rep. Math. Phys. 9, 273.
[Luo2004] Luo S and Zhang Q 2004 Phys. Rev. A 69, 032106.
[Wu2020] Wu S-x and Yu C-s 2020 Sci. Rep. 10, 5500.
[Deffner2013] Deffner S and Lutz E 2013 Phys. Rev. Lett. 111, 010402.
[Wu2018] Wu S-x and Yu C-s 2018 Phys. Rev. A 98, 042132.
[Cai2017] Cai X and Zheng Y 2017 Phys. Rev. A 95, 052104.
[del2013] del Campo A, Egusquiza I L, Plenio M B and Huelga S F 2013 Phys. Rev. Lett. 110, 050403.
[Ektesabi2017] Ektesabi A, Behzadi N and Faizi E 2017 Phys. Rev. A 95, 022115.
[Liu2016] Liu H B, Yang W L, An J H and Xu Z Y 2016 Phys. Rev. A 93, 020105.
[Breuer2007] Breuer H-P and Petruccione F, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2007).
[Davies1976] Davies E B, Quantum Theory of Open Systems (Academic Press, London, 1976).
[Wolf2008] Wolf M M, Eisert J, Cubitt T S and Cirac J I 2008 Phys. Rev. Lett. 101, 150402.
[Breuer2009] Breuer H P, Laine E M and Piilo J 2009 Phys. Rev. Lett. 103, 210401.
[Rivas2010] Rivas A, Huelga S F and Plenio M B 2010 Phys. Rev. Lett. 105, 050403.
[Luo2012] Luo S, Fu S and Song H 2012 Phys. Rev. A 86, 044101.
[Zeng2011] Zeng H S, Tang N, Zheng Y P and Wang G Y 2011 Phys. Rev. A 84, 032118.
[He2017] He Z, Zeng H S, Li Y, Wang Q and Yao C 2017 Phys. Rev. A 96, 022106.
[Zhang2014] Zhang Y J, Han W, Xia Y J, Cao J P and Fan H 2014 Sci. Rep. 4, 4890.
[Cimmarusti2015] Cimmarusti A D, Yan Z, Patterson B D, Corcos L P, Orozco L A and Deffner S 2015 Phys. Rev. Lett. 114, 233602.
[Zhang2015] Zhang Y J, Han W, Xia Y J, Cao J P and Fan H 2015 Phys. Rev. A 91, 032112.
[Cianciaruso2017] Cianciaruso M, Maniscalco S and Adesso G 2017 Phys. Rev. A 96, 012105.
[Sun2015] Sun Z, Liu J, Ma J and Wang X 2015 Sci. Rep. 5, 8444.
[Xu2019] Xu K, Zhang G F and Liu W M 2019 Phys. Rev. A 100, 052305.
[Xu2014] Xu Z Y, Luo S, Yang W L, Liu C and Zhu S 2014 Phys. Rev. A 89, 012307.
[Teittinen2019] Teittinen J, Lyyra H and Maniscalco S 2019 New J. Phys. 21, 123041.
[Berrada2018] Berrada K 2018 Physica E 95, 6.
[Shahri2023] Shahri Y, Hadipour M, Haddadi S, Dolatkhah H and Haseli S 2023 Phys. Lett. A 470, 128783.
[Gholizadeh2023] Gholizadeh A, Hadipour M, Haseli S, Haddadi S and Dolatkhah H 2023 Commun. Theor. Phys. 75, 075101.
[Hadipour2023] Hadipour M, Haseli S, Dolatkhah H, Haddadi S and Czerwinski A 2022 Photonics 9, 875.
[Mazzola2009] Mazzola L, Maniscalco S, Piilo J, Suominen K A and Garraway B M 2009 Phys. Rev. A 80, 012104.
[Xu2021] Xu K, Zhu H J, Zhang G F and Liu W M 2021 Phys. Rev. E 104, 064143.
[pseudomode1] Garraway B M 1997 Phys. Rev. A 55, 2290.
[pseudomode2] Garraway B M 1997 Phys. Rev. A 55, 4636.
[pseudomode3] Mazzola L, Maniscalco S, Piilo J, Suominen K A and Garraway B M 2009 Phys. Rev. A 80, 012104.
[pseudomode4] Pleasance G, Garraway B M and Petruccione F 2020 Phys. Rev. Research 2, 043058.
http://arxiv.org/abs/2312.16150v1
{ "authors": [ "Maryam Hadipour", "Soroush Haseli", "Saeed Haddadi" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231226181843", "title": "Quantum speed limit and non-Markovianity in structured environments" }
Demonstration of Real-Time Precision Optical Time Synchronization in a True Three-Node Architecture
===================================================================================================

§ ABSTRACT

Multi-node optical clock networks will enable future studies of fundamental physics and enable applications in quantum and classical communications as well as navigation and geodesy. We implement the first ever multi-node optical clock network with real-time, relative synchronization over free-space communication channels and precision on the order of 10 fs, realized as a three-node system in a hub-and-spoke topology. In this paper we describe the system and its performance, including a new, independent, out-of-loop verification of two-way optical time synchronization.

§ INTRODUCTION

The most precise optical atomic clocks have now demonstrated frequency stability at levels that reach below 1×10^-17 in only 1000 s <cit.>. Networks of these optical clocks could be utilized in studies of the fundamental physics of relativity <cit.>, searches for ultralight <cit.> or topological dark matter <cit.>, gravitational wave detection <cit.>, and geodesy <cit.>. Furthermore, a system synchronized to this level has many applications, such as quantum networking <cit.>, high-rate data transfer over optical links <cit.>, and global navigation satellite systems <cit.>. However, to establish an optical clock network that achieves the fractional frequency stabilities individually realized by these clocks, there must exist a method of synchronization with performance even better than that of the clocks themselves. Additionally, high-performance synchronization systems have great value in distributing time from an ultra-stable but bulky and power-hungry lattice or ion clock to a clock with lower size, weight, and power (SWaP) <cit.>, to realize increased time and frequency stability in environments where low-instability clocks could not otherwise operate. There have been several demonstrations of time synchronization at the femtosecond level (i.e., time deviation (TDEV) in the single-digit femtosecond range from 1 s to 1 hour) over continuous links <cit.>. Heretofore, these methods have only been demonstrated for a two-node point-to-point link, while many applications of an optical clock network require multiple nodes. Bodine et al. <cit.> made progress beyond a two-node, single-link system by constructing a mid-link relay and demonstrating that it did not degrade the overall performance. However, the relay was not itself synchronized or syntonized to the Primary node, and so did not actively participate in the timing network. Moreover, the two endpoints shared a common clock and so were only able to measure link noise rather than full synchronization. In what follows, we present results for the first demonstration of a true three-node, three-clock, free-space, real-time optical time synchronization network with femtosecond-level performance.

§ NETWORK DESIGN

Our multi-node system, illustrated schematically in Fig. <ref>a, implements a hub-and-spoke topology where the central Primary node distributes time to two spatially separated Secondary nodes; there is no optical time synchronization link between Secondary sites 1 and 2. In this experiment, all three nodes are located in the same laboratory, but each node is self-contained within a single electronics rack and each has an independent cavity-stabilized laser, as shown in Fig. <ref>b.
The experiment involved two independent free-space links set up on an optics table and connected to each node by 10 m of fiber. The free-space path from the Primary to the Secondary 1 node was approximately 11 m, and from the Primary to the Secondary 2 node approximately 4 m. Additionally, direct comparisons of the optical frequency combs at the Primary and Secondary nodes are performed over a 3 m out-of-loop verification fiber, depicted as the looped fiber between nodes in Fig. <ref>a.

We employ a two-way optical frequency comb-based time synchronization protocol similar to <cit.>, using “linear optical sampling,” or cross-correlation measurements from optical heterodyne beats between frequency comb pulses <cit.>. Local comb A at the Primary site and remote combs Y and Z at the Secondary sites are each individually locked to a stable reference cavity laser such that the repetition rate of the combs is f_rep ≈ 200 MHz. These three clock combs are the physical oscillators that are network-synchronized to the femtosecond level. Offset-repetition-rate linear optical sampling requires an additional frequency comb, referred to as the transfer comb, which is located at the Primary site and is locked with a slightly lower repetition rate f_rep − Δf_rep, where Δf_rep ≈ 2.08 kHz. Comb pulses are exchanged between the transfer comb at the Primary site and the remote combs Y and Z at the Secondary sites over a free-space optical link and generate interferograms (IGMs) in the transceiver (Fig. <ref>c) at each node. IGMs are generated every 1/Δf_rep ≈ 481 μs.

Figure <ref>c shows a simplified optical setup of the two-way optical time synchronization experiment. At the Primary node, light from the transfer comb X is split. One part is optically mixed with local comb A and detected on detector 3, generating the local/transfer IGM, while the remaining transfer comb light is split again to be used separately for the two transceivers at the Primary site. In each transceiver, the transfer comb light is split one last time, reserving some of the light to be optically mixed with the received remote comb light, generating the remote/transfer IGM on detectors 1 and 2, while the majority of the light is sent over the free-space optical links to each of the Secondary nodes. At each site the arrival time of the IGM is measured with respect to that site's local timebase, and the timestamps are then transmitted between sites over a free-space optical communications link co-propagating with the frequency comb pulses. The timestamps are mathematically combined to yield the time of flight between each site and the offset between each Secondary site's clock and the Primary site's clock. The Secondary site clock combs are adjusted through a real-time feedback controller to null the clock offsets by varying the lock offset frequency f_opt between the clock comb and the optical reference cavity.
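To illustrate the timestamp-combination step, the following is a minimal sketch of the standard two-way time transfer arithmetic under the simplifying assumption of a reciprocal (equal in both directions) path delay; the function, the variable names, and the toy numbers are ours, and the actual processing chain involves additional calibration terms.

def two_way_solution(t_tx_primary, t_rx_secondary, t_tx_secondary, t_rx_primary):
    # Combine two-way timestamps (each measured against the local timebase)
    # into the Secondary-minus-Primary clock offset and the one-way time of
    # flight, assuming a reciprocal path delay.
    forward = t_rx_secondary - t_tx_primary   # contains +offset and +ToF
    backward = t_rx_primary - t_tx_secondary  # contains -offset and +ToF
    clock_offset = 0.5 * (forward - backward)
    time_of_flight = 0.5 * (forward + backward)
    return clock_offset, time_of_flight

# Toy example (seconds): Secondary clock 3.0 ns fast, 40 ns link delay.
offset, tof = two_way_solution(0.0, 43.0e-9, 100.0e-9, 137.0e-9)
# -> offset = 3.0e-9, tof = 40.0e-9

The real-time feedback controller described above then steers the Secondary clock comb so that this measured clock offset is nulled.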
Since the 200 MHz repetition rate of the frequency combs leads to a timing ambiguity modulo 1/f_rep, we utilize a coarse timing link <cit.> for each Secondary node to overcome this 5 ns ambiguity. We send coarse timing information over the same free-space optical path that is used to transmit the frequency comb light; a waveguide electro-optic modulator is used to encode a pseudo-random binary sequence on the phase of a continuous-wave laser at 1531 nm. This optical communications link is also the channel used to exchange the timestamps of the linear optical sampling measurements (mentioned above), enabling closure of the time synchronization loop.

With any time synchronization experiment, it is important to characterize the performance using an out-of-loop verification technique. For two-node systems, a heterodyne method <cit.> has been employed to ascertain the level of synchronization between the Primary and Secondary nodes. Our implementation of a multi-node system enables the same measurements for characterization, but it also distinctively enables characterization of the performance of the two independent Secondary nodes, which have no direct communication between them, and removes the concomitant dependence on the local clock from the characterization. In both the latter case and the standard two-node implementation, the two nodes being compared have their f_ceo's offset by a fixed amount (1 MHz in our case). For perfect synchronization, the beat note oscillates at a frequency given by this difference in f_ceo of the two frequency combs. Variations in the relative amplitude of the detected signal enable measurement and characterization of the level of precision of the time synchronization achieved in the system. We note that performing the verification measurements between the Primary node and both Secondary nodes, as well as between the two Secondary nodes, is analogous to a three-corner-hat clock measurement, in the sense that such a characterization of a three-node optical time synchronization experiment is more robust against unintended measurement bias.

§ RESULTS

Utilizing the heterodyne out-of-loop verification technique described previously, we characterize the performance of the time synchronization protocol with modified Allan deviation (MDEV, mod σ_y(τ)) and time deviation (TDEV, σ_x(τ)) measurements over a few hours – first under direct fiber connections and then over short free-space links. The modified Allan deviation differs from the standard Allan deviation in that it numerically varies the measurement bandwidth depending on the averaging period τ. This variation of the binned, time-averaged phase measurements allows measurements with white phase noise to be disambiguated from those with white frequency noise <cit.>. After calculation of the modified Allan deviation, the time deviation is simply given by σ_x(τ) = (τ/√3) mod σ_y(τ) <cit.>.
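As an illustration of these statistics, the following sketch estimates the modified Allan deviation from equally spaced phase (time-error) samples using the standard overlapping estimator, and converts it to the time deviation via σ_x(τ) = (τ/√3) mod σ_y(τ); the implementation details and names are ours.

import numpy as np

def mdev_tdev(x, tau0, m):
    # Modified Allan deviation and time deviation at tau = m*tau0,
    # estimated from phase data x (seconds) sampled every tau0 seconds.
    x = np.asarray(x, dtype=float)
    N = x.size
    if N < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    tau = m * tau0
    second_diff = x[2 * m:] - 2.0 * x[m:N - m] + x[:N - 2 * m]
    # Moving sum of m consecutive second differences (the "modified" average
    # that varies the effective measurement bandwidth with tau).
    csum = np.concatenate(([0.0], np.cumsum(second_diff)))
    inner = csum[m:] - csum[:-m]
    mdev = np.sqrt(np.mean(inner ** 2) / (2.0 * m**4 * tau0**2))
    tdev = tau / np.sqrt(3.0) * mdev
    return mdev, tdev

# Toy usage: 10 fs of white phase noise sampled once per second.
rng = np.random.default_rng(1)
phase = rng.normal(scale=10e-15, size=10_000)
print(mdev_tdev(phase, tau0=1.0, m=10))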
Our results in Fig. <ref> show TDEVs at 1 second at the femtosecond level for each of the two independent links (Primary ↔ Secondary 1 and Primary ↔ Secondary 2), as well as similar precision for the out-of-loop comparison between the two Secondary nodes. Each data set (fiber link or free-space link) is collected under two separate initialization conditions. Data is first simultaneously collected between the clock comb at the Primary node and those at the two respective Secondary nodes; then the offset between the two Secondary node clock combs is re-adjusted to 1 MHz, and data is collected again, now between the two frequency combs at the Secondary sites. We note that our out-of-loop measurements have excess noise relative to previously published two-node work. As reported in <cit.>, the out-of-loop verification technique is susceptible to deviations in fiber path length (unlike the in-loop optical time synchronization protocol). In this experiment the nodes were tested in the same laboratory, but each node is fully self-contained and designed to be placed at a remote location relative to the others. This design results in the out-of-loop fiber being 3 m long, which adds residual long-term measurement noise associated with temperature fluctuations over this link. § DISCUSSION AND FUTURE WORK We see opportunities to greatly increase the utility of this work. The hub-and-spoke topology used for this experiment keeps a centralized node that all other nodes are steered towards. A more robust topology would allow any individual node to become the Primary or a Secondary node. Such a configuration would require an optical link between what are now the two Secondary nodes and the introduction of one additional frequency comb at each of these sites. In addition, optical switching would likely need to be implemented in order to determine which comb pulses are sent for the eventual heterodyne at each site. Further optimization and improvement of the quality and precision of the time synchronization is also currently under investigation for the three-node system in the hub-and-spoke topology. Next steps also include increasing the length of the free-space optical links by moving the Secondary nodes away from the Primary while maintaining the out-of-loop verification, as well as performing measurements in non-laboratory environments. § ACKNOWLEDGEMENTS Approved for public release; distribution is unlimited. Public Affairs release approval # AFRL-2023-5866.
Selective-Memory Meta-Learning with Environment Representations for Sound Event Localization and Detection Jinbo Hu, Student Member, IEEE, Yin Cao, Member, IEEE, Ming Wu, Member, IEEE, Qiuqiang Kong, Member, IEEE, Feiran Yang, Member, IEEE, Mark D. Plumbley, Fellow, IEEE, Jun Yang, Senior Member, IEEE. Jinbo Hu, Ming Wu, and Jun Yang are with the Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]; [email protected]; [email protected]). Jinbo Hu and Jun Yang are also with the University of Chinese Academy of Sciences, Beijing 100049, China. (Corresponding author: Jun Yang) Yin Cao is with the Department of Intelligent Science, Xi’an Jiaotong Liverpool University, Suzhou 215123, China (e-mail: [email protected]). Qiuqiang Kong is with the Chinese University of Hong Kong, Hong Kong, China (e-mail: [email protected]). Feiran Yang is with the State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China ([email protected]). Mark D. Plumbley is with the Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford GU2 7XH, U.K. (e-mail: [email protected]). This work was partly supported by the National Key Research and Development Project (NO. 2022YFB2602000), Grant "XJTLU RDF-22-01-084", and Engineering and Physical Sciences Research Council (EPSRC) Grant EP/T019751/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising. January 14, 2024 ==================================================== Environment shifts and conflicts present significant challenges for learning-based sound event localization and detection (SELD) methods. SELD systems, when trained in particular acoustic settings, often show restricted generalization capabilities for diverse acoustic environments. Furthermore, it is notably costly to obtain annotated samples for spatial sound events.
Deploying a SELD system in a new environment requires extensive time for re-training and fine-tuning. To overcome these challenges, we propose environment-adaptive Meta-SELD, designed for efficient adaptation to new environments using minimal data. Our method utilizes computationally synthesized spatial data and employs Model-Agnostic Meta-Learning (MAML) on a pre-trained, environment-independent model, and then adapts rapidly to unseen real-world environments using limited samples from the respective environments. Inspired by the Learning-to-Forget approach, we introduce the concept of selective memory as a strategy for resolving conflicts across environments. This approach involves selectively memorizing target-environment-relevant information and adapting to new environments through the selective attenuation of model parameters. In addition, we introduce environment representations to characterize different acoustic settings, enhancing the adaptability of our attenuation approach to various environments. We evaluate our proposed method on the development set of the Sony-TAU Realistic Spatial Soundscapes 2023 (STARSS23) dataset and on computationally synthesized scenes. Experimental results demonstrate the superior performance of the proposed method compared to conventional supervised learning methods, particularly in localization. Sound event localization and detection, meta-learning, environment adaptation, selective memory. § INTRODUCTION Sound event localization and detection (SELD) refers to detecting the categories, presence, and spatial locations of different sound sources. SELD was first introduced in Task 3 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Challenge <cit.>. Over three iterations of Task 3 of the DCASE Challenge <cit.>, the data have transitioned from computationally synthesized spatial recordings to real-scene recordings in 2022 and 2023 <cit.>. Large-scale datasets of spatialized sound events were released for these challenges to be used for training and evaluating learning-based approaches. §.§ Learning-based SELD methods SELD can be regarded as a multi-task learning problem. Adavanne et al. <cit.> proposed SELDnet for the joint task of sound event detection (SED) and regression-based direction-of-arrival (DOA) estimation. SELDnet cannot detect homogeneous overlap, which refers to overlapping sound events of the same type but from different locations. The Event-Independent Network V2 (EINV2), with a track-wise output format and permutation-invariant training, was proposed to tackle the homogeneous-overlap detection problem <cit.>. In contrast to the two separate SED and DOA outputs of SELDnet and EINV2, the Activity-coupled Cartesian DOA (ACCDOA) approach merges the two subtasks into a single task <cit.>, where the Cartesian DOA vectors also contain the activity information of the sound events. However, the performance of learning-based methods usually degrades when the training set and test set are mismatched. The training set cannot cover all actual instances from different acoustic environments. §.§ Environment shifts and conflicts The change in the data distribution between a training set and a test set is known as the domain shift problem <cit.>. In the practical deployment of SELD systems, the differences among environments can be very significant.
Unseen complex acoustic environments can result in a decline in system performance, due to the distribution change in acoustic properties, such as varying degrees of echo and reverberation, diverse types of ambient noise, and directional interference. The change in the distribution of acoustic properties among environments is referred to as the environment shift. The issue is particularly salient when the system encounters acoustic properties it has not been exposed to during training. On the other hand, different acoustic environments may present conflicts: optimal solutions for diverse acoustic environments may display considerable variation, e.g., between indoor and outdoor environments. A single training configuration cannot encompass all types of environments. Fig. <ref> illustrates the results of our previous system <cit.> submitted to Task 3 of the DCASE 2022 Challenge. STARSS22 <cit.> is a dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. There are no duplicated recording environments between the training and test sets in STARSS22. The system was evaluated on the STARSS22 dataset and placed second in the team ranking. However, we found unsatisfactory generalization performance for the Room 2 recordings in the dev-test-tau set of STARSS22 <cit.> compared with other rooms. Experimental results show that the class-dependent localization error LE_CD is much higher and the location-dependent F-score F_≤ 20^∘ much lower in Room 2, while the class-dependent localization recall LR_CD is high. This suggests that our system may have weak localization performance in Room 2 due to an environment shift or conflict. §.§ Data acquisition One of the most effective methods for mitigating environment shifts and conflicts is acquiring as much data as possible <cit.>. However, manually collecting and annotating spatial sound event recordings is highly cost-intensive. For instance, the STARSS22 dataset <cit.> employed sophisticated setups involving a 32-channel spherical microphone array for recording, wireless microphones for manually annotating the types, onsets, and offsets of sound events, a motion capture (mocap) system for extracting the tracked data, and a 360^∘ camera for validating those annotations. Another practical approach for acquiring data is computationally synthesizing spatial sound event samples. Spatial sound event recordings are simulated by convolving dry sound event samples with spatial room impulse responses (SRIRs). The multi-channel simulation (MCS) framework <cit.> and the impulse response simulation (IRS) framework <cit.> are both designed to simulate multi-channel recordings, emphasizing augmentation of the original data without reliance on external datasets. MCS and IRS involve the convolution of enhanced source signals with extracted covariance matrices and with computationally simulated SRIRs, respectively. Noteworthy external datasets used for synthesizing spatial sound events include FSD50K <cit.>, AudioSet <cit.>, and TAU-SRIR DB <cit.>. FSD50K and AudioSet are large-scale sound event datasets, while TAU-SRIR DB is a real-recorded SRIR database tailored for the DCASE Challenge. Given the insufficiency of publicly accessible SELD data, particularly for specific microphone array types, e.g., a given number of microphones and microphone-array geometry, SRIR simulation techniques become indispensable in data synthesis.
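To make the convolution-based synthesis concrete, the sketch below spatializes one dry (monophonic) sound event with a four-channel SRIR; the array shapes and the toy impulse response are our own illustration, not the MCS/IRS implementations.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(dry: np.ndarray, srir: np.ndarray) -> np.ndarray:
    """Convolve a dry mono event (n_samples,) with a multi-channel SRIR
    (n_channels, ir_len) to obtain a spatial recording."""
    return np.stack([fftconvolve(dry, ir) for ir in srir])

rng = np.random.default_rng(0)
fs = 24000
dry = rng.standard_normal(fs)                # 1 s stand-in for a dry event
srir = np.zeros((4, 2 * fs))
srir[:, 0] = 1.0                             # toy direct path
srir[:, fs // 10] = 0.5                      # plus one reflection
mix = spatialize(dry, srir)                  # shape (4, 3*fs - 1)
print(mix.shape)
```

In practice several events at different DOAs would be summed, and ambient noise added at a target SNR, to form one training clip.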
Numerous RIR simulation methodologies rely on geometric approaches <cit.>, in which the propagation of sound waves is modeled and manipulated in the form of rays. These techniques simulate the reflections and reverberation of sound waves. Software packages like Pyroomacoustics <cit.>, gpuRIR <cit.>, and SMIR-Generator <cit.> are representative tools for geometry-based SRIR simulation. §.§ Meta learning Models trained on synthetic datasets typically demonstrate a degree of generalization ability, but the environment shift can limit the robustness of the network on real-world data <cit.>. One line of research is to train the model with realistic signals by transfer learning <cit.>: pre-training a model with large-scale synthetic datasets and then fine-tuning the model with limited real-recorded datasets <cit.>. In circumstances where environmental disparities occur between training and test sets and the availability of collected samples is limited, few-shot learning (FSL) <cit.> can offer a solution for adaptation to realistic and specific environments. Conventional supervised learning refers to training a model on a labeled dataset and then applying the trained model to predict unseen data. The core issue of conventional supervised training methodologies in the case of limited samples is that the empirical risk minimizer is unreliable <cit.>. Nevertheless, by incorporating prior knowledge, FSL can effectively generalize to the specific task, even with only a few samples <cit.>. Meta-learning, which facilitates FSL, learns a general-purpose learning algorithm that generalizes across tasks and, ideally, enables each new task to be learned well from the task-distribution view <cit.>. Meta-learning has advanced FSL significantly in computer vision <cit.>. One of the most successful meta-learning algorithms is Model-Agnostic Meta-Learning (MAML) <cit.>. MAML formulates prior knowledge as initial parameters shared across tasks and then exploits a few samples of the target task to adapt to that task quickly. Due to its model-agnostic nature, MAML is compatible with any model trained with gradient descent, making it applicable to various learning problems, including classification, regression, and reinforcement learning. In audio signal processing, meta-learning has recently attracted interest for solving FSL problems. Based on MAML, several audio researchers have investigated building systems that adapt rapidly to their specific tasks with only a few corresponding samples <cit.>. To the best of our knowledge, few-shot environment adaptation based on meta-learning has not been thoroughly studied. MAML can be designed to employ a set of trained initial parameters and a few samples from a specific environment to cope with the environment shift problem, and to train multiple models, one for each specific environment, to somewhat mitigate the environment conflict issue. However, forcibly sharing the initial parameters can still lead to some conflicts and compromises among tasks <cit.>. Multimodal MAML (MMAML) <cit.> focuses on task-dependent initial parameters and tries to learn task embeddings and transform the initial parameters with affine parameters.
Compared with Multimodal MAML, Learning-to-Forget (L2F) <cit.> proposes a layer-wise attenuation of the compromised initial parameters for each task to reduce their influence. §.§ Our contributions In this work, we extend our previous work, Meta-SELD <cit.>, to environment-adaptive Meta-SELD, investigating adaptation to the environment shift problem using meta-learning-based few-shot methods. Drawing inspiration from L2F <cit.>, we propose to selectively memorize components relevant to the target environment and to learn the target environment by using environment representations. Fig. <ref> presents a flow diagram of environment-adaptive Meta-SELD. In contrast to <cit.>, which tackles the problem of unseen categories of sound events in SELD, our proposed Meta-SELD focuses on adaptation to unknown environments. The method for fast adaptation to environments is mainly based on Model-Agnostic Meta-Learning (MAML) <cit.> and an environment-independent (EI) model. The EI model is pre-trained on a computationally synthesized dataset encompassing a wide range of acoustic environments. We then apply MAML to the pre-trained EI model to create a meta-EI model, which enables fast adaptation to an unseen environment using a few samples recorded in the target environment. In addition, forcibly sharing common initial parameters across environments means the environment conflict issue remains. Inspired by Learning-to-Forget (L2F) <cit.>, we adopt an attenuation network and propose environment representations to selectively memorize target-environment-relevant components and selectively forget contradicting information. An illustration of our proposed environment-adaptive Meta-SELD is depicted in Fig. <ref>. We evaluate the proposed method on the development set of the Sony-TAU Realistic Spatial Soundscapes 2023 (STARSS23) <cit.> dataset and on computationally synthesized scenes. Experimental results demonstrate the effectiveness of environment-adaptive Meta-SELD. Our main contributions can be summarized as follows: 1) Pre-training an environment-independent (EI) model on computationally synthesized datasets to cover as many acoustic properties as possible. 2) Investigating an environment adaptation approach based on Model-Agnostic Meta-Learning (MAML) and the pre-trained EI model. 3) Introducing a solution to selectively memorize prior information relevant to the target environment to mitigate environment conflicts. 4) Proposing a technique to extract environment representations for selective memory, and designing comprehensive experiments to analyze these environment representations. § FAST ADAPTATION TO THE ENVIRONMENT §.§ Pre-trained environment-independent models We train a simple convolutional recurrent neural network (CRNN) on datasets synthesized using computationally generated SRIRs. Source code for the data synthesis of spatial sound events has been released[https://github.com/Jinbo-Hu/SELD-Data-Generator]. This model is not trained on samples from specific environments and is consequently termed the environment-independent (EI) model. §.§.§ Data synthesis The spatial sound event recordings are simulated by convolving monophonic sound event samples with SRIRs. The sound event samples are selected from FSD50K and AudioSet, based on the similarity of the labels in those datasets to the target classes in STARSS23. These selected clips are then cleaned using the pre-trained CNN14 <cit.> model: we run CNN14 inference on the clips and select high-quality clips based on the output probability.
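A minimal sketch of this probability-based cleaning step is shown below; the classifier interface and the 0.5 threshold are illustrative assumptions, not the exact values used in this work.

```python
import numpy as np

def select_clips(clips, labels, classifier, threshold=0.5):
    """Keep clips whose pre-trained tagger probability for their nominal
    label exceeds `threshold`. `classifier(clip)` is assumed to return a
    dict mapping class names to probabilities (e.g., a CNN14-style tagger)."""
    kept = []
    for clip, label in zip(clips, labels):
        probs = classifier(clip)
        if probs.get(label, 0.0) >= threshold:
            kept.append((clip, label))
    return kept

# Toy stand-in classifier, for demonstration only.
rng = np.random.default_rng(1)
fake_tagger = lambda clip: {"speech": float(rng.uniform())}
clips = [rng.standard_normal(24000) for _ in range(5)]
print(len(select_clips(clips, ["speech"] * 5, fake_tagger)))
```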
The SRIRs are computationally generated using geometry-based methods <cit.>. The computational generation of the SRIRs consists of two steps: microphone-array RIR simulation and an Ambisonics format conversion. We use the image source method <cit.> to generate microphone-array RIRs with four channels. This method replaces reflections off walls with virtual sources playing the same sound as the source and builds an RIR from the corresponding delays and attenuations. We use the Python package Pyroomacoustics <cit.> to simulate the room acoustics. As the microphones are mounted on an acoustically hard spherical baffle in the official setup of STARSS23, the frequency response of the h-th microphone, at wave number k, on a rigid baffle of radius R, for the l-th image source is <cit.> H_hl(k, ψ_hl)=∑_n=0^∞ i^n(2n+1) b_n(kR) P_n(cos ψ_hl), where ψ_hl denotes the angle between the DOA of the l-th sound source and the orientation of the h-th microphone, P_n denotes the Legendre polynomial <cit.>, i is the imaginary unit, and b_n is the mode-strength term for a rigid baffle array, given by <cit.> b_n(kR)= i/( (kR)^2 h_n^(1)'(kR) ), with h_n^(1)' denoting the derivative of the n-th-order spherical Hankel function of the first kind <cit.>. The Ambisonics format conversion transforms the above microphone-array format signals to first-order Ambisonics (FOA) format signals. The n-th-order, m-th-degree spherical harmonic function is defined for the angle Ψ={θ,ϕ} as <cit.> Y_n^m(Ψ) ≡ √( (2n+1)/(4π) · (n-m)!/(n+m)! ) P_n^m(cosθ) e^{imϕ}, where (·)! represents the factorial, θ and ϕ are elevation and azimuth, and P_n^m(·) is the associated Legendre function. The spherical harmonic representation of the RIRs can be computed using the following encoding process <cit.>: 𝐚(k)=𝐁(k)^-1 𝐘^† 𝐱(k), where 𝐁(k)=([ b_0 0 0 0; 0 b_1 0 0; 0 0 b_1 0; 0 0 0 b_1 ]), 𝐱(k) denotes the simulated microphone-array RIR signals, (·)^† represents the Moore-Penrose pseudo-inverse, and 𝐚(k) denotes the resulting FOA format signal. 𝐘 denotes the first-order spherical harmonic matrix for the four-channel microphone array, 𝐘=( [ Y_0^0(Ψ_1) Y_1^-1(Ψ_1) Y_1^0(Ψ_1) Y_1^1(Ψ_1); Y_0^0(Ψ_2) Y_1^-1(Ψ_2) Y_1^0(Ψ_2) Y_1^1(Ψ_2); Y_0^0(Ψ_3) Y_1^-1(Ψ_3) Y_1^0(Ψ_3) Y_1^1(Ψ_3); Y_0^0(Ψ_4) Y_1^-1(Ψ_4) Y_1^0(Ψ_4) Y_1^1(Ψ_4) ]), with Ψ_i={θ_i,ϕ_i} being the direction of the i-th microphone. (A numerical sketch of this encoding step is given at the end of this subsection.) §.§.§ Network architecture Without loss of generality, in this study we adopt a simple CRNN as the backbone for our experiments. The CRNN is similar to the baseline of Task 3 of the DCASE 2022 Challenge <cit.>, but with an ACCDOA output representation <cit.>. As shown in Fig. <ref>, the backbone has four convolution blocks followed by a one-layer bidirectional gated recurrent unit (BiGRU). The network takes C-channel, T-frame, F-mel-bin spectrograms – the concatenation of log-mel spectrograms and intensity vectors – as input, and predicts the active sound events of M classes with corresponding Cartesian DOA vectors for each time stamp.
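As promised above, here is a numerical sketch of the rigid-baffle mode strength and the first-order Ambisonics encoding from the data-synthesis step. The microphone directions and the use of SciPy's spherical Bessel routines are our own illustrative choices; sign and normalization conventions (e.g., the Condon–Shortley phase carried by SciPy's associated Legendre function) vary between references and should be checked against the target format.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, lpmv
from math import factorial, pi

def b_n(n: int, kR: float) -> complex:
    """Rigid-baffle mode strength, b_n(kR) = i / ((kR)^2 h_n^(1)'(kR))."""
    h1p = (spherical_jn(n, kR, derivative=True)
           + 1j * spherical_yn(n, kR, derivative=True))
    return 1j / (kR ** 2 * h1p)

def assoc_legendre(n: int, m: int, x: float) -> float:
    """P_n^m(x), extended to negative m via the standard identity."""
    if m < 0:
        return (-1) ** (-m) * factorial(n + m) / factorial(n - m) * float(lpmv(-m, n, x))
    return float(lpmv(m, n, x))

def Y(n: int, m: int, elev: float, azim: float) -> complex:
    """Spherical harmonic as defined in the text (theta = elevation)."""
    norm = np.sqrt((2 * n + 1) / (4 * pi) * factorial(n - m) / factorial(n + m))
    return norm * assoc_legendre(n, m, np.cos(elev)) * np.exp(1j * m * azim)

# Illustrative tetrahedral-like microphone directions (elevation, azimuth):
dirs = [(0.615, pi / 4), (-0.615, 3 * pi / 4), (0.615, 5 * pi / 4), (-0.615, 7 * pi / 4)]
orders = [(0, 0), (1, -1), (1, 0), (1, 1)]
Ymat = np.array([[Y(n, m, el, az) for (n, m) in orders] for (el, az) in dirs])

kR = 1.0                                  # one wave number times baffle radius
B = np.diag([b_n(0, kR), b_n(1, kR), b_n(1, kR), b_n(1, kR)])
x = np.ones(4, dtype=complex)             # toy narrow-band microphone spectra
a = np.linalg.inv(B) @ np.linalg.pinv(Ymat) @ x   # a(k) = B(k)^-1 Y^dagger x(k)
print(a.round(3))
```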
§.§ Meta-SELD Given a model represented by a parameterized function f_Θ with parameters Θ, MAML <cit.> learns the initial parameters Θ_0 from general tasks 𝒯_i sampled from the meta-training set 𝒟_𝚝𝚛𝚊𝚒𝚗. The initial parameters Θ_0 are sensitive to task-specific fine-tuning <cit.> and are expected to perform well on unseen tasks from the meta-test set 𝒟_𝚝𝚎𝚜𝚝 after a few parameter updates with a few task-specific samples. A task in meta-learning refers to a specific learning problem. Each task 𝒯_i consists of a support set 𝒮_i of K samples and a query set 𝒬_i of Q samples, analogous to the training set and test set, respectively. A new task is expected to be quickly learned with K samples, known as K-shot learning. The loss function of MAML is ℒ = ∑_𝒯_i∼ p(𝒯) ℒ_𝒯_i(f_Θ), where p(𝒯), sampled from 𝒟_𝚝𝚛𝚊𝚒𝚗, is a distribution over the tasks that we want our model to adapt to, and ℒ_𝒯_i indicates the task-specific loss. In contrast to conventional supervised learning methods, whose objective is to find optimal parameters minimizing the loss function across all training samples, MAML tries to find common, generalized initial parameters across tasks and then updates those initial parameters after several iterations of training on limited data from new tasks. There are two groups of parameters in the MAML algorithm: meta parameters and adaptation parameters. In the meta-training phase, MAML starts with randomly initialized meta parameters Θ and then adapts to a new specific task 𝒯_i with several update iterations using the support set 𝒮_i. The meta parameters Θ become adaptation parameters Θ_i^': Θ_i^'=Θ-α∇_Θℒ_𝒯_i(f_Θ, 𝒮_i), where α is the adaptation learning rate. For simplicity of notation, we display only one iteration of the gradient update. After computing the task-specific loss on 𝒬_i with f_Θ_i^' across a batch of tasks, the meta parameters are updated as follows: Θ ← Θ-β∇_Θ∑_𝒯_iℒ_𝒯_i(f_Θ_i^', 𝒬_i), where β is the meta step size. After accumulating ℒ_𝒯_i for several tasks, Θ is updated by gradient descent and is used as the initialization for the subsequent meta-training loops. In the meta-testing phase, a specific unseen task 𝒯_j^𝚝𝚎𝚜𝚝, created from the meta-test set 𝒟_𝚝𝚎𝚜𝚝, is used. 𝒯_j^𝚝𝚎𝚜𝚝 consists of a labeled support set 𝒮_j^𝚝𝚎𝚜𝚝 of K samples and an unlabeled query set 𝒬_j^𝚝𝚎𝚜𝚝 of Q samples. We update the model, initialized with the meta-trained parameters Θ, on 𝒮_j^𝚝𝚎𝚜𝚝 to obtain adaptation parameters Θ_j^' using the update procedure of Eq. <ref>. The adaptation performance is evaluated on 𝒬_j^𝚝𝚎𝚜𝚝 with f_Θ_j^'. We aim to adapt to an unseen environment with K samples (K-shot learning). The objective of MAML is to find optimal initial parameters across several tasks, so we need to construct a set of tasks from the meta-training set 𝒟_𝚝𝚛𝚊𝚒𝚗. 𝒟_𝚝𝚛𝚊𝚒𝚗 is split into several tasks according to the different recording environments: audio clips recorded in different environments belong to different tasks. We first sample a batch of environments and then sample K+Q clips in each environment, where K clips form the support set 𝒮_i and Q clips form the query set 𝒬_i. The overall training procedure of MAML is summarized in Algorithm <ref>. Step 11 in Algorithm <ref> is the inner-loop update of the adaptation parameters, while Step 15 is the outer-loop update of the meta parameters. The meta-learning processes of SELD for testing and training differ slightly in the data division. Fig. <ref> shows the division of the meta-training and meta-test sets. As in training, the meta-test set 𝒟_𝚝𝚎𝚜𝚝 is partitioned based on the recording environment of each audio clip. For the clips of each environment, we choose K clips for the meta-test support set 𝒮_j^𝚝𝚎𝚜𝚝 and all remaining clips for the meta-test query set 𝒬_j^𝚝𝚎𝚜𝚝. After N update iterations on 𝒮_j^𝚝𝚎𝚜𝚝, the meta parameters Θ are updated to Θ_j^'. The final performance is evaluated on 𝒬_j^𝚝𝚎𝚜𝚝 with f_Θ_j^'.
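The inner/outer loop above can be sketched in PyTorch (≥ 2.0) as follows. This is a minimal second-order MAML skeleton under our own simplifications – one inner step and a generic regression loss – not the released training code.

```python
import torch
from torch.func import functional_call

def maml_step(model, tasks, alpha=0.01, opt=None):
    """One outer-loop update. `tasks` is a list of (xs, ys, xq, yq):
    support inputs/targets and query inputs/targets per environment.
    The optimizer's learning rate plays the role of the meta step size beta."""
    names = list(dict(model.named_parameters()).keys())
    theta = list(model.parameters())
    meta_loss = 0.0
    for xs, ys, xq, yq in tasks:
        # Inner loop (Theta_i' = Theta - alpha * grad): one step on the support set.
        loss_s = torch.nn.functional.mse_loss(
            functional_call(model, dict(zip(names, theta)), (xs,)), ys)
        grads = torch.autograd.grad(loss_s, theta, create_graph=True)
        theta_i = [p - alpha * g for p, g in zip(theta, grads)]
        # Query loss with adapted parameters, accumulated across tasks.
        meta_loss = meta_loss + torch.nn.functional.mse_loss(
            functional_call(model, dict(zip(names, theta_i)), (xq,)), yq)
    opt.zero_grad()
    meta_loss.backward()   # Theta <- Theta - beta * grad of the summed query loss
    opt.step()
    return float(meta_loss)

# Toy usage with a linear model and two synthetic "environments".
model = torch.nn.Linear(8, 3)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
tasks = [(torch.randn(30, 8), torch.randn(30, 3),
          torch.randn(98, 8), torch.randn(98, 3)) for _ in range(2)]
print(maml_step(model, tasks, opt=opt))
```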
§.§ Meta-SELD++ Instead of randomly initializing the meta parameters, we leverage the power of the pre-trained environment-independent (EI) model to initialize the meta parameters of MAML. In other words, we may utilize a well-trained EI model to find better initial parameters and accelerate convergence when training the meta-EI model. Specifically, during meta-training, we initialize the meta parameters with the parameters of the pre-trained EI model and then create a meta-EI model trained on synthetic datasets (generated using collected SRIRs) based on MAML. We denote this method, whose meta parameters are initialized with pre-trained parameters, as Meta-SELD++. The training procedure is expounded in Section <ref>. § ENVIRONMENT-ADAPTIVE META-SELD §.§ Selective memory According to Eq. <ref>, MAML gives each task equal weight and tries to find optimal initial parameters across tasks in an average sense, so it may perform better or worse on a specific task than the conventional supervised learning method. There may be conflicts when optimizing across a batch of environments <cit.>. Inspired by L2F <cit.>, which argues that forcibly sharing a common initialization in MAML induces conflicts and thus leads to a compromised location of the initialization, we selectively memorize the parts relevant to the target environment. More concretely, we employ an environment-dependent, layer-wise attenuation of the initialization, thereby dynamically controlling the influence of prior knowledge for each environment. The attenuation is generated by an attenuation network h_Φ. Fig. <ref> illustrates the optimization procedure of selective-memory Meta-SELD. Compared with Meta-SELD++, selective memory adds a step that attenuates the common initial parameters for the target environment before fast adaptation to the corresponding environment. One generic input to h_Φ is gradients. At the beginning of the inner loop of MAML, the task-specific gradients ∇_Θℒ_𝒯_i(f_Θ, 𝒮_i) on the support set 𝒮_i of the i-th environment are computed to generate layer-wise attenuation coefficients: λ_i=h_Φ(∇_Θℒ_𝒯_i(f_Θ, 𝒮_i)), where h_Φ is a 3-layer MLP network with parameters Φ, with a sigmoid in the final layer to facilitate attenuation. The layer-wise attenuation coefficients λ_i={λ_i^l}^l=1…p act on each layer of Θ={Θ^l}^l=1…p: Θ_i^l = λ_i^l · Θ^l, where l and p indicate the layer index and the number of layers of the backbone f_Θ, respectively. Task-specific gradients, however, generate attenuation coefficients that are insensitive to the various environments and hence appear to be ineffective inputs for making the attenuation environment-adaptive, as will be presented and analyzed in Section <ref>. We therefore adopt novel representations relevant to the environments, expecting that these representations can effectively capture and represent the acoustic properties of the various environments. These representations are acquired through an unsupervised learning approach, enabling the selective memorization of target-environment-dependent prior knowledge to some extent. We denote selective-memory Meta-SELD with environment representations as environment-adaptive Meta-SELD.
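A sketch of the layer-wise attenuation step is given below, assuming a per-layer scalar summary of the input (here, the mean of each layer's gradient) feeding a 3-layer MLP with a final sigmoid; the exact summarization used in the actual system may differ.

```python
import torch

class AttenuationNet(torch.nn.Module):
    """3-layer MLP h_phi producing one attenuation coefficient per layer."""
    def __init__(self, n_layers: int, hidden: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_layers, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_layers), torch.nn.Sigmoid())

    def forward(self, per_layer_summary):      # shape (n_layers,)
        return self.net(per_layer_summary)     # lambda in (0, 1) per layer

def attenuate(theta, h_phi, grads):
    """Theta_i^l = lambda_i^l * Theta^l, with lambda_i = h_phi(summary)."""
    summary = torch.stack([g.mean() for g in grads])  # crude per-layer summary
    lam = h_phi(summary)
    return [l * p for l, p in zip(lam, theta)]

theta = [torch.randn(16, 8), torch.randn(8)]          # toy 2-"layer" backbone
grads = [torch.randn_like(p) for p in theta]
h_phi = AttenuationNet(n_layers=len(theta))
theta_i = attenuate(theta, h_phi, grads)
print([t.shape for t in theta_i])
```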
§.§ Environment representations Fig. <ref> depicts the method for extracting environment representations. Specifically, we employ an environment extractor to extract feature embeddings from the output feature maps of each backbone layer. The feature embeddings from each layer are then concatenated into the environment representation. The details of the environment extractor are shown in Fig. <ref>. A feature embedding dimension of D=2048 is used. The output feature maps of each layer are averaged and weighted-averaged over the batch and time dimensions. We conjecture that these operators mitigate the influence of the acoustic events on the feature embeddings while preserving the environmental information. Fig. <ref> illustrates the environment-adaptive Meta-SELD training procedure; Step 6 in Algorithm <ref> represents the extraction of the environment representations. Our emphasis lies in evaluating the performance of Meta-SELD in unfamiliar environments. We hypothesize that audio recordings captured at diverse spatial positions within a given environment exhibit more similar acoustic properties than recordings from different environments, excluding extreme cases where the speaker or microphone is close to reflecting surfaces or to each other. Furthermore, these acoustic properties generally remain consistent across recording moments and sound events. To test our hypothesis, we display environment representations via similarity maps and t-SNE <cit.> in Section <ref>. § EXPERIMENTAL SETUPS §.§ Datasets We utilize the computationally simulated SRIRs to synthesize datasets containing 30,000 5-second clips with reverberation times (RT60) from 0.3 s to 0.5 s. Sound event examples from FSD50K and AudioSet are cleaned using PANNs. We refer to the computationally simulated datasets as CSD. For simple comparison and reproducibility, we adopt the official synthetic datasets <cit.>, which were synthesized using collected SRIRs for the baseline training of Task 3 of the DCASE Challenge in 2022 and 2023 <cit.>. The official synthetic datasets are denoted as the Base Dataset, or Base, in this work. The Base Dataset contains 1200 1-minute audio clips and is synthesized using real-scene SRIRs from TAU-SRIR DB <cit.>, which were measured in 9 rooms at Tampere University. There are 16 different recording rooms in total in the development set of the STARSS23 dataset: nine recording rooms in the dev-train set and seven recording rooms in the dev-test set. The labels of the STARSS23 evaluation set remain inaccessible, and hence all subsequent ablation experiments are based solely on the analysis of the STARSS23 development set. To further validate the effectiveness of our proposed method, we present the performance on computationally synthesized scenes in Section <ref>. Environment-adaptive Meta-SELD involves a three-stage pipeline, each stage utilizing a distinct dataset, as illustrated in Fig. <ref>. We start by randomly initializing the EI model and training it on CSD. Subsequently, selective-memory meta-learning is applied to the EI model on the Base Dataset, serving as the meta-training set 𝒟_train, to create the meta-EI model. Finally, fast adaptation is performed for a specific environment from STARSS23, using a few samples recorded in the corresponding environment. The whole development set of STARSS23 is used as the meta-test set 𝒟_test to evaluate the performance of the adaptation to unknown environments. 𝒟_train and 𝒟_test are divided into 9 tasks and 16 tasks, respectively, corresponding to their 9 and 16 rooms. In the meta-training phase, we first sample a batch of rooms and then sample a batch of examples from each room. A batch of samples from an individual room constructs a task; part of the samples are support samples, while the remaining samples are query samples. The batch sizes of rooms and samples are set to 9 and 128, respectively.
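The room-wise task construction just described might look as follows; the clip representation and the 30/98 support/query split (made explicit in the next paragraph) are the only assumptions beyond the stated batch sizes.

```python
import random

def sample_meta_batch(clips_by_room, n_rooms=9, n_clips=128, n_support=30):
    """Sample a meta-batch of tasks: one task per room, each task split into
    a support set (first n_support clips) and a query set (the rest)."""
    rooms = random.sample(list(clips_by_room), k=n_rooms)
    tasks = []
    for room in rooms:
        batch = random.sample(clips_by_room[room], k=n_clips)
        tasks.append({"room": room,
                      "support": batch[:n_support],
                      "query": batch[n_support:]})
    return tasks

# Toy catalogue: 9 rooms with 200 clip ids each.
catalogue = {f"room{r}": list(range(200)) for r in range(9)}
meta_batch = sample_meta_batch(catalogue)
print(len(meta_batch), len(meta_batch[0]["support"]), len(meta_batch[0]["query"]))
```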
For each batch, the first 30 samples are designated as support samples in this work. In the meta-test phase, the samples from each room, excluding the support samples in 𝒮_j^𝚝𝚎𝚜𝚝, comprise the final test (query) set 𝒬_j^𝚝𝚎𝚜𝚝. We evaluate the performance of the parameterized function f_Θ_j^', with adaptation parameters Θ_j^', on 𝒬_j^𝚝𝚎𝚜𝚝 after iterative updates on 𝒮_j^𝚝𝚎𝚜𝚝 starting from the initial parameters Θ. §.§ Evaluation metrics We use a joint metric of localization and detection <cit.>: two location-dependent detection metrics, the F-score (F_≤ 20^∘) and error rate (ER_≤ 20^∘), and two class-dependent localization metrics, the localization recall (LR_CD) and localization error (LE_CD). F_≤ 20^∘ and ER_≤ 20^∘ count predictions as true positives only when they lie within a spatial threshold of 20^∘ from the ground truth. LE_CD and LR_CD compute the mean angular error and the true-positive rate, respectively, for cases where the type of sound event is predicted correctly. We use an aggregated SELD metric for method comparison and hyper-parameter selection: ℰ_𝚂𝙴𝙻𝙳=1/4[ER_≤ 20^∘+(1-F_≤ 20^∘)+LE_CD/180^∘+(1-LR_CD)]. (A small numerical sketch of this aggregate metric appears below, after the first experimental results.) A macro-average of F_≤ 20^∘, LR_CD, LE_CD, and ℰ_𝚂𝙴𝙻𝙳 across classes is used. A good system should have small ER_≤ 20^∘, large F_≤ 20^∘, small LE_CD, large LR_CD, and small ℰ_𝚂𝙴𝙻𝙳. Note that, unlike <cit.>, where room-wise metrics are micro-averaged in the end, we compute the metrics in each room and then macro-average the metrics across rooms. §.§ Hyper-parameters The sampling rate is 24 kHz. We extract 64-dimensional log-mel spectrograms from the four-channel FOA signals with a Hanning window of 1024 points and a hop size of 320. Each audio clip is segmented to a fixed length of five seconds with no overlap for training and inference. AdamW <cit.> is used to update the meta parameters, while SGD is used to update the adaptation parameters. The batch size is 128. For training the conventional supervised learning models, the learning rate is set to 0.001 for the first 60 of 80 epochs and is then decreased by 10% every 10 epochs. In the meta-training phase, we find that setting the momentum of the Batch Normalization layers <cit.> to 0.01 improves performance. We let one epoch contain nine meta-batches, which together encompass all rooms within the Base Dataset. The gradients of one epoch are then averaged to update the meta parameters in the outer-loop step. For training Meta-SELD++, the learning rate is 0.0003 for the first 300 of 500 epochs and is then decreased by 10% every 100 epochs. We only consider 5 update iterations in the inner loop of MAML. All networks are implemented using PyTorch. § EXPERIMENTS §.§ Effect of synthetic data We evaluate our synthetic data using the conventional supervised learning method. The model is trained on the Base Dataset (Base) and CSD, and then evaluated on the whole development set of STARSS23. Table <ref> shows the results of the data synthesis method. The results demonstrate that a model trained on CSD can generalize to real-scene datasets. Comparing Base with CSD, we observe that the performance gap is mainly in localization. One possible reason is a discrepancy between the computationally simulated SRIRs and the SRIRs measured in real scenes. In addition, adding CSD to the Base Dataset for training further improves the performance.
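As flagged in the metrics subsection above, the aggregated SELD metric is a plain average of four error-like terms. A minimal sketch (the example numbers are illustrative, not taken from our tables):

```python
def seld_score(er20: float, f20: float, le_cd_deg: float, lr_cd: float) -> float:
    """Aggregated SELD metric: lower is better. f20 and lr_cd lie in [0, 1],
    le_cd_deg is in degrees, er20 >= 0."""
    return 0.25 * (er20 + (1.0 - f20) + le_cd_deg / 180.0 + (1.0 - lr_cd))

print(seld_score(er20=0.60, f20=0.30, le_cd_deg=20.0, lr_cd=0.55))  # ~0.465
```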
§.§ Effect of Meta-SELD To demonstrate the effectiveness of our proposed Meta-SELD, we compare Meta-SELD, the SELD method, and the fine-tuned SELD method. The differences among these methods are described in Table <ref>. Macro-averaged metrics over all 16 rooms of STARSS23 are shown in Table <ref>. In SELD (Base + 𝒮_j^test), the support set 𝒮_j^test from the j-th room of STARSS23 is added to the synthetic datasets for training from scratch, and the query set 𝒬_j^test is used for evaluating that specific model. This approach, however, requires training multiple models from scratch, one for each specific room of STARSS23. In SELD (Base), we first train an EI model on the Base Dataset and then fine-tune (adapt) it on 𝒮_j^test. In Meta-SELD, we apply MAML to a SELD model with randomly initialized meta parameters. The SELD (Base) and Meta-SELD methods without adaptation refer to no fine-tuning on 𝒮_j^test. The top block of Table <ref> presents the methods using the Base Dataset for training or meta-training, while the bottom block presents the methods using the Base Dataset and the computationally simulated datasets. In terms of average performance, we see that after adaptation on 𝒮_j^test, both Meta-SELD and Meta-SELD++ exhibit superior performance in comparison to the corresponding fine-tuned SELD method. Comparing SELD (Base) and Meta-SELD, we observe a larger performance gap between Meta-SELD with and without adaptation, which means that the meta parameters of Meta-SELD are more suitable for adapting to a new environment. However, the corresponding performance gap for Meta-SELD++ is smaller, and the meta parameters from Meta-SELD++ outperform the parameters from SELD (Base + CSD). We conjecture that this phenomenon results from conflicts when optimizing across environments, which are especially evident for Meta-SELD++. The SELD method that adds the support set 𝒮_j^test to the training data performs best among these methods. This may be attributed to the fact that training multiple independent models from scratch avoids compromises between these environments. §.§ Effect of selective memory §.§.§ Inputs to selective memory Table <ref> shows the results for three types of input to the selective memory method: the task-specific gradients on the support set 𝒮_i, "None", and the environment representations. The input "None" refers to using layer-wise learnable parameters as attenuation coefficients, instead of generating them with the attenuation network. We see that applying selective memory to Meta-SELD++ yields a performance improvement, particularly in localization. These methods are even more effective than SELD (Base + CSD + 𝒮_j^test) in Table <ref>. Among these types of inputs, the environment representations perform best, demonstrating the effectiveness of environment representations as the input to the selective memory method. §.§.§ Attenuation factors Fig. <ref> illustrates the attenuation factor λ of a few typical layers for each room of STARSS23. Observing the room- and layer-wise attenuation factors λ derived from the task-specific gradients, we note that λ varies over only a small range from room to room. This suggests that these λ cannot adapt to the diverse acoustic environments of the rooms. Analyzing the inputs to the attenuation network, the task-specific gradients on 𝒮_i, we find that most gradients exhibit diminutive magnitudes. This can be attributed to the pre-trained EI model, which initializes the meta parameters of Meta-SELD++, resulting in minute gradient values.
In contrast, a comparison of the gradients and the representations as inputs to the attenuation network indicates that the environment representations generate more environment-adaptive attenuation factors λ: the changes in these attenuation factors from room to room are more conspicuous. For CNN-weight_1 in Fig. <ref>, which is the first layer of the backbone, we observe that the generated λ are relatively large and change rather little among rooms, compared with the subsequent layers. The initial layers thus encode environment-independent features, while the deep layers prefer environment-adaptive attenuation and encode environment-dependent features, which aligns with the observations of <cit.>. §.§.§ Representations of environments We investigate various techniques for extracting environment representations: feature maps from the last layer (before the linear layer) that are averaged over the batch and time axes, feature embeddings derived from the last layer, and environment representations constituted by concatenating the feature embeddings from all preceding layers. Table <ref> demonstrates the effectiveness of the environment extractor and of the preceding feature embeddings. Feature maps from the last layer encoded by the environment extractor perform better than those that are directly averaged. Moreover, the preceding feature embeddings provide additional information and further performance improvement. §.§ Effect of adaptation setups The most important hyper-parameters in the adaptation phase of MAML are the number of inner-loop update steps and the number of support samples. Intuitively, the inner-loop optimization should be consistent between meta-training and meta-testing. However, large numbers of update steps and support samples lead to excessive computation and memory burdens. We exploit the previously well-trained parameters and investigate the effect of the adaptation setup in meta-testing. The top row of Fig. <ref> shows the effect of the number of update steps on the adaptation of SELD (Base + CSD), Meta-SELD++, and environment-adaptive Meta-SELD. Environment-adaptive Meta-SELD achieves the most efficient adaptation. The benefit comes from selectively memorizing the necessary information, helping the learner adapt to new environments more quickly. We also investigate the effect of the number of shots (support samples). Note that in this experimental setup, we select the first 50 samples of each room as the support set, and all remaining samples form the query set. The bottom row of Fig. <ref> shows that the meta-learning-based methods exploit the support samples more effectively. When the number of support samples increases in the environment-adaptive Meta-SELD method, the performance is consistently improved, but the magnitude of the performance improvement appears to decrease. §.§ Room-wise performance Fig. <ref> shows the room-wise metrics. Through a comparative analysis of conventional supervised-learning-based and meta-learning-based methods on identical datasets, we observe that the meta-learning-based methods can reduce ℰ_𝚂𝙴𝙻𝙳 effectively in rooms where the conventional supervised-learning-based methods exhibit high ℰ_𝚂𝙴𝙻𝙳, such as Room 2, Room 4, and Room 22. This means that when a model generalizes to a specific room unsatisfactorily, below the average performance, using a few samples collected in that room for adaptation can improve the performance significantly.
However, we also observe performance degradation or insignificant improvement in ℰ_𝚂𝙴𝙻𝙳, even though new samples of the unseen environments are used for adaptation, in both the conventional supervised-learning-based and meta-learning-based methods, for example in Room 7 and Room 15. This phenomenon could arise because our methods have difficulty extracting information that is valid for training from the new samples. In Room 9, the performance of SELD (Base) and SELD (Base + CSD) improves after fine-tuning on new samples of the corresponding room, but the performance of Meta-SELD and Meta-SELD++ after adaptation degrades. The reason may be that the initial meta parameters, compromised among the rooms, fail to adapt to this environment. The meta-learning-based methods find general initial parameters that can be adapted to unknown environments in an average sense, and the experimental results demonstrate that the meta-learning-based methods outperform the conventional supervised-learning-based methods in most rooms and perform better on average. In contrast to SELD (Base + CSD), environment-adaptive Meta-SELD exhibits comparable average performance in LR_𝙲𝙳 but lower LE_𝙲𝙳. Consequently, the performance improvement in localization is the main factor in the reduction of ℰ_𝚂𝙴𝙻𝙳. The performance improvements in Room 2 and Room 22 can also be attributed to analogous factors. §.§ Environment representations To interpret the representations extracted by the sub-network in Fig. <ref>, we study the relationship between the representations and the environments. We hypothesize that different clips recorded at various spatial positions in the same environment generally have more similar acoustic properties than clips from different environments, except in extreme cases. Therefore, we extract representations from several disjoint batches of samples, compute the similarity between the representations, and visualize the representations via t-SNE <cit.>. Fig. <ref> shows the visualization of the extracted representations. SELD performance is greatly affected by noise and reverberation. To dynamically control the acoustic properties of the environments when investigating the representations, we also simulate both reverberant rooms and noisy rooms. For the reverberant simulations, rooms of the same size are simulated at RT60 intervals of 0.3 s, ranging from 0.4 s to 2.5 s; there are therefore 8 rooms with different RT60 in total. For the noisy simulations, we simulate 15 rooms of the same size, absorption coefficients, and reflection orders. Subsequently, we add ambient noise from the NoiseX-92 <cit.> database to the synthetic recordings, with SNRs ranging from 10 dB to 15 dB. The NoiseX-92 database contains 15 types of ambient noise; hence each room has a unique noise type. All sound event examples for the synthesis are randomly sampled from FSD50K and AudioSet. Based on the previously trained model using the environment-adaptive Meta-SELD method, we directly perform inference (meta-test) on these simulated datasets. §.§.§ Similarity maps Representations from the same or similar rooms should have a high similarity. In this work, the cosine distance between two representations is used to measure their similarity. For a batch of 128 samples from the same room, we extract support representations from the first 30 samples, corresponding to the number of support samples used for adaptation, and query representations from the last 98 samples. (A sketch of this support/query similarity computation follows.)
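A minimal sketch of the room-by-room similarity map, assuming one support and one query representation vector per room:

```python
import numpy as np

def similarity_map(support: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Cosine similarity between room-wise support representations
    (n_rooms, D) and query representations (n_rooms, D)."""
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    return s @ q.T   # entry [i, j]: support room i vs. query room j

rng = np.random.default_rng(2)
reps = rng.standard_normal((16, 2048))                # 16 rooms, D = 2048
sim = similarity_map(reps, reps + 0.1 * rng.standard_normal(reps.shape))
print(np.round(np.diag(sim).mean(), 3))               # diagonal should dominate
```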
The cosine similarity is computed between the support representations from one room and the query representations from the same or other rooms. Fig. <ref>(a) and (b) present the cosine similarity maps on the Base Dataset and STARSS23. The experimental results show that on the Base Dataset all diagonal elements of the similarity matrix are the maxima of their rows, while on STARSS23 12 of the 16 diagonal elements are the maxima of their rows. Fig. <ref>(c) and (d) show the cosine similarity maps on the simulated reverberant rooms and the simulated noisy rooms. We see that these maps are relatively symmetric and that the diagonal elements of the similarity matrices are large. In addition, we observe that the environment representations have a high resolution for rooms with low RT60, but a low resolution for rooms with high RT60: the differences between the environment representations of the rooms with high RT60 are small. This may result from the small RT60 range of the training set. §.§.§ Visualization of representations via t-SNE We sample 8 batches of clips from each dataset, with 32 clips per batch, and then compute representations for each batch[For STARSS23, the number of clips is less than 256 in a few rooms; we therefore repeat the corresponding batch of clips along the batch dimension.]. Empirically, we find that the learned representations cluster meaningfully: representations from the same room tend to be clustered, as shown in Fig. <ref>(e-h). Especially in Fig. <ref>(h), different recordings with the same noise type have similar features, which leads to better clustering performance. These observations demonstrate that the extracted representations are relevant to the environments. §.§ Results on computationally synthesized scenes We evaluate the method on computationally synthesized scenes to further validate the effectiveness of environment-adaptive Meta-SELD. These scenes are represented as semantically labeled 3D meshes from the 3D-FRONT dataset <cit.>, which contains 18,968 diversely furnished rooms in 6,813 professionally designed houses. We computationally simulate the SRIRs of 15 houses sampled from the 3D-FRONT dataset, in the same way as for the Geometric-Wave Acoustic (GWA) dataset <cit.>. We synthesize 256 5-second spatial recordings for each acoustic environment with these simulated SRIRs and sound events from FSD50K <cit.> and AudioSet <cit.>. Ambient noise from the NoiseX-92 <cit.> database is also mixed into the spatial recordings, with signal-to-noise ratios (SNRs) ranging from 10 dB to 15 dB. We use the previous methods trained on CSD and the Base Dataset to perform inference (meta-test) on the simulated dataset. Table <ref> presents the results on the computationally synthesized scenes. Notably, we see weak generalization to these environments for both the conventional supervised learning and the meta-learning methods, due to the disparities in environmental conditions between the training and test datasets. Additionally, the considerable variation in distribution between meta-training sets and meta-test sets puts the meta-learned prior knowledge at high risk of losing its effectiveness <cit.>. However, incorporating selective memory into Meta-SELD++ yields superior performance in contrast to the other methods, particularly improving the localization performance. § CONCLUSION This study presents environment-adaptive Meta-SELD, designed for efficient adaptation to specific acoustic environments using a limited number of samples recorded in those settings.
We apply Model-Agnostic Meta-Learning (MAML) to a pre-trained environment-independent SELD model to obtain initial parameters that generalize across different environments. Subsequently, we introduce selective memory and environment representations into Meta-SELD++ to alleviate the conflicts and the limitations of a common initialization across different environments. When evaluated on the development set of the STARSS23 dataset and on computationally synthesized scenes, our proposed environment-adaptive Meta-SELD demonstrates superior performance compared to conventional supervised-learning-based SELD methods. Furthermore, we investigate and visualize the environment representations. Experimental results show that the environment representations effectively capture the nuances of diverse acoustic environments. The potential applications of these environment representations are extensive, promising significant advancements in acoustic scene analysis in diverse settings.
MCnet-23-24 Fachbereich Mathematik, Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany Institute of Physics, NAWI Graz, University of Graz, Universitätsplatz 5, A-8010 Graz, Austria Particle Physics, Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Wien, Austria Erwin Schrödinger Institute for Mathematics and Physics, University of Vienna, Boltzmanngasse 9, A-1090 Wien, Austria Department of Physics, Lund University, Box 118, 221 00 Lund, Sweden We construct a set of Wigner 6j symbols with gluon lines (adjoint representations) in closed form, expressed in terms of similar 6j symbols with quark lines (fundamental representations). Together with Wigner 6j symbols with quark lines, this gives a set of 6j symbols sufficient for treating QCD color structure for any number of external particles, in or beyond perturbation theory. This facilitates a complete treatment of QCD color structure in terms of orthogonal multiplet bases, without ever explicitly constructing the corresponding bases. We thereby open up for a completely representation-theory-based treatment of SU(N) color structure, with the potential of significantly speeding up the treatment of color structure. Wigner 6j symbols with gluon lines: completing the set of 6j symbols required for color decomposition Malin Sjodahl January 14, 2024 ===================================================================================================== § INTRODUCTION A major challenge for accurate predictions of collision rates for processes involving many colored partons is the treatment of the color space associated with QCD. This challenge is typically addressed by expanding in color bases, often trace bases <cit.> or color-flow bases <cit.>, sometimes accompanied by sampling of the color states <cit.>. A problem with the trace and color-flow bases is that they are only orthogonal in the limit N→∞, and in fact overcomplete for many particles; for high multiplicities they are severely overcomplete <cit.>, with a dimension that scales as the factorial of the number of gluons plus quark-antiquark pairs. If one does not want to exploit sampling over different color structures[Typically the sampling is also accompanied by detailed relations between color flows and kinematic quantities.], as done in for example the CVolver program <cit.>, this gives rise to a major bottleneck for the squaring of the color structure, which then scales as a factorial squared. It appears appealing to explore minimal orthogonal bases. This is accomplished by multiplet bases <cit.>, which rely on the Clebsch-Gordan decomposition of the involved particle representations for constructing orthogonal bases. Examples of multiplet bases can be found in Refs. , and a general construction in Refs. . However, it is possible to do better than that: any color structure can be decomposed into a multiplet basis without explicitly constructing this basis, by making use of the group-invariant Wigner 6j symbols (here 6js for short, also known as 6j coefficients, Racah coefficients, or Racah W coefficients, up to signs), along with Wigner 3j coefficients and dimensions of representations. The problem of decomposing the color structure is then essentially reduced to the problem of finding a sufficient set of 6j symbols for the color decomposition in question. Some work in this direction has been pursued in Refs. , where symmetry is exploited to recursively calculate a set of 6j symbols applicable to processes with a limited number of partons.
Other recent work obtains SU(3) 6j symbols numerically, by first calculating SU(3) Clebsch-Gordan coefficients <cit.>. For SU(2), the problem is addressed in Ref. . In a recent paper <cit.>, we started to explore a third avenue, namely to recursively derive 6j symbols in terms of other 6j symbols and dimensions of representations. We there derived closed forms for a set of 6j symbols characterized by having quarks (fundamental representations) in opposing positions. In the present paper, we complete this set of 6j symbols with symbols where two of the opposing representations are quarks or gluons (adjoint representations), and 6js where one vertex contains only fundamental and adjoint representations, whereas the other representations are arbitrary. As we will see, this class of 6js defines a complete set for decomposing any color structure appearing in the standard model. We lay out the basics of color calculations using the birdtrack method in sec:computational. In sec:gluon6js-4cases we go through a general procedure for decomposing the color structure. This allows us to identify a set of 6j symbols that is sufficient to decompose any color structure to any order in perturbation theory. While one of the necessary classes of 6j symbols is calculated in Ref. , the remaining ones are calculated in sec:closed-form-expressions, after a careful discussion of how to define vertices in sec:vertices. Finally, we make concluding remarks in sec:conclusions. § REDUCING COLOR STRUCTURE IN BIRDTRACK NOTATION In this section we briefly outline how to calculate SU(N) invariants using the birdtrack method, assuming knowledge of a sufficient set of Wigner 3j[We will later normalize the 3js to 1.] and Wigner 6j symbols. It is worth remarking that while our discussion focuses on SU(N), in particular SU(3), this reduction method is applicable to any Lie group. For a full, comprehensive introduction to the birdtrack formalism we refer to Ref. ; a minimal introduction can be found in Appendix A of Ref. , whereas a more pedagogical account is written up in Ref. . Examples of birdtrack calculations for QCD can be found in Refs. and . As we are interested in fully color-summed (averaged) color structures, every color structure can be seen as a fully connected graph of SU(N) representations, for example as in fig:full contraction. This entails of course triplet and octet representations, but also the higher-dimensional irreducible representations (irreps) used in the construction of multiplet bases or appearing during the calculations. In the end, we want to calculate a scalar product in color space, for example between a Feynman diagram and a basis vector in an orthogonal multiplet basis. Generally, the color structure then consists of a fully connected graph.
The graph contains loops of various lengths; for example, we may encounter ⟨graphics⟩, where the double lines denote any irrep of SU(N), and in general should be supplied with representation labels and arrow directions, which we suppress here for readability. While short loops of length up to three can be immediately removed (see below), the fall-back method to handle long loops is to split them up into shorter loops by repeated insertion of the completeness relation GenRep-beta-gamma = ∑_δ d_δ/3j-gammaSTAR-beta-delta GenRep-gammaVbeta-delta-ProjOps, where d_δ denotes the dimension of the irrep δ, appearing in the Clebsch-Gordan decomposition, and where the denominator is a Wigner 3j symbol. Tracing both sides of this equation, and using ⟨graphics⟩ = d_α, it is clear that the completeness relation implies d_β d_γ = ∑_δ d_δ, as anticipated. Applying the completeness relation (<ref>) to two of the representations in eq:6VertexLoopOnly, marked in red below, schematically results in ⟨graphics⟩ (<ref>) ∑_α d_α/⟨graphics⟩ ⟨graphics⟩, where we now have a “vertex correction” loop with three internal representations. This loop can be removed using the Wigner 6j symbols GenRep-alpha-sigma-rhoSTAR–VCorr-delta-gamma-betaSTAR = ∑_a 1/3j-sigmaSTAR-alpha-rho-VertexLabelsVa 6j-delta-rho-sigma–beta-gamma-alpha–CornerLabelsV3a_Wigner-6j Vertex-alpha-sigma-rhoSTAR–V–VLabela. The sum above runs over instances a of the irrep ρ in α⊗σ, for example the two octets in 8⊗8 = 1⊕8⊕8⊕10⊕10̄⊕27. In this paper, every encountered vertex will contain at least one fundamental or adjoint representation, implying that most often there is only one instance, but for A⊗σ, with A being the adjoint representation of SU(N), and σ being an arbitrary irrep, there can be up to N-1 representations of type σ <cit.>. We will choose the corresponding vertices to be mutually orthogonal, in the sense that ⟨⟨graphics⟩, ⟨graphics⟩⟩ = ⟨graphics⟩ is required to vanish for a ≠ b. Furthermore, for the 6j symbols that we derive, we choose to normalize our vertices such that ⟨graphics⟩ ≡ δ_ab for all non-vanishing vertices, i.e., the 3j coefficients are normalized to one. This implies in particular a normalization that differs from the standard QCD convention with the generator normalization tr[t^at^b] = 1/2 δ^ab. We explain in app:restore-3j how to easily transform our results to any desired normalization. Note that after having applied eq:vertex-correction to eq:loop-reduced, we are left with a loop with one representation less, ⟨graphics⟩ (<ref>) ∑_α d_α/⟨graphics⟩ ⟨graphics⟩ (<ref>) ∑_α d_α/⟨graphics⟩ ⟨graphics⟩/⟨graphics⟩ ⟨graphics⟩. Repeatedly applying this procedure to eq:6VertexLoopOnly, it is thus possible to reduce loops with any number of internal representations down to loops of length three (removed using eq:vertex-correction) or length two, removed using the “self energy” relation ⟨graphics⟩ = (⟨graphics⟩/d_α) ⟨graphics⟩. In this way, assuming knowledge of the 6j symbols, it is possible to reduce any fully connected graph to a number. It should be noted that the required set of 6js depends on how the contraction is performed, and on what basis vectors are used.
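As a sanity check of the traced completeness relation d_β d_γ = ∑_δ d_δ, the following minimal Python sketch verifies its SU(2) analogue, where irrep dimensions are elementary (d_j = 2j+1) and the Clebsch-Gordan series is |j1-j2|, …, j1+j2. The example spins are arbitrary; this is purely illustrative of the dimension sum rule, not of the SU(N) machinery itself.

from fractions import Fraction

def dim(j):
    """Dimension of the SU(2) irrep with spin j."""
    return int(2 * j + 1)

def clebsch_gordan_spins(j1, j2):
    """Spins appearing in the decomposition of j1 (x) j2."""
    j = abs(j1 - j2)
    while j <= j1 + j2:
        yield j
        j += 1

j1, j2 = Fraction(3, 2), Fraction(1)   # arbitrary example spins
lhs = dim(j1) * dim(j2)
rhs = sum(dim(j) for j in clebsch_gordan_spins(j1, j2))
assert lhs == rhs, (lhs, rhs)
print(f"d_beta * d_gamma = {lhs} = sum of irrep dimensions = {rhs}")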
In the present paper, we consider basis vectors of the form ⟨graphics⟩, where we have a backbone chain of general representations α_1, α_2, ⋯, α_n, denoted by double lines with suppressed arrows, to which the external particle representations, octets, triplets and antitriplets, denoted by single lines, are attached (also with suppressed arrows). To the authors' knowledge, all multiplet bases in the literature are of this form. One could also imagine bases where general irreps are contracted in vertices. Such bases will require 6j symbols beyond those presented here, and are therefore beyond the scope of this paper. We emphasize once more that the philosophy underlying the present work is to avoid explicitly constructing any bases, and instead achieve a decomposition using 6js. § A SUFFICIENT SET OF 6J SYMBOLS FOR DECOMPOSING COLOR STRUCTURE In this section we identify a sufficient set of 6j symbols for decomposing color structure into the orthogonal basis vectors in eq:basis vector to any order in perturbation theory. We start out by considering tree-level color structures, and return to higher orders later. Again, letting single lines schematically denote triplet, octet or singlet representations (i.e., representations of the external particles) and letting double lines denote the general irreps encountered in the basis vectors, the fully connected graph for a tree-level color structure contracted with a basis vector will always contain at least two loops of the form ⟨graphics⟩, where the characterizing feature is that there is only one vertex from the initial color structure (the gray blob, representing a quark-gluon or triple-gluon vertex). Typically the color structure will contain many pieces that are trivial to contract using eq:vertex-correction and eq:self-energy, but we here consider a worst-case, general situation. To reduce loops of this type, the completeness relation, eq:completeness-relation, and the vertex correction relation, eq:vertex-correction, can be applied to the two red representations below ⟨graphics⟩ = ∑_ψ d_ψ/⟨graphics⟩ ⟨graphics⟩ = ∑_ψ,a d_ψ/⟨graphics⟩ ⟨graphics⟩/⟨graphics⟩ ⟨graphics⟩. Repeating this procedure will eventually result in a vertex correction containing the gray blob. (For the loop in the above example, this step would need to be repeated two more times.) The vertex correction with the gray blob gives ⟨graphics⟩ = ∑_a ⟨graphics⟩/⟨graphics⟩ ⟨graphics⟩, for some representations α, β and γ. This last step removes two vertices: one gray blob, i.e., a vertex from the initial color structure, and one vertex between arbitrary representations in the basis vector, eq:basis vector. As every contracted loop removes one vertex from the basis vector and one from the color structure to be decomposed, the resulting graph is topologically equivalent to a graph for a tree-level color structure with one less external parton. After a loop of the form of eq:Loop1 has been contracted, there must thus exist at least two loops of the type in eq:Loop1 in the resulting color structure by the above argument. Hence any tree-level color structure can be completely contracted by repeatedly contracting loops of the form of eq:Loop1, as in the sketch below. Treating only loops of the form of eq:Loop1 is thus sufficient for tree-level color structures.
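The tree-level procedure just described can be summarized as a simple skeleton. The following is a structural sketch only: the graph type and all helper methods (find_loop_with_single_blob, insert_completeness_relation, remove_vertex_correction, remove_self_energies, scalar_value) are hypothetical names standing in for a diagram representation and the operations of eq:completeness-relation, eq:vertex-correction, and eq:self-energy; no existing library is assumed.

def contract_tree_level(graph, wigner):
    """Reduce a fully connected color graph to a scalar, given tables of
    6j/3j symbols and representation dimensions in `wigner` (hypothetical)."""
    while graph.has_vertices():
        loop = graph.find_loop_with_single_blob()        # a loop of the form of eq:Loop1
        # Shrink the loop by repeatedly inserting the completeness relation and
        # removing the resulting three-representation "vertex correction" loops.
        while len(loop) > 3:
            loop = graph.insert_completeness_relation(loop, wigner)
            graph.remove_vertex_correction(loop, wigner)
        graph.remove_vertex_correction(loop, wigner)     # final step, with the gray blob
        graph.remove_self_energies(wigner)               # length-two loops, eq:self-energy
    return graph.scalar_value()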
We now address the situation where the color structure to be decomposed itself contains loops. It is then not always possible to choose color loops of the form in eq:basis vector. At one loop this happens for diagrams where all external partons form a single loop ⟨graphics⟩. (For all other one-loop color structures there is at least one vertex with two uncontracted indices, implying that a loop of the form in eq:Loop1 can be found, such that it is possible to contract loops as in eq:CRApplicationLoopType1.) For color structures of the form of eq:NLOAllLoop, there always exist loops of the form ⟨graphics⟩. Similarly to the loop in eq:Loop1, the steps detailed in eq:CRApplicationLoopType1 remain valid. However, at the end, instead of contracting a loop of the form in eq:CRApplicationLoopType1LastCR, a loop with four vertices is encountered, ⟨graphics⟩ = ∑_ψ d_ψ/⟨graphics⟩ ⟨graphics⟩ = ∑_ψ,a,b d_ψ/⟨graphics⟩ ⟨graphics⟩/⟨graphics⟩ ⟨graphics⟩/⟨graphics⟩ ⟨graphics⟩. Treating a loop like this removes two vertices from the color structure, and none from the basis vector. Since a one-loop color structure has two vertices more than a tree-level color structure for the same process, the number of vertices after the contraction matches a tree-level structure. Since two legs which initially belonged to the loop of the color structure to be decomposed (the upper legs in eq:CRApplicationLoopType3) now attach directly to the sequence of basis vector representations, the topology after the contraction is equivalent to that of a tree-level color structure. Note that a loop of the type in eq:Loop3 need not be the first loop to be contracted (most one-loop diagrams contain no loop of the form in eq:NLOAllLoop), but such loops may at some point be encountered, and must then be contracted to continue the reduction. In this way, we can thus contract any one-loop diagram. Color structures of arbitrary order in perturbation theory can be decomposed by contracting loops similar to eq:CRApplicationLoopType1 and eq:CRApplicationLoopType3, possibly with more than two (cf. eq:CRApplicationLoopType3) vertices from the initial color structure. The final steps in the contraction would then proceed as in eq:CRApplicationLoopType3, but with more completeness relations inserted. In the above color decomposition procedure, we can identify a minimal set of necessary 6j symbols, namely those appearing in the different steps above, Eqs. (<ref>-<ref>) and eq:CRApplicationLoopType3. Keeping in mind that the single lines above denote adjoint or fundamental representations, we conclude that the 6js we are after can be divided into the cases in tab:6js. We note that the 6js of type (0) in tab:6js are known from Ref. . In this article we address the computation of the remaining 6js. Before taking on this task, we must, however, be careful with how we define the vertices for the cases where we have more than one vertex between the same set of representations, which can happen for vertices with gluons, as discussed below eq:vertex-correction. § VERTEX CONSTRUCTION In tab:6js we sorted the 6j symbols that we are going to study in this work according to the number of gluon lines and according to the number of vertices with gluon lines that these 6j symbols contain. Before we can evaluate the 6j symbols, we have to construct all vertices with at least one gluon line.
When discussing how many vertices with a given set of irreps there are, it is useful to think of the general irrep labels (for which we use Greek letters) as Young diagrams. In fact, a systematic labeling of SU(N) irreps applicable for arbitrary N should rather be in terms of pairs of Young diagrams <cit.>. However, if we allow for Young diagrams with columns with an N-dependent number of boxes, we can replace each pair of Young diagrams by a single Young diagram <cit.>. For instance, the adjoint representation is then labeled by the Young diagram A = *(black) (black) (black), where, here and in what follows, a black column always represents a column with N-1 boxes. Hence, in the following, we can always think of irrep labels as single Young diagrams with, possibly, N-dependent column lengths. We will normalize all our vertices such that all non-vanishing 3j symbols are equal to 1, as already mentioned following eq:scalar-product-and-3j. Readers who prefer to work with different normalizations are referred to app:restore-3j for a simple transformation rule. For each instance of the irrep α in the complete reduction of γ⊗A we have to construct a vertex Vertex-alphaO-adj-gammaI–Va, where a thus runs from 1 to the multiplicity of α in γ⊗A. Essentially, we construct these vertices by splitting the gluon line into a qq̅-pair. More precisely, we consider all diagrams Vertex-alphaO-adj-gammaI–lambdaj-fund-fundSTAR, where j enumerates admissible intermediate irreps according to a scheme to be explained below, and construct the desired vertices as linear combinations of these diagrams. In general, these vertices then take the form Vertex-alphaO-adj-gammaI–Va = ∑_j C^αγ_aj Vertex-alphaO-adj-gammaI–lambdaj-fund-fundSTAR with coefficients C^αγ_aj∈ℂ, which will actually turn out to be real-valued functions of N. We distinguish the two cases α≠γ and α=γ. If α≠γ then there is a non-zero vertex (<ref>) only if α can be found in the Clebsch-Gordan decomposition of γ⊗A. This means that we can obtain α from γ by adding a box in one row and subsequently removing a box in a different row (possibly after first adding a column of length N) <cit.>. In this case there is a unique intermediate irrep λ_1 in diagram (<ref>), representing the intermediate step in this process after adding a box but before removing the other box. Hence, for such α≠γ, we find Vertex-alphaO-adj-gammaI–V1 = C^αγ_11 Vertex-alphaO-adj-gammaI–lambda1-fund-fundSTAR, where the constant has to be chosen such that the normalization condition for the corresponding 3j symbol, ⟨Vertex-alphaO-adj-gammaI–V1, Vertex-alphaO-adj-gammaI–V1⟩ = 3j-alpha-gamma-adj–V1-V1 = 1, is fulfilled. After a few steps, spelled out in App:props_of_vertex-corr-diagrams, we get from eq:square_diagram_final_result C^αγ_11 = √(d_λ_1(N^2-1)) for α≠γ. For α=γ there can be up to N-1 vertices of type (<ref>), cf. Appendix B of Ref. , i.e., if the multiplicity of α in the complete reduction of α⊗A is K, we have to construct the vertices Vertex-alphaO-adj-alphaI–Va, a=1,…,K. In this case there exist K+1 different admissible irreps λ_j rendering the diagrams Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR, j=1,…,K+1, non-zero, as discussed in Appendix <ref>, where we also show that all vertices in eq:Vertex-alphaO-adj-alphaI–Va are linear combinations of these diagrams, i.e.
Vertex-alphaO-adj-alphaI–Va = ∑_j C^αα_aj Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR. Hence, we can obtain a set of orthonormal vertices (<ref>) by applying the Gram-Schmidt algorithm to the set of diagrams (<ref>) with admissible intermediate irreps λ_j. In order to obtain a unique result when carrying out Gram-Schmidt we have to decide how to sort the diagrams (<ref>). To this end, note that an admissible λ_j is obtained by adding an extra box to α. We say that Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR < Vertex-alphaO-adj-alphaI–lambdak-fund-fundSTAR if in λ_k this extra box is added further down compared to where it was added in λ_j. We then sort the birdtrack diagrams (<ref>) in increasing order. Hence, the first birdtrack diagram in our list is always the diagram with intermediate irrep λ_1 which is obtained by adding a box to the first row of α. In Appendix <ref> we show that the last diagram in this list is always a linear combination of the first K diagrams and can thus be omitted. The sum in eq:vertices_for_alpha=gamma hence runs from 1 to K. In order to carry out Gram-Schmidt we only need to know the scalar products between all diagrams (<ref>), which are calculated in App. <ref>, Eq. (<ref>)–(<ref>). We denote this s_jk = ⟨Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR, Vertex-alphaO-adj-alphaI–lambdak-fund-fundSTAR⟩ = 1/(N^2-1)( δ_jk/d_λ_j - 1/(N d_α) ). The scalar products s_jk also depend on the irrep α, but we do not display this dependence in our notation since in the following s_jk for different irreps α never appear alongside each other in our equations. We explicitly state the formulae for the first two vertices, which are the only vertices in the physically relevant case N=3, Vertex-alphaO-adj-alphaI–V1 = 1/√(s_11) Vertex-alphaO-adj-alphaI–lambda1-fund-fundSTAR, Vertex-alphaO-adj-alphaI–V2 = √(s_11/(s_11s_22-s_12^2)) ( Vertex-alphaO-adj-alphaI–lambda2-fund-fundSTAR - (s_12/s_11) Vertex-alphaO-adj-alphaI–lambda1-fund-fundSTAR). Further vertices, which only exist for N>3, are calculated by straightforwardly continuing Gram-Schmidt. In app:examples we illustrate the explicit vertex construction with a few examples. § FORMULAE FOR GLUON 6J SYMBOLS We will now describe how the calculation of the different classes of 6j symbols proceeds. Our main tool will be the repeated insertion of vertex corrections, and the Fierz identity, eq:Fierz, to decompose gluon lines. We will go through the basic idea and steps for the simpler cases here, but defer the details of longer calculations to App:steps for the sake of readability. §.§ Case 1: 6js with a quark-gluon vertex We here consider the 6j symbol which contains a qq̅g vertex (in its center). We proceed to calculate this by expanding the gluon vertex into a vertex correction 6j-alpha-beta-gamma–fundSTAR-fund-adj (<ref>) √(d_λ_1(N^2-1)) Square-alpha-beta-gamma-lambda1–fundSTAR-fund-fundSTAR-fund-adj (<ref>) δ_βλ_1/√(d_β(N^2-1)), where we have assumed α≠γ and used eq:square_diagram_final_result from App:props_of_vertex-corr-diagrams, which builds on the Fierz identity, eq:Fierz. In the case α=γ, again using eq:square_diagram_final_result, for the first two vertices a=1,2 we obtain 6j-alpha-lambdak-alpha–fundSTAR-fund-adj–V1 (<ref>) s_1k/√(s_11) and 6j-alpha-lambdak-alpha–fundSTAR-fund-adj–V2 (<ref>) √(s_11/(s_11s_22-s_12^2))( s_2k - (s_12/s_11)s_1k ). Note that the last expression vanishes for k=1. For N>3, there might, as described, be more vertices, which are then treated similarly.
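The Gram-Schmidt construction above, together with the case-1 6j values, can be prototyped numerically. The following numpy sketch builds the Gram matrix s_jk, obtains the coefficient rows C^αα_aj as the inverse of the Cholesky factor (a matrix form of Gram-Schmidt, matching the ordering convention above), and evaluates the case-1 6js as ∑_j C_aj s_jk. The SU(3) octet example uses the standard dimensions d_α = 8 and d_λ = (15, 6, 3); the resulting C[0,0] = 8√5 agrees with 1/√(s_11).

import numpy as np

def gram_matrix(N, d_alpha, d_lambda):
    """s_jk = ( delta_jk / d_lambda_j - 1/(N d_alpha) ) / (N^2 - 1)."""
    d = np.asarray(d_lambda, dtype=float)
    return (np.diag(1.0 / d) - 1.0 / (N * d_alpha)) / (N**2 - 1)

def vertex_coefficients(S, K):
    """Rows a = 1..K of C give orthonormal vertices V_a = sum_j C[a,j] * diagram_j."""
    L = np.linalg.cholesky(S[:K, :K])    # Gram-Schmidt in matrix form
    return np.linalg.inv(L)              # lower triangular: V_a uses diagrams 1..a only

# SU(3) octet example: alpha = [2,1], lambda_1 = [3,1], lambda_2 = [2,2], lambda_3 = [1]
S = gram_matrix(3, 8, [15, 6, 3])
C = vertex_coefficients(S, K=2)
print(C)                                 # C[0,0] = 8*sqrt(5) = 1/sqrt(s_11)
assert np.allclose(C @ S[:2, :2] @ C.T, np.eye(2))   # orthonormality <V_a, V_b> = delta_ab
sixj_case1 = C @ S[:2, :]                # case-1 6js: entry (a,k) equals sum_j C_aj s_jk
print(sixj_case1)                        # column k = 1 of the second row vanishes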
§.§ Case 2: 6js with a gluon line opposing a quark line In this case we have one quark and one gluon line attaching to different vertices. Rewriting the gluon vertices in terms of vertex corrections and invoking the Fierz identity, we then find, after a few steps spelled out in App:steps, 6j-fund-gamma-delta–betaSTAR-adj-alpha–Va-Vb_smaller = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) (6j-fund-muk-delta–lambdajSTAR-fund-alpha 6j-fund-gamma-muk–betaSTAR-fundSTAR-lambdaj - δ_αβ δ_γδ/(N d_α d_γ)), where the 6j symbols with two quark lines are given in closed form in Ref. . §.§ Case 3: 6js with two opposing gluon lines Case 3 can be addressed with a strategy similar to the other cases. Our result, for which we demonstrate all intermediate steps in App:steps, reads 6j-adj-gamma-delta–betaSTAR-adj-alpha–Va-Vb-Vc-Vd_smaller = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) (6j-fund-alphaSTAR-deltaSTAR–lambda-adj-mukSTAR–Vc-Vd 6j-fund-gammaSTAR-betaSTAR–muk-adj-lambdajSTAR–Vc-Vd - δ_αβ δ_γδ/(N d_α d_γ)). §.§ Case 4: 6js with three-gluon vertices The class of 6j symbols with three gluons contains two three-gluon vertices, which are typically taken to be proportional to if^abc and d^abc. We will illustrate the case of if^abc first. In particular, we use the definition of the if^abc-vertex in terms of traces, and then insert vertex corrections, 6j-alpha-beta-gamma–adj-adj-adj–Vf-Va-Vb-Vc_smaller = (N^2-1)/√(2N) (6j-alpha-beta-gamma–adj-adj-adj–loop-clockwise–Va-Vb-Vc_smaller - 6j-alpha-beta-gamma–adj-adj-adj–loop-counterclockwise–Va-Vb-Vc_smaller). Using the Fierz identity, eq:Fierz, we can remove all internal gluon lines, and the results are expressed in terms of a number of different diagrams which reduce to 3j symbols, dimensions and traces over quark lines, see Appendix <ref>. A single non-trivial diagram remains, which can be expressed as 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–crossing = ∑_σ d_σ 6j-alpha-lambdaj-fund–fundSTAR-beta-sigmaSTAR 6j-fund-muk-gamma–betaSTAR-fund-sigma 6j-fund-nul-alpha–gammaSTAR-fund-sigma, where a minus sign next to a vertex indicates that the lines are connected to this vertex in opposite order, i.e., GenRep-VertexSTAR-gammaO-betaO-alphaO = GenRep-Vertex-gammaO-betaOSTAR-alphaOSTAR, see also appendix C of Ref. . For the above vertices with quarks this only makes a difference for the antisymmetric vertex of q⊗q <cit.>. Our final result for the f-vertex is 6j-alpha-beta-gamma–adj-adj-adj–Vf-Va-Vb-Vc_smaller = 1/(N^2-1)^2 1/√(2N) ∑_j=1^a ∑_k=1^b ∑_ℓ=1^c C^βα_aj C^γβ_bk C^αγ_cℓ (∑_σ d_σ 6j-alpha-lambdaj-fund–fundSTAR-beta-sigmaSTAR 6j-fund-muk-gamma–betaSTAR-fund-sigma 6j-fund-nul-alpha–gammaSTAR-fund-sigma - δ_λ_jμ_k δ_λ_jν_ℓ/d_λ_j^2), while we obtain for the d-vertex 6j-alpha-beta-gamma–adj-adj-adj–Vd-Va-Vb-Vc_smaller = 1/(N^2-1)^2 √(N/(2(N^2-4))) ∑_j=1^a ∑_k=1^b ∑_ℓ=1^c C^βα_aj C^γβ_bk C^αγ_cℓ (∑_σ d_σ 6j-alpha-lambdaj-fund–fundSTAR-beta-sigmaSTAR 6j-fund-muk-gamma–betaSTAR-fund-sigma 6j-fund-nul-alpha–gammaSTAR-fund-sigma + δ_λ_jμ_k δ_λ_jν_ℓ/d_λ_j^2 + 4/N^2 δ_αβ δ_αγ/d_α^2 - 2/N( δ_αγ δ_λ_jμ_k/(d_α d_λ_j) + δ_αβ δ_μ_kν_ℓ/(d_α d_μ_k) + δ_βγ δ_λ_jν_ℓ/(d_β d_λ_j) )). § CONCLUSIONS AND OUTLOOK In the present paper we have shown how to calculate a set of Wigner 6j coefficients with adjoint representations. Together with a set of previously derived 6js <cit.>, this set constitutes a complete set of 6js required to decompose any color structure, to any order, into orthogonal multiplet bases, cf.
eq:basis vector. This opens the door to using orthogonal, representation-theory-based color bases also for processes with high multiplicities, including the analysis of evolution equations in color space <cit.>. We note, however, that the present work does not close the research area of representation-theory-based treatment of color structure. In particular, more general 6j symbols are required for fully general multiplet bases (with vertices between general representations). We believe that this can be addressed with similar methods. § ACKNOWLEDGMENTS We are thankful to Judith Alcock-Zeilinger for useful discussions. We also thank the Erwin Schrödinger International Institute for Mathematics and Physics (ESI) in Vienna for hospitality, discussions, and support, both via the Research in Teams programme “Amplitude Level Evolution II: Cracking down on color bases” (RIT0521), where this work was initiated, and via the ESI-QFT 2023 workshop, where the scientific part was concluded. MS acknowledges support by the Swedish Research Council (contract number 2016-05996), as well as the European Union’s Horizon 2020 research and innovation programme (grant agreement No 668679). § PROPERTIES OF VERTEX CORRECTION DIAGRAMS We discuss some properties of the vertex correction diagrams in eq:Vertex-alphaO-adj-gammaI–lambdaj-fund-fundST, which we use for the construction of vertices with at least one gluon. Let K be the multiplicity of α in the complete reduction of α⊗A, giving the number of vertices (<ref>) to be constructed. In contrast, the number of intermediate irreps λ_j in eq:Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR is given by the multiplicity of α in the complete reduction of α⊗1⊗1̅. The latter number is one higher than the former, since due to 1⊗1̅ = A ⊕ ∙ (where ∙ denotes the trivial representation) we have α⊗1⊗1̅ = (α⊗A) ⊕ α. Moreover, K+1 is also the number of terms in the complete reduction of α⊗1, i.e., the number of ways in which we can add a box to the Young diagram α. First we show that the diagrams in eq:Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR are linearly dependent. To this end, consider the complete reduction of α⊗1, Misc-alpha-tensor-q = ∑_j=1^K+1 d_λ_j Misc-project-alpha-q-to-lambdaj, multiply with a quark-gluon vertex (Lie algebra generators), and contract the quark and antiquark lines, yielding Misc-alpha-tadpole_=0 = ∑_j=1^K+1 d_λ_j Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR. The l.h.s. vanishes since the generators are traceless, i.e., we have found a non-trivial vanishing linear combination of the diagrams in eq:Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR. Next we show that all vertices in eq:Vertex-alphaO-adj-alphaI–Va are linear combinations of the diagrams in eq:Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR. To this end, consider a gluon exchange between α and a quark line, and insert two completeness relations: Misc-alpha-q-gluon_exchange = ∑_j,k=1^K+1 d_λ_j d_λ_k Misc-alpha-q-gluon_exchange-inserted_completness. Due to Schur's Lemma the middle segment can only be non-zero if λ_j and λ_k are equivalent. If two irreps in the complete reduction of α⊗1 are equivalent then they are the same, i.e., Misc-alpha-q-gluon_exchange-middle_section = C_aj δ_jk Misc-lambdaj with some constant C_aj. Substituting into eq:Misc-alpha-q-gluon_exchange, multiplying with a quark-gluon vertex, and contracting the quark and antiquark line we find Misc-alpha-gluon_w_quark_loop = ∑_j=1^K+1 C_aj d_λ_j^2 Vertex-alphaO-adj-alphaI–lambdaj-fund-fundSTAR. The quark loop on the l.h.s.
can be traded for a factor of (N^2-1)^-1 (recall that we set all 3j symbols equal to 1), and by defining C^αα_aj = C_aj d_λ_j^2 (N^2-1) we obtain eq:vertices_for_alpha=gamma, as claimed in sec:vertices. Now we can even take advantage of the linear dependence (<ref>) of the vertex correction diagrams (<ref>). Equation (<ref>) tells us that any one of the K+1 diagrams (<ref>) can be expressed as a linear combination of the other K diagrams, since none of the coefficients in Eq. (<ref>) vanishes. In Sec. <ref> we order the diagrams in a unique way and determine the orthonormal vertices in eq:Vertex-alphaO-adj-alphaI–Va by means of the Gram-Schmidt algorithm. Since the last vertex correction diagram is guaranteed to be a linear combination of the first K diagrams, we can always terminate Gram-Schmidt before using the last diagram, i.e., the vertices (<ref>) are actually linear combinations of the first K vertex correction diagrams (<ref>). Finally, we explicitly determine the scalar products between all vertex correction diagrams (<ref>). The result is the main ingredient for the Gram-Schmidt process in Sec. <ref>. Consider ⟨Vertex-alphaO-adj-gammaI–beta-fund-fundSTAR, Vertex-alphaO-adj-gammaI–delta-fund-fundSTAR⟩ = Square-alpha-beta-gamma-delta–fundSTAR-fund-fundSTAR-fund-adj. The square diagram can be evaluated by invoking the Fierz identity (or adjoint representation projector, also equivalent to the completeness relation for q⊗q̅), which with our unit 3j symbols takes the form ⟨graphics⟩ = 1/(N^2-1)( ⟨graphics⟩ - (1/N) ⟨graphics⟩ ). Inserting this gives for the scalar product Square-alpha-beta-gamma-delta–fundSTAR-fund-fundSTAR-fund-adj = 1/(N^2-1)( Square-alpha-beta-gamma-delta–vert - (1/N) Square-alpha-beta-gamma-delta–horiz ) = 1/(N^2-1)( δ_βδ/d_β - δ_αγ/(N d_α) ). § EXAMPLES OF VERTEX CONSTRUCTION We illustrate how to construct vertices of type (<ref>) using methods and results from sec:vertices. First consider an example with α≠γ. For α=1,1 and γ=2 the unique intermediate irrep is λ_1=2,1. Then, using eq:different rep norm for normalization, the unique vertex with irreps 1,1, 2 and one gluon reads Vertex-Y11-adj-Y2–V1 = (N^2-1) √(N/3) Vertex-Y11-adj-Y2–Y21-fund-fundSTAR. The Young diagram with the smallest number of boxes for which there is more than one vertex is α=γ=2,1, i.e., the octet for N=3 (note that this is not the adjoint representation for N≠3). The admissible intermediate irreps are λ_1=3,1 and λ_2=2,2. Using eq:Vertex-alphaO-adj-alphaI–V2, the orthonormal vertices become Vertex-Y21-adj-Y21–V1 = N(N^2-1)√((N+2)/(5N-6)) Vertex-Y21-adj-Y21–Y31-fund-fundSTAR and Vertex-Y21-adj-Y21–V2 = (N(N^2-1)/6)√((5N-6)/(N-2)) ( Vertex-Y21-adj-Y21–Y22-fund-fundSTAR + 3(N+2)/(5N-6) Vertex-Y21-adj-Y21–Y31-fund-fundSTAR ). For α=γ=A= *(black) (black) (black) (recall that the black column stands for a column with N-1 boxes) we obtain three-gluon vertices for general N. The admissible intermediate irreps are then λ_1= *(black) (black) (black) and λ_2= *(black) (black) (black). Using eq:Vertex-alphaO-adj-alphaI–V2, our orthonormal vertices read Vertex-adj-adj-adj–V1 = (N^2-1)√(N+2) Vertex-adj-adj-adj–Yc2-fund-fundSTAR and Vertex-adj-adj-adj–V2 = (N(N^2-1)/2)√(N-2) ( Vertex-adj-adj-adj–Yc11-fund-fundSTAR + (N+2)/N Vertex-adj-adj-adj–Yc2-fund-fundSTAR ). Notice that for N=3 Eqs. (<ref>)/(<ref>) and Eqs. (<ref>)/(<ref>) coincide.
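The linear dependence ∑_j d_λ_j · (diagram j) = 0 derived above has a simple numerical signature: the vector of intermediate-irrep dimensions must lie in the null space of the Gram matrix s_jk. The following short check uses the SU(3) octet example from above (d_α = 8, d_λ = (15, 6, 3), noting 15+6+3 = N·d_α = 24).

import numpy as np

N, d_alpha = 3, 8
d_lam = np.array([15.0, 6.0, 3.0])
S = (np.diag(1.0 / d_lam) - 1.0 / (N * d_alpha)) / (N**2 - 1)
print(S @ d_lam)                     # ~ [0, 0, 0]
assert np.allclose(S @ d_lam, 0.0)   # dimension vector spans the null space of s_jk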
Instead of the latter vertices, one will likely want to use the much more common antisymmetric f and symmetric d vertices, to which our vertices are related by a unitary transformation, which we explicitly state below. Like all other vertices in this article, we normalize f and d such that the corresponding 3j symbols are equal to one, i.e., Vertex-adj-adj-adj–Vf = (N^2-1)/√(2N) ( Vertex-adj-adj-adj–loop-clockwise - Vertex-adj-adj-adj–loop-counterclockwise ) and Vertex-adj-adj-adj–Vd = (N^2-1)√(N/(2(N^2-4))) ( Vertex-adj-adj-adj–loop-clockwise + Vertex-adj-adj-adj–loop-counterclockwise ); see app:restore-3j for how to easily transform results to other normalizations. The vertices (<ref>) and (<ref>) are related to f and d by a unitary transformation, Vertex-adj-adj-adj–V1 = -√((N+2)/(2N)) Vertex-adj-adj-adj–Vf + √((N-2)/(2N)) Vertex-adj-adj-adj–Vd, Vertex-adj-adj-adj–V2 = -√((N-2)/(2N)) Vertex-adj-adj-adj–Vf - √((N+2)/(2N)) Vertex-adj-adj-adj–Vd, and vice versa, Vertex-adj-adj-adj–Vf = -√((N+2)/(2N)) Vertex-adj-adj-adj–V1 - √((N-2)/(2N)) Vertex-adj-adj-adj–V2, Vertex-adj-adj-adj–Vd = √((N-2)/(2N)) Vertex-adj-adj-adj–V1 - √((N+2)/(2N)) Vertex-adj-adj-adj–V2, facilitating easy conversion. The coefficients of this unitary transformation are determined by scalar products between the two sets of vertices, and these scalar products can be evaluated by calculations similar to Eqs. (<ref>)–(<ref>). § DETAILS OF 6J DERIVATIONS We here give, in full detail, the intermediate steps for the derivations in sec:closed-form-expressions. §.§ Derivation for case 2 We here derive the form of the 6j coefficients in eq:case 2. In essence, the vertices involving gluons are expressed in terms of vertex corrections, after which the Fierz identity, eq:Fierz, is applied, and vertex corrections are removed using eq:vertex-correction, 6j-fund-gamma-delta–betaSTAR-adj-alpha–Va-Vb = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk 6j-fund-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) ( 6j-fund-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk-connect - (1/N) 6j-fund-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk-loops ) = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) (6j-fund-muk-delta–lambdajSTAR-fund-alpha 6j-fund-gamma-muk–betaSTAR-fundSTAR-lambdaj - δ_αβ δ_γδ/(N d_α d_γ)). §.§ Derivation for case 3 The steps in the derivation of eq:case 3 progress similarly to those in the derivation of eq:case 2, 6j-adj-gamma-delta–betaSTAR-adj-alpha–Va-Vb-Vc-Vd = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk 6j-adj-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk–Vc-Vd = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) ( 6j-adj-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk-connect–Vc-Vd - (1/N) 6j-adj-gamma-delta–betaSTAR-adj-alpha–lambdaj-muk-loops–Vc-Vd ) = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) (6j-adj-muk-delta–lambdajSTAR-fund-alpha–Vc-Vd 6j-adj-gamma-muk–betaSTAR-fundSTAR-lambdaj–Vc-Vd - δ_αβ δ_γδ/(N d_α d_γ)) = ∑_j=1^a ∑_k=1^b C^βα_aj C^δγ_bk/(N^2-1) (6j-fund-alphaSTAR-deltaSTAR–lambda-adj-mukSTAR–Vc-Vd 6j-fund-gammaSTAR-betaSTAR–muk-adj-lambdajSTAR–Vc-Vd - δ_αβ δ_γδ/(N d_α d_γ)). We remark that the result looks very similar to the result for case 2, but that it is now expressed in terms of the 6js from case 2. §.§ Derivation for case 4 Again the gluon vertices are expressed in terms of vertex corrections with quarks, both in the triple-gluon vertices and in the vertices with the general representations.
This gives for the antisymmetric (f) triple-gluon vertex 6j-alpha-beta-gamma–adj-adj-adj–Vf-Va-Vb-Vc_smaller = (N^2-1)/√(2N) (6j-alpha-beta-gamma–adj-adj-adj–loop-clockwise–Va-Vb-Vc_smaller - 6j-alpha-beta-gamma–adj-adj-adj–loop-counterclockwise–Va-Vb-Vc_smaller) = (N^2-1)/√(2N) ∑_j=1^a ∑_k=1^b ∑_ℓ=1^c C^βα_aj C^γβ_bk C^αγ_cℓ (6j-alpha-beta-gamma–adj-adj-adj–loop-clockwise–insert-lambdaj-muk-nul - 6j-alpha-beta-gamma–adj-adj-adj–loop-counterclockwise–insert-lambdaj-muk-nul), and the symmetric (d) vertex differs only by the sign of the second term. The second term above is calculated using the Fierz identity (<ref>), 6j-alpha-beta-gamma–adj-adj-adj–loop-counterclockwise–insert-lambdaj-muk-nul = 1/(N^2-1)^3 [6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-alpha-beta-gamma - (1/N)(6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-beta-nul + 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-gamma-lambdaj + 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-alpha-muk) + (3/N^2) 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-lambdaj-muk-nul - (1/N^3) 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-lambdaj-muk-nul–counterclockwise], where the closed quark loop in the last diagram simply yields a factor of N, and the others are easy to evaluate using the self-energy relation, eq:self-energy, for example 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-alpha-beta-gamma = δ_λ_jμ_k δ_λ_jν_ℓ/d_λ_j^2. By identical steps, the first term in eq:3g decomp gives 6j-alpha-beta-gamma–adj-adj-adj–loop-clockwise–insert-lambdaj-muk-nul = 1/(N^2-1)^3 [6j-alpha-beta-gamma–insert-lambdaj-muk-nul–crossing - (1/N)(6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-beta-nul + 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-gamma-lambdaj + 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-alpha-muk) + (3/N^2) 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-lambdaj-muk-nul - (1/N^3) 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–loops-lambdaj-muk-nul–clockwise]. Here the first term needs to be reduced using 6j symbols, 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–crossing = 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–uncrossing = ∑_σ d_σ 6j-alpha-beta-gamma–insert-lambdaj-muk-nul–uncrossing–insert-sigma = ∑_σ d_σ 6j-alpha-lambdaj-fund–fundSTAR-beta-sigmaSTAR 6j-fund-muk-gamma–betaSTAR-fund-sigma 6j-alpha-sigma-gamma-nul–fund-fund = ∑_σ d_σ 6j-alpha-lambdaj-fund–fundSTAR-beta-sigmaSTAR 6j-fund-muk-gamma–betaSTAR-fund-sigma 6j-fund-nul-alpha–gammaSTAR-fund-sigma, where a minus sign next to a vertex indicates that the lines are connected to this vertex in opposite order, see eq:signed-vertex. The expressions calculated here are assembled in eq:f and eq:d for the antisymmetric and symmetric vertices, respectively. § VERTEX NORMALIZATIONS LEADING TO NON-TRIVIAL 3J SYMBOLS All explicit formulae for Wigner 6j symbols in this article, in particular the results in sec:closed-form-expressions, are valid for vertices normalized such that all non-vanishing 3j symbols are equal to 1. While this normalization is convenient, it differs from normalizations typically applied in the context of QCD.
We therefore here give a simple rule for how to transform any of our 6j symbols when changing the normalization of any 3j symbol. Assume we have calculated the 6j symbol 6j-with-alpha-beta-gamma–Vbullet, where we chose the normalization 3j-alpha-beta-gamma = 1. If we prefer this 3j symbol to be equal to C ≠ 1, we define a vertex Vertex-betaOut-gammaIn-alphaIn–Vsquare = √(C) Vertex-betaOut-gammaIn-alphaIn, which then satisfies 3j-alpha-beta-gamma–Vsquare = C 3j-alpha-beta-gamma = C. Consequently, 6j-with-alpha-beta-gamma–Vsquare = √(C) 6j-with-alpha-beta-gamma–Vbullet. In short: for each vertex whose 3j symbol you normalize to a number ≠ 1, multiply our 6j symbol by the square root of the value of your 3j symbol in order to obtain the value of the 6j symbol with your normalization convention.
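The rescaling rule is mechanical enough to capture in a few lines. In the sketch below, the target 3j value (N^2-1)/2 for a quark-gluon vertex is our own assumption, obtained by contracting the vertex with itself under the common convention tr[t^a t^b] = δ^ab/2; the 6j value itself is a placeholder, and the helper applies one factor of √C per renormalized vertex.

import math

def renormalize_6j(sixj_unit_3j, threej_values):
    """Convert a 6j computed with unit 3j symbols to a convention where the 3j
    symbol of each of its four vertices takes the value in `threej_values`."""
    factor = math.prod(math.sqrt(C) for C in threej_values)
    return factor * sixj_unit_3j

N = 3
sixj = 0.1   # placeholder value of a 6j computed with unit 3j normalization
# one quark-gluon vertex rescaled to (N^2-1)/2, the other three kept at 3j = 1:
print(renormalize_6j(sixj, [(N**2 - 1) / 2, 1, 1, 1]))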
http://arxiv.org/abs/2312.16688v1
{ "authors": [ "Stefan Keppeler", "Simon Plätzer", "Malin Sjodahl" ], "categories": [ "hep-ph", "math-ph", "math.MP" ], "primary_category": "hep-ph", "published": "20231227190001", "title": "Wigner 6j symbols with gluon lines: completing the set of 6j symbols required for color decomposition" }
Kunal Garg (Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA); James Usevitch (Department of Electrical and Computer Engineering, Brigham Young University, 450 Engineering Building, Provo, UT 84602, USA); Joseph Breeden and Devansh Agrawal (Department of Aerospace Engineering, University of Michigan, Ann Arbor, 1320 Beal Avenue, Ann Arbor, MI 48109, USA); Mitchell Black (Toyota North America Research and Development, 1555 Woodridge Ave, Ann Arbor, MI 48105, USA); Hardik Parwana (Department of Robotics, University of Michigan, Ann Arbor, 2505 Hayward St., Ann Arbor, MI 48109, USA); Dimitra Panagou (corresponding author, [email protected]; Department of Robotics and Department of Aerospace Engineering, University of Michigan, Ann Arbor, 2505 Hayward St., Ann Arbor, MI 48109, USA). This tutorial paper presents recent work of the authors that extends the theory of Control Barrier Functions (CBFs) to address practical challenges in the synthesis of safe controllers for autonomous systems and robots. We present novel CBFs and methods that handle safety constraints (i) with time and input constraints under disturbances, (ii) with high-relative degree under disturbances and input constraints, and (iii) that are affected by adversarial inputs and sampled-data effects. We then present novel CBFs and adaptation methods that prevent loss of validity of the CBF, as well as methods to tune the parameters of the CBF online to reduce conservatism in the system response. We also address the pointwise-only optimal character of CBF-induced control inputs by introducing a CBF formulation that accounts for future trajectories, as well as implementation challenges such as how to preserve safety when using output feedback control and zero-order-hold control. Finally we consider how to synthesize non-smooth CBFs when discontinuous inputs and multiple constraints are present. Control barrier functions; safe control design; practical challenges in safe control of robotic and autonomous systems § INTRODUCTION Control Barrier Functions (CBFs) have been developed in recent years as a tool to verify and synthesize trajectories for nonlinear constrained control systems. Their principle is as follows: given a constraint function, termed the barrier function hereafter, whose zero super-level (or sub-level) sets define a constrained set, also termed the safe set, the key idea is that one restricts the rate of change of the barrier function along the system trajectories using a class 𝒦 function of the barrier function <cit.>. If such a condition can be satisfied everywhere in the constrained set under the given dynamics and control input constraints, then the barrier function is called a Control Barrier Function (CBF), and the constrained set is forward invariant. This method, in conjunction with Control Lyapunov Functions (CLFs) for stability, has been employed to design safe controllers for several applications.
§.§ Challenges However, similarly to Lyapunov methods, some of the major challenges in verifying safety and synthesizing safe controllers are that 1) finding a valid CBF for arbitrary system dynamics is not trivial, 2) safety constraints of high relative degree compared to the system dynamics, as well as input constraints, make the problem even more challenging, and 3) modeling/parametric uncertainty dictates the use of tools and techniques from robust control, adaptation and learning in order to define valid CBFs that account for uncertainty. Among other challenges, the fact that the control inputs derived from the CBF condition are only pointwise optimal (also often called myopic control inputs) has given rise to considerations of the conditions under which one can guarantee optimality and feasibility of the resulting control policies. In the recent 3-4 years, the literature has seen an abundance of papers with a variety of methodologies that aim to address some of the aforementioned challenges. Finding valid CBFs, for example, has been addressed with offline <cit.> and online methods, searching for either a valid function h over the constrained set, searching for some of the parameters of the CBF condition <cit.>, or adapting those parameters online in order to render the candidate function a valid CBF <cit.>. High-relative-degree constraint functions have been first addressed in <cit.>, which considers the class of Exponential CBFs, where the functions used in the CBF derivative condition are linear in their argument; then <cit.> generalizes Exponential CBFs to generic nonlinear functions in the form of Higher-Order CBFs (HOCBFs). Time constraints and specifications (beyond state constraints) and cooperative multi-agent systems have also been considered <cit.>. In a relatively less explored area, CBFs for noncooperative multi-agent systems have also started being studied recently <cit.>. Adaptive, robust, and learning-based formulations have also appeared in order to deal with various sources of uncertainty (stochastic uncertainty in the system dynamics, parametric uncertainty, deterministic additive external disturbances); see for example <cit.>. §.§ Overview and Organization The scope of this tutorial paper is not to provide a thorough literature survey and review of recent CBF techniques, but rather to focus on some of the authors' own work on safety verification and control, presented in a roughly chronological and thematic order. More specifically, Section <ref> shows how time constraints can be encoded as novel forms of timed CBFs, called Fixed-Time Barriers, how to concurrently handle time, safety and input constraints using novel forms of FxT-CLF-CBF-QPs, and how such concepts can be used to solve problems ranging from spatiotemporal control, to integrated planning and control with safety and recursive feasibility guarantees. Section <ref> presents our constructive methods for constraints with high-relative degree under disturbances and input constraints. We also introduce Input-Constrained CBFs, which are generalizations of High-Order CBFs.
Then, Section <ref> presents novel adaptation methods so that either the control-input coefficient is tuned online to prevent loss of controllability, or the parameters of the CBF condition are tuned online in order to reduce conservatism in the system response. Then, Section <ref> addresses the pointwise optimal character of CBF-induced control inputs by accounting for future trajectories, in a computationally-efficient way that checks for possible future safety violations, and adjusts the control action as needed. In Section <ref> we address implementation challenges such as how to preserve safety when using output feedback control and zero-order hold control, while Section <ref> covers the definition of Adversarially-Robust CBFs for multi-robot control. Finally, we present our approach on how to synthesize non-smooth CBFs when multiple constraints are present in <ref>. Concluding, we note some of our more recent and ongoing work in Section <ref>. Again, while we have cited relevant work of many of our fellow colleagues in the field, the references list is vastly incomplete. It is out of the scope of this paper to provide a thorough literature review. Interested readers are referred to <cit.> for recent comprehensive reviews on various topics related to CBFs, as well as to the survey papers in this special issue. § PRELIMINARIES: DEFINITION OF CONTROL BARRIER FUNCTIONS, SET INVARIANCE, AND BASIC QUADRATIC PROGRAM FOR SAFE CONTROL §.§ Notations The set of real numbers is denoted as ℝ and the non-negative real numbers as ℝ^+. Given x∈ℝ, y ∈ℝ^n_i, and z∈ℝ^n_i× m_i, |x| denotes the absolute value of x and ||y|| denotes the L_2 norm of y. The interior and boundary of a set 𝒞 are denoted by Int(𝒞) and ∂𝒞. The distance of a point x from a set 𝒞 is denoted |x|_𝒞 = min_y ∈𝒞 || x - y ||. For a∈ℝ^+, a continuous function α:[0,a)→[0,∞) is a class 𝒦 function if it is strictly increasing and α(0)=0. A continuous function α:(-b,a)→ (-∞,∞) for a,b∈ℝ^+ is an extended class 𝒦 function if it is strictly increasing and α(0)=0. Furthermore, if a=∞ and lim_r→∞α(r)=∞, then it is called extended class-𝒦_∞. The k-th time derivative of a function h(t,x):ℝ^+×ℝ^n→ℝ is denoted as h^(k). For brevity, we will refrain from mentioning explicit arguments whenever the context is clear. For example, h(x) may simply be denoted as h. The Lie derivative of a function h w.r.t. a function f is denoted as L_f h = ∂ h/∂ x f. §.§ Control Barrier Functions Consider the nonlinear control-affine dynamical system: ẋ = f(x) + g(x)u, where x∈𝒳⊂ℝ^n and u ∈𝒰⊂ℝ^m represent the state and control input, and f:𝒳→ℝ^n and g:𝒳→ℝ^m are locally Lipschitz continuous functions. The set 𝒮(t) of allowable states at time t is specified as an intersection of N sets 𝒮_i(t), i∈{1,2,..,N}, each of which is defined as the zero-superlevel set[Note that in certain sections of the current paper, as well as in many references in the related literature, the constrained set 𝒮_i(t) is defined as the zero-sublevel set of a constraint function h_i.] of a (sufficiently smooth) function h_i:ℝ^+ ×𝒳→ℝ as: 𝒮_i(t) = { x ∈𝒳 | h_i(t,x) ≥ 0 }, ∂𝒮_i(t) = { x∈𝒳 | h_i(t,x)=0 }, Int(𝒮_i)(t) = { x ∈𝒳 | h_i(t,x)>0 }. (Control Barrier Function) <cit.> For the dynamical system (<ref>), h_i: ℝ^+ ×𝒳→ℝ is a control barrier function (CBF) on the set 𝒮_i(t) defined by (<ref>)-(<ref>) for t≥ 0 if there exists a class-𝒦 function α_i: ℝ→ℝ^+ such that sup_u∈𝒰[ ∂ h_i(t,x)/∂ t + L_f h_i(t,x) + L_g h_i(t,x)u ] ≥ -α_i(h_i(t,x)) ∀ x∈𝒮_i, ∀ t>0. Henceforth, we refer to (<ref>) as the CBF derivative condition.
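Before formalizing the QP-based controllers below, the following minimal sketch (using cvxpy) illustrates how the CBF derivative condition is typically enforced pointwise, for single-integrator dynamics ẋ = u and a circular obstacle. Here h(x) = ||x - x_o||^2 - r^2, the class-𝒦 function is α(h) = k h, and u_ref is a nominal go-to-goal controller; all numerical values are illustrative, not taken from the papers discussed here.

import cvxpy as cp
import numpy as np

def safe_input(x, x_goal, x_obs, r=1.0, k=1.0, u_max=2.0):
    h = np.sum((x - x_obs)**2) - r**2     # barrier value (zero-superlevel safe set)
    grad_h = 2.0 * (x - x_obs)            # dh/dx; L_f h = 0 and L_g h = grad_h here
    u_ref = -1.0 * (x - x_goal)           # nominal controller, ignores safety
    u = cp.Variable(2)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)),
                      [grad_h @ u >= -k * h,            # CBF derivative condition
                       cp.norm(u, "inf") <= u_max])     # box input constraints
    prob.solve()
    return u.value

print(safe_input(np.array([-2.0, 0.1]), np.array([2.0, 0.0]), np.zeros(2)))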
(Set Invariance) <cit.> Given the dynamical system (<ref>) and a set 𝒮_i(t) defined by (<ref>)-(<ref>) for some continuously differentiable function h_i:ℝ^+ ×ℝ^n→ℝ, if h_i is a control barrier function on the set 𝒮_i(t), and there exists a u:ℝ_+×ℝ^n→ℝ^m, piecewise continuous in t and Lipschitz continuous in x, that satisfies ∂ h_i(t,x)/∂ t + L_f h_i(t,x) + L_g h_i(t,x)u ≥ -α_i(h_i(t,x)), ∀ x∈𝒮_i(t), ∀ t>0, then 𝒮_i(t) is forward invariant. If L_g h_i(t,x) ≡ 0 ∀ (t,x)∈ℝ^+×𝒳, then the control input u does not appear in the left-hand side of the CBF condition (<ref>). Suppose the relative degree of the function h_i w.r.t. the control input u under the dynamics (<ref>) is equal to r_i≥ 2. We can then define r_i functions as follows: ψ_i^0(t,x) = h_i(t,x), ψ_i^k(t,x) = ψ̇_i^k-1(t,x) + α_i^k (ψ^k-1_i(t,x)), k∈{1,2,…,r_i-1}, and denote their zero-superlevel sets, respectively, as: 𝒞_i(t) = {x | ψ_i^k(t,x)≥ 0, ∀ k∈{0,..,r_i-1}}. (Higher-Order CBF)[Definitions <ref> and <ref> were presented in their original papers for the time-invariant safe sets 𝒮_i. We note that an extension to the time-varying case can be proven with Nagumo's theorem applied to non-autonomous systems <cit.>, and hence we directly present that. This follows also the notation in <cit.>.] <cit.> The function h_i(t,x):ℝ^+×ℝ^n→ℝ is a Higher-Order CBF (HOCBF) of r_i-th order on the set 𝒞_i(t) if there exist r_i extended class-𝒦 functions α_i^k:ℝ→ℝ, k∈{1,2,..,r_i}, and an open set 𝒟_i(t)⊂ℝ^+ ×ℝ^n with 𝒞_i(t) ⊂𝒟_i(t) ⊂𝒳 such that ψ̇_i^r_i-1(t,x,u) ≥ -α_i^r_i (ψ_i^r_i-1(t,x)), ∀ x ∈𝒟_i(t), ∀ t≥ 0. Enforcing multiple constraints encoded via Control Lyapunov Functions <cit.> and (in general, high-order) Control Barrier Functions has commonly been addressed via the following class of controllers: (CLF-(HO)CBF-QP) (u, δ) = min_u∈𝒰, δ≥ 0 ||u-u_r(t,x)|| + M δ^2 s.t. V̇(t,x,u) ≤ -kV(t,x) + δ, ψ̇_i^r_i-1(t,x,u) ≥ -α_i^r_i (ψ_i^r_i-1(t,x)), i∈{1,2,..,N}, where u_r: ℝ^+×ℝ^n→ℝ^m is the reference control input, often designed without any regard to constraints, M∈ℝ^+ is a positive weight, V(t,x) a control Lyapunov function (CLF) encoding convergence objectives for the system trajectories, k ∈ℝ^+ is the exponential rate of convergence, and δ∈ℝ^+ is a slack variable used to relax the CLF constraint (<ref>). The optimization (<ref>) is a QP when the dynamics is control-affine as in (<ref>) and 𝒰 can be expressed in the form of a polytope Au≤ b, A∈ℝ^q× m, b∈ℝ^q× 1, q>0. § FIXED-TIME CONTROL BARRIER FUNCTIONS: SYNTHESIS UNDER TIME, INPUT AND SAFETY CONSTRAINTS Spatial constraints, i.e., constraints requiring the system trajectories to evolve in some safe set, while visiting some goal set(s), are typical in safety-critical applications. Furthermore, temporal constraints, i.e., constraints on the time of convergence, appear in time-critical applications, for instance, when a task must be completed within a fixed time due to an internal or an external deadline. Simultaneous satisfaction of the safety (i.e., spatial) and the convergence (i.e., temporal) requirements was first explored in <cit.> through a CLF-CBF-QP formulation, where the CLF (respectively, CBF) condition is used for the formulation of the convergence (respectively, safety) requirement. However, there are three main caveats to that approach. First, there is no consideration of input constraints or input bounds in the QP formulation, which might lead to scenarios where the computed input is not realizable in a real-world robotic system.
Second, the CLF condition is relaxed with a slack variable to ensure the feasibility of the QP, which might lead to a compromise on the performance. Third, the CLF condition can only ensure exponential convergence, and hence, it is not suitable for specifications with temporal requirements. The work in <cit.> introduced the notion of finite-time control barrier functions, which utilized the notion of finite-time stability from <cit.> under which the system trajectories converge to the equilibrium point within a finite time. While this time of convergence is finite, it depends on the initial condition and grows unbounded as the initial conditions go further away from the equilibrium point. Thus, this notion is also not useful for problems that impose a uniform requirement on the convergence time, irrespective of the initial conditions. The authors in <cit.> introduced another notion of faster convergence, termed fixed-time convergence, under which the time of convergence is not only finite, but also uniformly bounded for any initial conditions. Thus, building on the results in <cit.>, a new class of CLFs, termed FxT-CLFs, is introduced in <cit.> that can be used for encoding temporal constraints in the control design framework. In this section, we present a method to address temporal constraints (e.g., convergence to a goal region within a given time horizon) in addition to safety constraints (realized via CBFs) for nonlinear systems with bounded inputs. The main references for this section are <cit.>. §.§ Fixed-time Stability (FxTS) under Input Constraints To encode time constraints, we utilize a more recent notion of stability, termed fixed-time stability (FxTS) <cit.>, which requires that the system trajectories converge to the equilibrium within a given fixed time T < ∞. The following definition of FxTS and the corresponding Lyapunov conditions are adapted from <cit.>. Consider the autonomous dynamical system: ẋ(t) = f(x(t)), where x∈ℝ^n, f: 𝒟→ℝ^n is continuous on an open neighborhood 𝒟⊆ℝ^n of the origin and f(0)=0. The origin is an FxTS equilibrium of (<ref>) if it is Lyapunov stable and there exists a fixed time T such that lim_t→ Tx(t) = 0 for all x(0)∈ℝ^n, i.e., the trajectories converge to the origin within a fixed time T. The authors of <cit.> also presented Lyapunov conditions for the equilibrium of the unconstrained system (<ref>) to be FxTS. Suppose there exists a continuously differentiable, positive definite, radially unbounded function V:ℝ^n→ℝ such that V̇(x) ≤ -α_1V(x)^γ_1 - α_2V(x)^γ_2 holds for all x∈ℝ^n ∖{0}, with α_1, α_2>0, γ_1>1 and 0<γ_2<1. Then, the origin of (<ref>) is FxTS with continuous settling-time function T that satisfies: T ≤ 1/(α_1(γ_1-1)) + 1/(α_2(1-γ_2)). As illustrated in <cit.>, this Lyapunov result cannot be used for systems with input constraints. The modified Lyapunov conditions were given in <cit.> for FxTS under input constraints. Let V:ℝ^n→ℝ be a continuously differentiable, positive definite, radially unbounded function, satisfying V̇(x) ≤ -α_1V(x)^γ_1 - α_2V(x)^γ_2 + δ_1V(x) for all x∈ℝ^n∖{0} along the trajectories of (<ref>), with α_1, α_2>0, δ_1∈ℝ, γ_1 = 1+1/μ, γ_2 = 1-1/μ and μ>1.
Then, there exists a neighborhood D⊆ℝ^n of the origin such that for all x(0)∈ D, the closed-loop trajectories of (<ref>) reach the origin within a fixed time T, where D = ℝ^n if δ_1/(2√(α_1α_2)) < 1, and D = {x | V(x) ≤ k^μ((δ_1-√(δ_1^2-4α_1α_2))/(2α_1))^μ} if δ_1/(2√(α_1α_2)) ≥ 1; and T ≤ μπ/(2√(α_1α_2)) if δ_1/(2√(α_1α_2)) ≤ 0, T ≤ (μ/(α_1k_1))(π/2 - tan^-1(k_2)) if 0 ≤ δ_1/(2√(α_1α_2)) < 1, and T ≤ (μ/(α_1(b-a)))(log((b-ka)/(a(1-k))) - log(b/a)) if δ_1/(2√(α_1α_2)) ≥ 1, where 0<k<1, a < b are the solutions of γ(z) ≜ α_1z^2 - δ_1z + α_2 = 0, k_1 = √((4α_1α_2-δ_1^2)/(4α_1^2)) and k_2 = -δ_1/√(4α_1α_2-δ_1^2). For a constrained control system, a relation between the domain of attraction, the time of convergence, and the input bounds using the new Lyapunov conditions (<ref>) was developed in <cit.>. In brief, it was shown that the domain of attraction grows as the bounds on the input increase, or as the required time of convergence increases, which also matches the basic intuition. Readers interested in the proof of this theorem and a more detailed discussion on this topic are referred to <cit.>. Next, we illustrate how this modified Lyapunov condition naturally fits in a QP formulation for the concurrent problem of FxTS and safety, in the presence of input constraints. §.§ Concurrent FxTS and Safety Consider the nonlinear, control-affine system ẋ = f(x) + g(x)u, x(0) = x_0, where x∈ℝ^n is the state vector, f:ℝ^n→ℝ^n and g:ℝ^n→ℝ^n× m are system vector fields, continuous in their arguments, and u∈𝒰⊂ℝ^m is the control input vector, where 𝒰 is the input constraint set. Let h_S:ℝ^n→ℝ and h_G:ℝ^n→ℝ be continuously differentiable functions. Define the safe set S_S = {x | h_S(x)≤ 0} such that its boundary and its interior are given as ∂ S_S = {x | h_S(x) = 0} and int(S_S) = {x | h_S(x)<0}, respectively, to be rendered forward invariant under the closed-loop dynamics of (<ref>). Similarly, define the goal set S_G = {x | h_G(x)≤ 0} such that its boundary and its interior are given as ∂ S_G = {x | h_G(x) = 0} and int(S_G) = {x | h_G(x)<0}, respectively, to be reached by the closed-loop trajectories of (<ref>) in a user-defined fixed time T_ud>0. S_G⋂ S_S≠∅, the set S_G is compact, and the sets S_S and S_G have non-empty interiors. There exists a class-𝒦_∞ function α_G such that h_G(x)≥α_G(|x|_S_G) for all x∉ S_G. The QP formulation in <cit.> uses the old FxTS Lyapunov conditions from Theorem <ref> along with the CBF condition from Definition <ref> for concurrent safety and FxTS. However, that formulation is incapable of handling input constraints. The formulation in <cit.> uses the new FxTS Lyapunov conditions from Theorem <ref>, allowing incorporation of input constraints in the QP. The function h_G is termed an FxT-CLF if it satisfies the new FxTS Lyapunov conditions in Theorem <ref>, while the function h_S is termed a CBF if it satisfies the conditions in Definition <ref>.
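The piecewise settling-time bound in the theorem above is easy to evaluate in code. The following sketch implements it directly; alpha1, alpha2 > 0 and mu > 1 are assumed, and k in (0,1) is a user-chosen parameter for the third case (all numerical values in the example call are illustrative).

import math

def fxts_settling_time(alpha1, alpha2, delta1, mu, k=0.5):
    ratio = delta1 / (2.0 * math.sqrt(alpha1 * alpha2))
    if ratio <= 0:
        return mu * math.pi / (2.0 * math.sqrt(alpha1 * alpha2))
    if ratio < 1:
        k1 = math.sqrt((4*alpha1*alpha2 - delta1**2) / (4*alpha1**2))
        k2 = -delta1 / math.sqrt(4*alpha1*alpha2 - delta1**2)
        return mu / (alpha1 * k1) * (math.pi/2 - math.atan(k2))
    # ratio >= 1: a < b are the real roots of alpha1*z^2 - delta1*z + alpha2 = 0
    disc = math.sqrt(delta1**2 - 4*alpha1*alpha2)
    a, b = (delta1 - disc)/(2*alpha1), (delta1 + disc)/(2*alpha1)
    return mu/(alpha1*(b - a)) * (math.log((b - k*a)/(a*(1 - k))) - math.log(b/a))

print(fxts_settling_time(1.0, 1.0, 0.5, mu=2.0))   # illustrative values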
Next, we define the notion of the fixed-time domain of attraction for a compact set S⊂ℝ^n: For a compact set S_G⊂ℝ^n, the set D_S⊂ℝ^n, satisfying S_G⊂ D_S, is a Fixed-Time Domain of Attraction (FxT-DoA) with time T>0 for the closed-loop system (<ref>) under u, if i) for all x(0) ∈ D_S, x(t) ∈ D_S for all t∈ [0, T], and ii) there exists T_0∈ [0, T] such that lim_t→ T_0x(t) ∈ S_G. Design a control input u ∈𝒰 = {v∈ℝ^m | A_uv≤ b_u} and compute D⊂ℝ^n, so that for all x_0∈ D⊆ S_S, the closed-loop trajectories x(t) of (<ref>) satisfy x(t) ∈ S_S for all t≥ 0, and x(T_ud)∈ S_G, where T_ud>0 is a user-defined fixed time and D is an FxT-DoA for the set S_G.[Note that this problem can also be formulated using Signal Temporal Logic <cit.>, as stated in <cit.>.] In <cit.>, a QP-based feedback synthesis approach is presented to address Problem <ref>. Define z = [ v^T δ_1 δ_2 ]^T∈ℝ^m+2, and consider the QP: (FxT-CLF-CBF-QP) min_z∈ℝ^m+2 1/2 z^T H z + F^T z s.t. A_uv≤ b_u, L_fh_G(x) + L_gh_G(x)v ≤ δ_1h_G(x) - α_1max{0,h_G(x)}^γ_1 - α_2max{0,h_G(x)}^γ_2, L_fh_S(x) + L_gh_S(x)v ≤ -δ_2h_S(x), where H = diag{w_u_1,…, w_u_m, w_1, w_2} is a diagonal matrix consisting of positive weights w_u_i, w_i>0, and F = [ 0_m^T q_1 0 ]^T with q_1>0 and 0_m∈ℝ^m a column vector consisting of zeros. The parameters α_1, α_2, γ_1, γ_2 are chosen as α_1 = α_2 = μπ/(2T_ud), γ_1 = 1+1/μ and γ_2 = 1-1/μ with μ>1. The linear term F^Tz = q_1δ_1 in the objective function of (<ref>) penalizes the positive values of δ_1. Constraint (<ref>) imposes control input constraints. Constraint (<ref>) is imposed for convergence of the closed-loop trajectories of (<ref>) to the set S_G, and the constraint (<ref>) is imposed for forward invariance of the set S_S. The slack terms corresponding to δ_1, δ_2 allow the upper bounds of the time derivatives of h_G(x) and h_S(x), respectively, to have a positive term for x such that h_G(x)>0 and h_S(x)<0. With this setup and under certain conditions, it was shown in <cit.> that the QP (<ref>) is feasible (ensuring a control input exists), has a continuous solution (ensuring applicability of Nagumo's theorem for forward invariance), and guarantees both safety and FxTS from a domain that depends on the maximum value of the slack variable δ_1; a prototype implementation is sketched below. For simultaneous safety and FxT convergence, a subset D_S⊂ S_S of the FxT-DoA D of the set S_G can be defined so that its forward invariance per Lyapunov theorem results in safety, and it being a subset of the FxT-DoA results in FxT convergence (see Figure <ref>). We present a two-agent motion planning example under spatiotemporal specifications, where the robot dynamics are modeled under constrained unicycle dynamics as ẋ_i = u_icos(θ_i), ẏ_i = u_isin(θ_i), θ̇_i = ω_i, where [x_i y_i]^T∈ℝ^2 is the position vector of agent i for i∈{1, 2}, θ_i∈ℝ its orientation and [u_i ω_i]^T∈ℝ^2 the control input vector comprising the linear speed u_i∈ [0, u_M] and angular velocity |ω_i|≤ω_M. The closed-loop trajectories for the agents, starting from [x_1(0) y_1(0)]^T∈ C_1 = {z∈ℝ^2 | ‖z-[-1.5 1.5]^T‖_∞≤ 0.5} and [x_2(0) y_2(0)]^T∈ C_2 = {z∈ℝ^2 | ‖z-[1.5 1.5]^T‖_∞≤ 0.5}, respectively, are required to reach the sets C_2 and C_1, while staying inside the blue rectangle {z∈ℝ^2 | ‖z‖_∞≤ 2}, and outside the red-dotted circle {z∈ℝ^2 | ‖z‖_2 ≤ 1.5}, as shown in Figure <ref>. The agents are also required to maintain an inter-agent distance of at least d_m>0 at all times. Figure <ref> shows the simulation case study of a similar, yet extended, problem with single integrator dynamics.
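Returning to the QP (<ref>) above, the following sketch assembles its constraints in cvxpy for a given state (so that h_G, h_S and the Lie derivatives are numbers/vectors supplied by the user); the box input set, weights, and the example call are all illustrative assumptions.

import cvxpy as cp
import numpy as np

def fxt_clf_cbf_qp(LfhG, LghG, hG, LfhS, LghS, hS, T_ud, mu=2.0,
                   u_max=1.0, w_u=1.0, w1=1.0, w2=1.0, q1=10.0, m=2):
    alpha = mu * np.pi / (2.0 * T_ud)            # alpha_1 = alpha_2
    g1, g2 = 1.0 + 1.0/mu, 1.0 - 1.0/mu          # gamma_1, gamma_2
    v, d1, d2 = cp.Variable(m), cp.Variable(), cp.Variable()
    cost = w_u*cp.sum_squares(v) + w1*cp.square(d1) + w2*cp.square(d2) + q1*d1
    cons = [cp.norm(v, "inf") <= u_max,          # input constraints A_u v <= b_u (box)
            LfhG + LghG @ v <= d1*hG - alpha*max(0.0, hG)**g1 - alpha*max(0.0, hG)**g2,
            LfhS + LghS @ v <= -d2*hS]           # CBF constraint with slack d2
    cp.Problem(cp.Minimize(cost), cons).solve()
    return v.value, d1.value, d2.value

# illustrative call with made-up Lie derivatives at some state:
print(fxt_clf_cbf_qp(0.0, np.array([1.0, 0.0]), 2.0,
                     0.0, np.array([0.0, 1.0]), -1.0, T_ud=5.0))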
Figure <ref> shows the simulation case study of a similar, yet extended, problem with single-integrator dynamics. Furthermore, Figure <ref> plots the inputs as well as the inter-agent distance, illustrating that the input constraints are always respected and that the inter-agent distance always remains above the minimum allowed value, so that the system is always safe.

§.§ Robust FxTS and Robust Safety

Next, we discuss how robustness to unmodeled phenomena and measurement noise can be taken into consideration during control design. For this, consider a perturbed dynamical system: ẋ(t) = f(x(t)) + g(x(t))u + d(t,x), where x∈ℝ^n and u∈𝒰⊂ℝ^m are the state and the control input vectors, respectively, with 𝒰 the control input constraint set, f:ℝ^n→ℝ^n and g:ℝ^n→ℝ^n× m are continuous functions, and d:ℝ_+×ℝ^n→ℝ^n is an unknown additive disturbance. The following assumption is made. [Disturbance bound] There exists γ>0 such that for all t≥0 and x∈𝒟⊂ℝ^n, ‖d(t,x)‖≤γ, where 𝒟 is a compact domain. Encoding safety in the presence of disturbances can be done using robust CBFs <cit.>. In these works, however, only added process noise, or uncertainty in the state dynamics as in (<ref>), is considered, and robust variants of the FxT-CLF and CBF are introduced to guarantee convergence to a neighborhood of the goal set and safety. Here we also take into account the effect of sensor noise and measurement uncertainties. More specifically, consider that only an estimate of the system state, denoted x̂, is available, satisfying: x̂̇ = f(x̂) + g(x̂)u. The following assumption is made on the state-estimation error x-x̂. [Estimation error bound] There exists an ϵ>0 such that ‖x̂(t)-x(t)‖≤ϵ for all t≥0. Robust variants of the FxT-CLF and of the CBF are then proposed in <cit.> as follows. Corresponding to the set S(t) = {x | h(t,x)≤0}, where h:ℝ_+×ℝ^n→ℝ is continuously differentiable, define Ŝ_ϵ(t) = {x̂ | h(t,x̂)≤ -lϵ}, where l = sup‖∂h(t,x)/∂x‖ is the Lipschitz constant of the function h. Inspired by <cit.>, the notion of a robust CBF is defined as follows. A continuously differentiable function h:ℝ_+×ℝ^n→ℝ is called a robust CBF for (<ref>) with respect to a disturbance d satisfying Assumption <ref> if there exists a locally Lipschitz class-𝒦 function α such that the following condition holds: inf_u∈𝒰{L_fh(t,x(t)) + L_gh(t,x(t))u + ∂h/∂t(t,x(t))} ≤ α(-h(t,x(t))) - lγ, for all x(t)∈ S(t) and t≥0. Note that the worst-case bound lγ on the term L_dh(t,x) can be relaxed if more information than just the upper bound of the disturbance is known, or it can be adapted online to reduce conservatism. Some relevant work has been presented in <cit.>. The existence of a robust CBF implies forward invariance of the set S(t) for all t≥0, assuming that the system trajectories start from an initial x(0) such that the measured or estimated state satisfies x̂(0)∈Ŝ_ϵ(0). Similarly, we can define the notion of a robust FxT-CLF to guarantee FxTS of the closed-loop trajectories to the goal set. Consider a continuously differentiable function V:ℝ^n→ℝ with Lipschitz constant l_V. The function V is called a Robust FxT-CLF-S for a set S with respect to a disturbance d satisfying Assumption <ref> if V is positive definite and radially unbounded with respect to the set S, V(x)<0 for x∈int(S), and it satisfies inf_u∈𝒰{L_fV(x) + L_gV(x)u} ≤ -α_1V(x)^γ_1 - α_2V(x)^γ_2 + δ_1V(x) - l_Vγ, with α_1, α_2>0, δ_1∈ℝ, γ_1 = 1+1/μ, γ_2 = 1-1/μ for μ>1, along the trajectories of (<ref>). Using the mean value theorem, the following inequality can be obtained: V(x) ≤ V(x̂) + l_Vϵ, which implies that if V(x̂) ≤ -l_Vϵ, then V(x)≤0.
Based on this, it is shown in <cit.> that the existence of a robust FxT-CLF for the set S_G implies the existence of a neighborhood D of the set S_G such that fixed-time convergence of the closed-loop trajectories of x is guaranteed for all initial conditions x(0) such that the estimated state satisfies x̂(0)∈ D. To encode safety with respect to a general time-varying safe set, let h_T:ℝ_+×ℝ^n→ℝ be a continuously differentiable function defining the time-varying safe set S_T(t) = {x | h_T(t,x)≤0}. Now we are ready to present the QP formulation for robust control synthesis under input constraints. For the sake of brevity, we omit the arguments x̂ and (t,x̂). Define z = [v^T δ_1 δ_2 δ_3]^T∈ℝ^m+3, and consider the following optimization problem:

(Robust FxT-CLF-CBF QP) min_z∈ℝ^m+3 (1/2)z^THz + F^Tz s.t. A_uv ≤ b_u, L_fĥ_G + L_gĥ_Gv ≤ δ_1ĥ_G - α_1 max{0,ĥ_G}^γ_1 - α_2 max{0,ĥ_G}^γ_2 - l_Gγ, L_fĥ_S + L_gĥ_Sv ≤ -δ_2ĥ_S - l_Sγ, L_fĥ_T + L_gĥ_Tv ≤ -δ_3ĥ_T - ∂ĥ_T/∂t - l_Tγ,

where H = diag{{w_u_l}, w_1, w_2, w_3} is a diagonal matrix consisting of positive weights w_u_l, w_1, w_2, w_3>0 for l = 1,2,…,m, F = [0_m^T q 0 0]^T with q>0, and the functions ĥ_G, ĥ_S (respectively, ĥ_T) are functions of x̂ (respectively, (t,x̂)) defined as follows: for any function ϕ:ℝ_+×ℝ^n→ℝ with Lipschitz constant l_ϕ, define ϕ̂(t,·) = ϕ(t,·) + l_ϕϵ. The parameters α_1, α_2, γ_1, γ_2 are chosen as α_1 = α_2 = μπ/(2T̅), γ_1 = 1+1/μ, and γ_2 = 1-1/μ, with μ>1 and T̅ the user-defined time in Problem <ref>. With this robust control design framework, under some technical assumptions and conditions, it is shown in <cit.> that the QP (<ref>) is feasible, that its solution is continuous, and that it results in both safety of S_S and FxTS of S_G, from a domain of attraction that depends on the maximum value of the slack variable δ_1. In the interest of space we do not include a detailed case study here; interested readers are referred to <cit.>, which includes the problem of navigating multiple nonlinear underactuated marine vehicles while respecting visual sensing constraints, avoiding collisions (encoding safety constraints), and moving towards desired destinations (encoding convergence constraints) under additive disturbances (currents) and navigation (state-estimation) error. The closed-loop paths of four vehicles are shown in Fig. <ref>.
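Before moving to the case study, the following sketch shows how the robust margins enter one safety row of the QP above: the constraint function is evaluated at the estimate x̂ and inflated by lϵ, while the disturbance bound contributes the -lγ tightening on the right-hand side. All callables are placeholders.

\begin{verbatim}
def robust_cbf_row(x_hat, f, g, h, grad_h, lip_h, gamma, eps, delta2):
    """Assemble the robust-safety row  L_f h_hat + L_g h_hat v <= rhs
    of the Robust FxT-CLF-CBF QP at the estimate x_hat (sketch).

    lip_h: Lipschitz constant l of h; gamma: disturbance bound;
    eps: estimation-error bound; delta2: slack value (a decision
    variable in the actual QP, passed as a number here).
    """
    h_hat = h(x_hat) + lip_h * eps         # inflated function h_hat = h + l*eps
    Lf = grad_h(x_hat) @ f(x_hat)          # L_f h at the estimate
    Lg = grad_h(x_hat) @ g(x_hat)          # L_g h at the estimate
    rhs = -delta2 * h_hat - lip_h * gamma  # robust tightening  -l*gamma
    return Lf, Lg, rhs                     # row reads: Lf + Lg @ v <= rhs
\end{verbatim}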
For the robust control design case, we consider a numerical case study involving underactuated autonomous underwater vehicles with state X_i∈ℝ^6, modeled as

[ẋ_i; ẏ_i; ϕ̇_i; m_11u̇_i; m_22v̇_i; m_33ṙ_i] = [u_i cosϕ_i - v_i sinϕ_i + V_w cos(θ_w); u_i sinϕ_i + v_i cosϕ_i + V_w sin(θ_w); r_i; m_22v_ir_i + X_uu_i + X_u|u||u_i|u_i; -m_11u_ir_i + Y_vv_i + Y_v|v||v_i|v_i; (m_11-m_22)u_iv_i + N_rr_i + N_r|r||r_i|r_i] + [0; 0; 0; τ_u,i; 0; τ_r,i],

where z_i = [x_i, y_i, ϕ_i]^T is the configuration vector of the i-th agent, [u_i, v_i, r_i]^T are the velocities (linear and angular) w.r.t. the body-fixed frame, τ_i = [τ_u,i, τ_r,i]^T is the control input vector, with τ_u,i and τ_r,i the control inputs along the surge (x-axis) and yaw degrees of freedom, respectively, X_u, Y_v, N_r are the linear drag terms, and X_u|u|, Y_v|v|, N_r|r| are the nonlinear drag terms (see <cit.> for more details). The additive disturbance d = [V_w(X_i,t)cos(θ_w(X_i,t)); V_w(X_i,t)sin(θ_w(X_i,t))], with |V_w(X_i,t)|≤γ, models the effect of an unknown, time-varying water current acting on the system dynamics of each agent. We also consider measurement uncertainties in the state estimates, as stated in Assumption <ref>. The system dynamics are underactuated, since there is no control input in the sway degree of freedom (y-axis). The multi-task problem considered for the case study is as follows (see Figure <ref>): compute τ_i∈𝒰_i = [-τ_u,m, τ_u,m]×[-τ_r,m, τ_r,m], τ_u,m, τ_r,m>0, such that each agent (i) reaches an assigned goal region around a point g_i∈ℝ^2 within a user-defined time T; (ii) keeps its respective point of interest p_i∈ℝ^2 in its field of view (given as a sector of radius R>0 and angle α>0); and (iii) maintains a safe distance d_s w.r.t. the other agents. Here, ∠(·) denotes the angle of the vector (·) with respect to the x-axis of the global frame. Note that (ii) requires safety with respect to a static safe set, while (iii) requires safety with respect to a time-varying safe set. The parameters used in the case study are given in Table <ref>. First, we construct a CLF and CBFs to guarantee convergence to the desired location and invariance of the required safe sets, respectively. Consider the function h_ij = d_s^2 - ‖[x_i(t) y_i(t)]^T - [x_j(t) y_j(t)]^T‖^2, defined for i≠j, so that h_ij≤0 implies that the agents maintain the safe distance d_s. Since the function h_ij is of relative degree two with respect to the dynamics (<ref>), we use the second-order safety condition discussed in <cit.>. Similarly, for keeping the point of interest in the field of view, we use two separate CBFs, defined as h_ϕ = |∠(p_i-[x_i y_i]^T) - ϕ_i|^2 - α^2 and h_R = ‖[x_i y_i]^T - p_i‖^2 - R^2, so that h_ϕ(z_i)≤0, h_R(z_i)≤0 implies that z_i∈ℱ. For h_ϕ, h_R, we use the relative-degree-two (higher-order CBF) condition <cit.>. Finally, we define the CLF as V = (1/2)(X_i-X_di)^T(X_i-X_di), where X_i∈ℝ^6 is the state vector of the i-th agent and X_di∈ℝ^6 its desired state, defined as X_di = [g_i; θ_g; c_1‖g_i-[x_i y_i]^T‖cos(θ_g-ϕ_i); c_1‖g_i-[x_i y_i]^T‖sin(θ_g-ϕ_i); c_2(θ_g-ϕ_i)], where θ_g = ∠(g_i-[x_i y_i]^T) and c_1, c_2>0 are constants. We consider 4 agents in the numerical simulations. First, we fix γ = 0.5 and ϵ = 0.5. Figure <ref> shows the paths traced by the 4 agents. The solid circular region represents the goal set, defined as {X | V(X)≤0.1}, and the square boxes denote the points of interest p_i for each agent.[A video of the simulation is available at: https://tinyurl.com/y32oa4p4.] Figure <ref> plots V_M = max_i{V_i}, showing the convergence of the agents to their respective goal sets while satisfying all the safety constraints, as can be seen from Figure <ref>, which plots h_M = max{h_ij, h_R, h_ϕ} and shows that all the CBFs are non-positive at all times in all three cases.
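The three constituent barrier functions of this case study can be evaluated as in the sketch below; the angle-wrapping step is an addition for numerical robustness, and all arguments are user-supplied values.

\begin{verbatim}
import numpy as np

def fov_and_collision_cbfs(z_i, pos_j, p_i, R, alpha, d_s):
    """Evaluate h_ij, h_phi, h_R for agent i (sketch); non-positive
    values mean the corresponding constraint is satisfied.

    z_i = [x_i, y_i, phi_i]: configuration of agent i; pos_j: position
    of another agent j; p_i: point of interest; R, alpha: radius and
    angle of the field-of-view sector; d_s: safety distance.
    """
    pos_i = z_i[:2]
    # collision: h_ij <= 0 iff ||pos_i - pos_j|| >= d_s
    h_ij = d_s**2 - np.sum((pos_i - pos_j)**2)
    # field of view: bearing error to the point of interest, wrapped
    bearing = np.arctan2(p_i[1] - pos_i[1], p_i[0] - pos_i[0])
    err = np.arctan2(np.sin(bearing - z_i[2]), np.cos(bearing - z_i[2]))
    h_phi = err**2 - alpha**2
    # field of view: range to the point of interest
    h_R = np.sum((pos_i - p_i)**2) - R**2
    return h_ij, h_phi, h_R
\end{verbatim}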
§.§ Extensions and Relevant Work

One of the limitations of CLF-CBF QPs for convergence and safety, as analyzed in <cit.>, is the existence of stable undesirable equilibria. Furthermore, while CBF-based approaches can guarantee step-wise safety (i.e., safety at each step), and hence are termed myopic in nature <cit.>, they cannot guarantee that the system trajectories will not enter a region in the future from where safety cannot be guaranteed. Without proper knowledge of the control invariant set, a wrongly chosen barrier function might lead to infeasibility of the CBF-QP and, as a result, to violation of safety. To circumvent these issues, combining a high-level planner with a low-level controller has become a popular approach <cit.>. The underlying idea in these strategies is to design low-level controllers to track a reference trajectory computed by a high-level planner using a simplified model. However, it is important that the low-level controller be able to track the trajectory generated by the high-level planner within the time dictated by the update frequency of the high-level planner. To this end, the notion of FxTS is utilized in <cit.>, where an FxT-CLF-CBF-QP-based low-level controller guarantees that the trajectories remain in the domain of attraction of the next waypoint and reach it before the next high-level planning update occurs. In turn, a model predictive control (MPC)-based high-level planner utilizes the FxT-DoA to generate trajectories so that the low-level QP is guaranteed to remain feasible. This way, the low-level controller helps guarantee the recursive feasibility of the MPC, and the high-level planner helps guarantee the feasibility of the QP, thereby guaranteeing that the underlying problem can be solved. In <cit.>, we also introduced a new notion of safety, termed Periodic Safety, where the system trajectories are required to enter or visit a set (say, 𝒳_T) periodically (say, with period T>0) while remaining in a safe set 𝒳 at all times. In the interest of space, we skip the technical details of the hierarchical framework and briefly discuss a case study that illustrates the utility of such an approach. We use the proposed strategy to steer a Segway to the origin.[Code available at https://github.com/kunalgarg42/fxts_multi_rate] The states of the system are the position p, the velocity v, the rod angle θ, and the angular velocity ω. The control action is the voltage commanded to the motor, and the equations of motion used to simulate the system can be found in <cit.>. In this simulation, we run the high-level MPC planner at 5 Hz and the low-level controller at 10 kHz. We choose the set 𝒳_T = {x = [p, v, θ, ω]^T | |p|≤10, |v|≤5, |θ|≤0.3, |ω|≤10π}, 𝒳_F = {0}, and input bounds |u|≤25 with u_m≤15. From Figure <ref>, the main takeaway is that, periodically, using the proposed FxT-CLF-QP, the closed-loop trajectories reach the set from where feasibility of the MPC is guaranteed (denoted 𝒞_i for the i-th MPC step). In contrast, an exponentially stabilizing controller fails to do so, resulting in infeasibility of the MPC. This demonstrates the efficacy of the proposed framework over existing methods that use exponentially stabilizing controllers.
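The structure of the resulting two-rate loop can be summarized by the following skeleton, in which mpc_plan, qp_step, and integrate are hypothetical stand-ins for the high-level planner, the low-level FxT-CLF-CBF-QP, and the plant integrator, respectively.

\begin{verbatim}
def run_hierarchy(x0, t_final, mpc_plan, qp_step, integrate):
    """Skeleton of the multi-rate loop: MPC at 5 Hz, QP at 10 kHz (sketch)."""
    T_MPC, T_QP = 1.0 / 5, 1.0 / 10_000
    x, t = x0, 0.0
    while t < t_final:
        waypoint = mpc_plan(x)        # high-level plan, exploits the FxT-DoA
        for _ in range(int(T_MPC / T_QP)):
            # the low-level QP must reach the next waypoint's DoA within T_MPC
            u = qp_step(x, waypoint, T_ud=T_MPC)
            x = integrate(x, u, T_QP)
            t += T_QP
    return x
\end{verbatim}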
§ INPUT-CONSTRAINED CONTROL BARRIER FUNCTIONS: SYNTHESIS UNDER HIGH RELATIVE DEGREE AND DISTURBANCES

§.§ Constructive Methods for Higher-Order CBFs under Disturbances and Input Constraints

High-order CBFs (HOCBFs) were first introduced in <cit.> and extended to be robust in <cit.>. However, a limitation of all these works is that it is unclear how to choose the class-𝒦 functions in the HOCBF construction. When there are no input constraints, choosing these functions is equivalent to tuning the control law. When there are input constraints, these functions determine the size and shape of the CBF set, and thus must be chosen carefully to ensure satisfaction of (<ref>), here modified in Definition <ref> to include robustness. Thus, the objective of <cit.> is to develop constructive methods to choose these functions. This section presents one such method from our work in <cit.>, and the interested reader is referred to <cit.> for two additional methods. Related works also include, non-exhaustively, <cit.>, and the following method is further extended to high-order robust sampled-data CBFs in <cit.>. Consider the time-varying control-affine model ẋ = f(t,x) + g(t,x)(u+w_u) + w_x ≜ F(t,x,u,w_u,w_x), with time t∈𝒯=[t_0,∞), state x∈ℝ^n, control input u∈𝒰⊂ℝ^m, where 𝒰 is compact, unknown disturbances w_u∈ℝ^m and w_x∈ℝ^n that are continuous in time, and functions f:𝒯×ℝ^n→ℝ^n and g:𝒯×ℝ^n→ℝ^n× m that are piecewise continuous in t and locally Lipschitz continuous in x. Let w_u and w_x be bounded as ‖w_u‖≤ w_u,max and ‖w_x‖≤ w_x,max for some w_u,max, w_x,max∈ℝ_≥0, and define the set of allowable disturbances 𝒲 ≜ {w_u∈ℝ^m | ‖w_u‖≤ w_u,max}×{w_x∈ℝ^n | ‖w_x‖≤ w_x,max}. Assume a unique solution to (<ref>) exists for all t∈𝒯. Given the dynamics (<ref>), a function η:𝒯×ℝ^n→ℝ is said to be of relative degree r if it is r-times totally differentiable in time and η^(r) is the lowest-order derivative in which u and w_u appear explicitly. Denote the set of all relative-degree-r functions as 𝒢^r. Let h:𝒯×ℝ^n→ℝ, h∈𝒢^r, denote the constraint function, and define a safe set 𝒮 as 𝒮(t) ≜ {x∈ℝ^n | h(t,x)≤0}, where we will henceforth drop the argument t for compactness. Also, denote the safe set across time as 𝒮^𝒯 ≜ {(t,x)∈ℝ×ℝ^n | t∈𝒯, x∈𝒮(t)}. Our aim is to develop methods for rendering the state trajectory always inside the safe set 𝒮 in the presence of any allowable disturbances (w_u,w_x)∈𝒲. We will do this by constructing functions H:𝒯×ℝ^n→ℝ that generate sets of the form 𝒮_H(t) ≜ {x∈ℝ^n | H(t,x)≤0}, 𝒮_H^res(t) ≜ {x∈ℝ^n | H(t,x)≤0 and h(t,x)≤0}, visualized in Fig. <ref>. We refer to the set 𝒮_H as an inner safe set (or also a CBF set), and to the set 𝒮_H^res as a restricted safe set. Note that if H(t,x)≥h(t,x) for all (t,x)∈𝒯×ℝ^n, then 𝒮_H ≡ 𝒮_H^res. A controller is said to render 𝒮_H^res forward invariant if, given any x(t_0)∈𝒮_H^res(t_0), the closed-loop trajectory satisfies x(t)∈𝒮_H^res(t), ∀t∈𝒯. In general, there may exist points x(t_0)∈𝒮(t_0) from which we will not be able to render 𝒮 forward invariant under (<ref>). Nevertheless, if we can render the subset 𝒮_H^res⊆𝒮 forward invariant, then we can ensure that the closed-loop trajectories of (<ref>) are safe (i.e., always stay in 𝒮) for initial conditions lying in the set 𝒮_H^res. Thus, a crucial requirement is that x(t_0)∈𝒮_H^res(t_0). We also define the domains 𝒮_H^𝒯 ≜ {(t,x)∈ℝ×ℝ^n | t∈𝒯, x∈𝒮_H(t)} and 𝒮_H^res,𝒯 ≜ {(t,x)∈ℝ×ℝ^n | t∈𝒯, x∈𝒮_H^res(t)}, similar to 𝒮^𝒯.

§.§.§ Robust HOCBF Definition

Here, we break from the HOCBF convention and instead work with first-order CBFs. We also only consider relative-degree-2 constraint functions presently, though <cit.> also presents one method for higher relative degrees. Let ∂_t denote the partial derivative in time t and ∇ the gradient in the state x. For the system (<ref>), a continuously differentiable function H:𝒯×ℝ^n→ℝ is a robust control barrier function (RCBF) on a time-varying set 𝒳 if there exists a function α∈𝒦 such that, ∀x∈𝒳(t), t∈𝒯, max_(w_u,w_x)∈𝒲[inf_u∈𝒰(∂_tH(t,x) + ∇H(t,x)[f(t,x) + g(t,x)(u+w_u) + w_x])] ≤ α(-H(t,x)).
Based on Definition <ref>, we also define for compactness W(t,x) ≜ max_(w_u,w_x)∈𝒲 ∇H(t,x)(g(t,x)w_u + w_x) ≡ ‖∇H(t,x)g(t,x)‖w_u,max + ‖∇H(t,x)‖w_x,max. The set of control inputs such that (<ref>) is satisfied is then μ_rcbf(t,x) ≜ {u∈𝒰 | ∂_tH(t,x) + ∇H(t,x)(f(t,x) + g(t,x)u) ≤ α(-H(t,x)) - W(t,x)}. Note that since Definition <ref> considers the allowable control set 𝒰, if H is an RCBF on 𝒳, then μ_rcbf(t,x) is nonempty for all x∈𝒳(t), t∈𝒯.

§.§.§ One Method for Constructing an HOCBF

For the system (<ref>), if h is of relative degree 2, note that ḣ is a function of w_x, and ḧ is a function of w_u and w_x, and thus they are not precisely known. Thus, define the following upper bound on ḣ: ḣ_w(t,x) ≜ max_‖w_x‖≤w_x,max ḣ(t,x,w_x) = ∂_th(t,x) + ∇h(t,x)f(t,x) + ‖∇h(t,x)‖w_x,max, and its derivative ḧ_w(t,x,u,w_u,w_x) = d/dt[ḣ_w(t,x)] = ∂_tḣ_w(t,x) + ∇ḣ_w(t,x)F(t,x,u,w_u,w_x). Note that ḣ_w is a known quantity, while ḧ_w is still a function of the unknown quantities w_u, w_x in F. For a relative-degree-2 constraint, we can intuitively describe h as the position of an agent with respect to an obstacle, ḣ its velocity, and ḧ its acceleration, where the acceleration is the controlled variable. Given some maximal amount of control authority encoded in 𝒰, suppose that there exists some function ϕ:ℝ→ℝ such that max_(w_u,w_x)∈𝒲 inf_u∈𝒰 ḧ_w(t,x,u,w_u,w_x) ≤ ϕ(h(t,x)) < 0. This is a reasonable assumption for many systems, since intuitively ϕ represents the effects of the other forces/accelerations in the environment. Given models of these forces, one can often read the function ϕ directly from the dynamics. If no such function ϕ exists, it may instead be possible to find such a function ϕ for a tighter constraint function, e.g., h^† = h + γ. We then have the following theorem. Let h∈𝒢^2 define a safe set as in (<ref>). Suppose there exists an invertible, continuously differentiable, and strictly monotone decreasing function Φ:ℝ→ℝ, whose derivative is Φ' = ϕ for ϕ:ℝ→ℝ, such that (<ref>) holds ∀(t,x)∈𝒮^𝒯. Let Φ^-1 be the function for which Φ^-1(Φ(λ)) = λ, ∀λ∈ℝ. Then the function H(t,x) = Φ^-1(Φ(h(t,x)) - (1/2)ḣ_w(t,x)|ḣ_w(t,x)|) is an RCBF on 𝒮_H^res in (<ref>) for the system (<ref>) for any α∈𝒦. Moreover, any control law u(t,x) such that u(t,x)∈μ_rcbf(t,x), ∀(t,x)∈𝒮_H^res,𝒯, also renders 𝒮_H^res forward invariant. That is, if (<ref>) holds, then we have a constructive way to find a CBF H as a function of h and ḣ_w. Note that the intersection of sets 𝒮_H^res = 𝒮_H∩𝒮 is analogous to the intersection C_1∩C_2 in <cit.>. See <cit.> for more information.

§.§.§ Case Study and Remarks

To see Theorem <ref> in practice, consider the system with state x = [r, ṙ]∈ℝ^6 and dynamics r̈ = -μr/‖r‖^3 + u for μ = 6.26(10)^10, and constraint function h = ρ - ‖r‖ for ρ = 4.76(10)^5. Let 𝒰 = {u∈ℝ^3 | ‖u‖_∞≤u_max}. For this constraint function, it holds that ḧ_w ≤ μ/‖r‖^2 - r^T(u + w_x + w_u)/‖r‖. It follows that max_(w_u,w_x)∈𝒲 inf_u∈𝒰 ḧ_w ≤ μ/(ρ-h)^2 + w_x,max + w_u,max - u_max ≜ ϕ(h). Assuming that (<ref>) is always negative for h≤0, let Φ be any anti-derivative of ϕ in (<ref>), such as Φ(λ) = μ/(ρ-λ) + (w_u,max + w_x,max - u_max)λ; then Theorem <ref> guarantees that H as in (<ref>) with (<ref>) is an RCBF. Thus, we have a constructive method for designing an RCBF for this system. This RCBF was then used in simulation in <cit.> and <cit.>.
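As a numerical companion to this construction, the sketch below evaluates H for the case study. The input bound u_max is an assumed illustrative value (with w_u,max = w_x,max = 0), and Φ is inverted by root finding on its strictly decreasing branch.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

MU, RHO = 6.26e10, 4.76e5     # case-study parameters
U_MAX, W = 10.0, 0.0          # assumed input bound; no disturbance here

def Phi(lam):
    # anti-derivative of phi(h) = MU/(RHO - h)**2 + W - U_MAX
    return MU / (RHO - lam) + (W - U_MAX) * lam

def Phi_inv(y, lo=-1e9, hi=3.0e5):
    # Phi is strictly decreasing on (lo, hi); widen the bracket if needed
    return brentq(lambda lam: Phi(lam) - y, lo, hi)

def H(h, hdot_w):
    """RCBF value H = Phi^{-1}(Phi(h) - 0.5 * hdot_w * |hdot_w|)."""
    return Phi_inv(Phi(h) - 0.5 * hdot_w * abs(hdot_w))

# example: a state 1000 units inside the safe set, closing at rate 10
print(H(-1000.0, 10.0))
\end{verbatim}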
We now consider how the above approach relates to the more widely used HOCBF formulation in <cit.> and to the Exponential CBF (ECBF) formulation in <cit.> (which is a special case of <cit.>). Given a relative-degree-2 constraint function h meeting the assumptions of Theorem <ref>, h is also an HOCBF with ψ_0 = h, ψ_1 = ḣ - α_1(-h), ψ_2 = ḧ + α_1'(-h)ḣ - α_2(-ψ_1), with the choice α_1(λ) = √(2(Φ(-λ) - Φ(0))) and with α_2∈𝒦 as a free variable. Thus, one can map between the approaches in <cit.> and <cit.>. However, the choice of α_1 in (<ref>) is 1) non-obvious without the above analysis, and 2) violates the Lipschitzness assumptions present in <cit.>. Also, while our method is constructive, it is conservative in the sense that it results in a CBF that is valid for any class-𝒦 function α in (<ref>), and as a result is valid for any α_2 using the conventions of <cit.>; Theorem <ref> could potentially be applicable to a wider class of systems if this were relaxed. Lastly, the ECBF is an HOCBF that uses only linear class-𝒦 functions. We note that if ḧ is bounded (as is usually the case when 𝒰 is compact), then the ECBF can only be used with compact safe sets. This is because the ECBF, similar to a linear control law, requires stronger accelerations ḧ, and hence larger u, as the state moves further from the safe-set boundary. By contrast, the CBF in Theorem <ref> works with an unbounded safe set and admits an unbounded inner safe set, while still only commanding signals u∈𝒰 everywhere in the CBF set.

§.§.§ Robustly Reachable Sets

Finally, since the system (<ref>) is uncertain and the control (<ref>) always considers the worst-case disturbance, it is worth considering the set of states that the system might actually reach. Frequently, the system evolution can be divided into arcs where either 1) the CBF condition is inactive, or 2) the CBF condition is satisfied with equality. Consider the behavior in the latter case. Suppose H:𝒯×ℝ^n→ℝ is an RCBF on the set 𝒮_H(t) (or 𝒮_H^res if using H in (<ref>)) for the system (<ref>). Suppose there exist constants η_1, η_2>0 such that W in (<ref>) satisfies W(t,x)∈[η_1,η_2], ∀x∈𝒮_H(t), t∈𝒯. Let α_w∈𝒦, and suppose H(t_0,x(t_0))≤0. Then any control law u(t,x) that satisfies ∂_tH(·) + ∇H(·)(f(t,x) + g(t,x)u) + W(t,x) = α_w(-H(·))W(t,x) will cause the system trajectory to asymptotically approach the set {x∈ℝ^n | H(t,x)∈[-α_w^-1(2), 0]}. That is, by varying α_w, we can tune how close to the boundary of 𝒮_H the trajectories will approach. This theorem is put to use in <cit.> to achieve satisfaction of a so-called “tight-tolerance” objective with RCBFs. See also <cit.>. The above result is also closely related to the definition of “physical margin” in Section <ref>, as robustness to unknown disturbances is closely related to robustness to inter-sampling uncertainty.

§.§.§ Open Problems

The above results, those in <cit.>, and the references therein demonstrate that CBFs can be applied to relative-degree-2 systems with input constraints. However, there are still many systems that do not satisfy the conditions in the current literature, and thus developing CBFs for these systems remains an open problem. Additionally, the above work only applies to one constraint function and CBF at a time. Working with multiple CBFs simultaneously and in the presence of input constraints is also an open problem. We refer the reader to <cit.> for some constructive, albeit preliminary, methods towards this problem.

§.§ Input-Constrained Control Barrier Functions

As discussed, designing Control Barrier Functions for general nonlinear control-affine systems is challenging.
When the system is also input constrained, this becomes even more challenging, since there can be regions of the state space where the CBF condition (<ref>) is instantaneously satisfied, but from which the system will eventually reach the boundary of the safe set and then exit it. In <cit.> we proposed a technique to isolate such states and identify an inner safe set that can be rendered forward invariant under the input constraints. Consider the dynamical system (<ref>) with bounded control inputs u∈𝒰 and a safe set 𝒮 defined by a function h:𝒳→ℝ, as per (<ref>). Assume h is not a CBF on 𝒮. We define the following sequence of functions: b_0(x) = h(x), b_1(x) = inf_u∈𝒰[L_fb_0(x) + L_gb_0(x)u + α_0(b_0(x))], b_2(x) = inf_u∈𝒰[L_fb_1(x) + L_gb_1(x)u + α_1(b_1(x))], ⋮, b_N(x) = inf_u∈𝒰[L_fb_N-1(x) + L_gb_N-1(x)u + α_N-1(b_N-1(x))], where each α_i is some user-specified class-𝒦 function and N is a positive integer. We assume the functions f, g, h are sufficiently smooth such that b_N and its derivative are defined. We also define the sets 𝒞_0 = {x∈𝒳 : b_0(x)≥0} = 𝒮, 𝒞_1 = {x∈𝒳 : b_1(x)≥0}, …, 𝒞_N = {x∈𝒳 : b_N(x)≥0}, and their intersection 𝒞^* = 𝒞_0∩𝒞_1∩...∩𝒞_N. We assume the set 𝒞^* is closed, non-empty, and has no isolated points. The sets are visualized in Figure <ref>. For the above construction, if there exists a class-𝒦 function α_N such that sup_u∈𝒰[L_fb_N(x) + L_gb_N(x)u + α_N(b_N(x))] ≥ 0, ∀x∈𝒞^*, then b_N is an Input-Constrained Control Barrier Function (ICCBF). Note that this does not require b_N to be a CBF on 𝒞_N. The definition only requires condition (<ref>) to hold for all x∈𝒞^*, which is a subset of 𝒞_N. The main result of <cit.> is stated as: given the input-constrained dynamical system (<ref>), if b_N, defined by (<ref>-<ref>), is an ICCBF, then any Lipschitz continuous controller u:𝒞^*→𝒰 such that u(x)∈K_ICCBF(x), where K_ICCBF(x) = {u∈𝒰 : L_fb_N(x) + L_gb_N(x)u ≥ -α_N(b_N(x))}, renders the set 𝒞^*⊆𝒮 forward invariant. Time-invariant Higher-Order CBFs, as in <cit.>, are a special case of ICCBFs. For instance, in systems of relative degree 2, L_gh(x) = 0 for all x∈𝒮. In this case, in the construction of ICCBFs we have b_1(x) = inf_u∈𝒰[L_fh(x) + L_gh(x)u + α_0(h(x))] = inf_u∈𝒰[L_fh(x) + α_0(h(x))] = L_fh(x) + α_0(h(x)), which is exactly the function defined in <cit.>. This repeats for any relative degree greater than 2, and thus, for a system with relative degree ρ, the first ρ expressions of ICCBFs are identical to those of HOCBFs. Moreover, ICCBFs can handle systems with non-uniform relative degree by choosing N greater than or equal to the largest relative degree of the system in 𝒮.

§.§.§ Example: Adaptive Cruise Control

As a demonstration, we apply ICCBFs to the Adaptive Cruise Control (ACC) problem of <cit.>. Consider a point-mass model of a vehicle moving in a straight line. The vehicle is following a vehicle a distance d in front, moving at a known constant speed v_0. The objective is to design a controller to accelerate to the speed limit but prevent the vehicles from colliding. The dynamics model and safety constraints are as in <cit.>, with 𝒮 = {x∈𝒳 : h(x) = x_1 - 1.8x_2 ≥ 0}. In addition, we impose the input constraints 𝒰 = {u : |u|≤0.25}, representing a maximum acceleration or deceleration of 0.25 g. One can verify that 𝒮 cannot be rendered forward invariant under the input constraints, and therefore we use the ICCBF construction technique to design an inner safe set.
We (arbitrarily) choose N = 2 and the class-𝒦 functions α_0(h) = 4h, α_1(h) = 7√(h), α_2(h) = 2h to define the functions b_1, b_2 and the sets 𝒞_1, 𝒞_2. To (approximately) verify that b_2 is an ICCBF, we used a nonlinear optimization to determine that (<ref>) is satisfied (see <cit.> for details). The sets 𝒞_0, 𝒞_1, 𝒞_2, 𝒞^* are visualized in Figure <ref>. Figure <ref>(e-g) compares the CLF-CBF-QP controller of <cit.> (blue) to our proposed controller (green), π(x) = u ∈ argmin (1/2)(u-π_d(x))^2 s.t. L_fb_2(x) + L_gb_2(x)u ≥ -2b_2(x), u∈𝒰, where π_d(x) is the desired acceleration, computed using the Control Lyapunov Function V(x) = (x_2 - v_max)^2, where v_max = 24 is the speed limit. The (standard) CLF-CBF-QP reaches the input constraint at t = 5.9 seconds, and thus the input constraints force the system to leave the safe set. In contrast, the ICCBF-QP remains feasible and safe for the entire duration by applying the brakes early, at t = 2.9 seconds. Thus, by explicitly accounting for input constraints, the ICCBF-QP controller keeps the input-constrained system safe. In the interest of space, the reader is referred to <cit.> for additional details and examples on how to construct ICCBFs for relatively simple systems. How to systematically construct ICCBFs is part of our ongoing work.
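The construction for this example is easy to reproduce numerically, as sketched below. The drift uses a simplified point-mass model in which the resistance force is neglected and the lead-vehicle speed V0 is an assumed value, so the numbers are illustrative rather than those of <cit.>; gradients are taken by finite differences to avoid hand-coding the Lie derivatives.

\begin{verbatim}
import numpy as np

G_ACC = 9.81          # m/s^2; u is expressed in units of g
U_MAX = 0.25          # |u| <= 0.25, from the input constraint set
V0 = 13.89            # assumed lead-vehicle speed (illustrative)

def f(x):
    # x = [x1, x2] = [gap to lead vehicle, ego speed]; resistance neglected
    return np.array([V0 - x[1], 0.0])

g_vec = np.array([0.0, G_ACC])

def grad(fun, x, eps=1e-5):
    e = np.eye(2) * eps
    return np.array([(fun(x + e[i]) - fun(x - e[i])) / (2 * eps)
                     for i in range(2)])

def inf_step(fun, alpha, x):
    # b_next(x) = inf_{|u| <= U_MAX} [L_f b + L_g b u + alpha(b)]
    dfun = grad(fun, x)
    Lf, Lg = dfun @ f(x), dfun @ g_vec
    return Lf - abs(Lg) * U_MAX + alpha(fun(x))

b0 = lambda x: x[0] - 1.8 * x[1]                   # h(x) = x1 - 1.8 x2
b1 = lambda x: inf_step(b0, lambda s: 4.0 * s, x)  # alpha_0(h) = 4h
b2 = lambda x: inf_step(b1, lambda s: 7.0 * np.sqrt(max(s, 0.0)), x)
# alpha_1(h) = 7 sqrt(h); the guard is valid since b2 is used where b1 >= 0
\end{verbatim}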
§ ADAPTATION: HOW TO PREVENT LOSS OF CONTROLLABILITY, AND HOW TO REDUCE CONSERVATISM OF THE SYSTEM RESPONSE?

In this section, we present some results that involve online adaptation of CBFs towards two main challenges: the first is assuring that a candidate CBF remains a valid CBF throughout the system trajectories, and the second is reducing the conservatism of the system response by allowing trajectories to approach closer to the boundary of the safe set.

§.§ Online Verification via Consolidated CBFs

Verifying that a candidate CBF is valid, i.e., proving that the CBF condition is satisfiable via the available control authority in perpetuity, is a challenging and rather underdeveloped problem. For isolated or single-CBF constraints, verifying or finding valid CBFs is a fairly well-studied task, under either unlimited <cit.> or bounded control authority <cit.>, or by considering only one constraint at a time, either by assumption <cit.> or by construction in a non-smooth manner <cit.>. However, these methods do not extend to multiple constraints. Some recent works synthesize and/or verify a CBF using sum-of-squares optimization <cit.>, supervised machine learning <cit.>, and Hamilton-Jacobi-Bellman reachability analysis <cit.>, but are limited to offline tools. In our recent work <cit.>, we consider a multi-agent system, each of whose A constituent agents is modeled by the following class of nonlinear, control-affine dynamical systems: ẋ_i = f_i(x_i) + g_i(x_i)u_i, where x_i∈ℝ^n and u_i∈𝒰_i⊆ℝ^m are the state and control input vectors for the i-th agent, with 𝒰_i the input constraint set, and where f_i:ℝ^n→ℝ^n and g_i:ℝ^n→ℝ^n×m are known, locally Lipschitz, and not necessarily homogeneous, ∀i∈𝒜 = [A]. The concatenated state vector is x = [x_1^T, …, x_A^T]^T∈ℝ^N, the concatenated control input vector is u = [u_1^T, …, u_A^T]^T∈𝒰⊆ℝ^M, and as such the full system dynamics are ẋ = F(x(t)) + G(x(t))u(x(t)), x(0) = x_0, where F = [f_1^T, …, f_A^T]^T:ℝ^N→ℝ^N and G = diag([g_1, …, g_A]):ℝ^N→ℝ^N×M. Consider also a collection of c>1 state constraints, each described by a function h_s∈𝒞^1:ℝ^N→ℝ for s∈[c]. Each h_s is a candidate CBF (hereafter referred to as a constituent constraint function) and defines a set 𝒮_s = {x∈ℝ^N | h_s(x)≥0} that obeys the same structure as (<ref>). The following assumption is required, since otherwise it is impossible to satisfy all constraints jointly: the intersection of the constraint sets is non-empty, i.e., 𝒮 = ⋂_s=1^c 𝒮_s ≠ ∅.

§.§.§ Definition of Consolidated CBFs

Define a positive gain vector k = [k_1, …, k_c]^T∈ℝ^c_+. A consolidated CBF (C-CBF) candidate H:ℝ^N×ℝ_+^c→ℝ takes the following form: H(x,k) = 1 - ∑_s=1^c ϕ(h_s(x), k_s), where ϕ∈𝒞^1:ℝ_+×ℝ_+→ℝ_+ belongs to class ℒℒ and satisfies[For example, the decaying exponential function, i.e., ϕ(h_s,k_s) = e^-h_sk_s, satisfies the requirements over the domain ℝ_+×ℝ_+.] ϕ(h_s,0) = ϕ(0,k_s) = ϕ(0,0) = 1. It follows that the set 𝒞(k) = {x∈ℝ^N | H(x,k)≥0} is a subset of 𝒮 (i.e., 𝒞(k)⊂𝒮), where the level of closeness of 𝒞(k) to 𝒮 depends on the choice of the gains k. This may be confirmed by observing that if any h_s(x) = 0, then H(x) ≤ 1 - 1 - ∑_j=1,j≠s^c ϕ(h_j(x),k_j) < 0, and thus for H(x)≥0 it must hold that h_s(x)>0 for all s∈[c]. Now, if H is a valid C-CBF over the set 𝒞(k), then 𝒞(k) is forward invariant, and thus the trajectories of (<ref>) remain safe with respect to each constituent safe set 𝒮_s, ∀s∈[c]. For a static gain vector (i.e., k̇ = 0_c×1), the function H is a CBF on the set 𝒮⊃𝒞(k) if there exists α_H∈𝒦_∞ such that the following condition holds for all x∈𝒮: L_FH(x,k) + L_GH(x,k)u(x) ≥ -α_H(H(x,k)), where from (<ref>) it follows that L_FH(x) = -∑_s=1^c (∂ϕ/∂h_s)L_Fh_s(x) and L_GH(x) = -∑_s=1^c (∂ϕ/∂h_s)L_Gh_s(x). Again taking ϕ(h_s,k_s) = e^-h_sk_s as an example, we obtain ∂ϕ/∂h_s = -k_se^-h_sk_s, in which case it is evident that the role of the gain vector k is to weight the constituent constraint functions h_s and their derivative terms L_Fh_s and L_Gh_s in the CBF condition (<ref>). In this case, a higher value of k_s indicates a weaker weight in the CBF dynamics, as the exponential decay overpowers the linear growth. Due to the combinatorial nature of these gains, for an arbitrary k there may exist some x∈𝒞(k) such that L_GH(x) = 0_1×M, which may lead to the state exiting 𝒞(k) (and potentially 𝒮 as a result). Using online adaptation of k, however, it may be possible to achieve L_GH(x(t)) ≠ 0_1×M for all t≥0, which motivates the following problem: given a C-CBF candidate H:ℝ^N×ℝ_+^c→ℝ defined by (<ref>), design an adaptation law k̇ = κ(x,k) such that L_GH(x(t)) ≠ 0_1×M, ∀t≥0.

§.§.§ C-CBF Weight Adaptation Law

Let the intersection of the constraint sets be denoted 𝒮, and assume that the matrix of controlled constituent function dynamics L_G∈ℝ^c×M is not all zero, i.e., L_G(x) ≜ [L_Gh_1(x); ⋮; L_Gh_c(x)] ≠ 0_c×M, ∀x∈𝒮. This requires non-zero sensitivity of at least one constraint function h_s to the control input u. It is a mild condition, and it is easily satisfiable when at least one h_s is of relative degree one with respect to the system (<ref>). In what follows, it is shown that the ensuing QP-based adaptation law renders a C-CBF valid.
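With the exponential choice ϕ(h_s,k_s) = e^-h_sk_s, the candidate H and the Lie-derivative terms above can be evaluated as in the following sketch; h_list, grad_h_list, F, and G are user-supplied placeholders for the constituent functions and the concatenated dynamics.

\begin{verbatim}
import numpy as np

def ccbf_terms(x, k, h_list, grad_h_list, F, G):
    """Evaluate H(x,k) = 1 - sum_s exp(-h_s(x) k_s) and its Lie
    derivatives L_F H, L_G H for the exponential phi (sketch)."""
    h = np.array([hs(x) for hs in h_list])
    dphi_dh = -k * np.exp(-h * k)      # dphi/dh_s = -k_s exp(-h_s k_s)
    H = 1.0 - np.sum(np.exp(-h * k))
    LFH = -sum(dphi_dh[s] * (grad_h_list[s](x) @ F(x))
               for s in range(len(h_list)))
    LGH = -sum(dphi_dh[s] * (grad_h_list[s](x) @ G(x))
               for s in range(len(h_list)))
    return H, LFH, LGH   # CBF condition: LFH + LGH @ u >= -alpha_H(H)
\end{verbatim}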
(C-CBF-QP) κ(x,k) = argmin_μ∈ℝ^c (1/2)(μ - μ_0(x))^TP(μ - μ_0(x)) s.t. μ_s + α_k(k_s - k_s,min) ≥ 0, ∀s∈[c], p^T(x)Q(x)ṗ + (1/2)p^T(x)Q̇(x)p(x) + α_p(h_p(x)) ≥ 0, where P∈ℝ^c×c is a positive-definite gain matrix, α_k, α_p∈𝒦_∞, μ_0∈ℝ^c is the desired solution, k_min = [k_1,min, …, k_c,min]^T is the vector of minimum allowable values k_s,min>0, and p(x) ≜ [∂ϕ(x)/∂h_1 ⋯ ∂ϕ(x)/∂h_c]^T, Q(x) ≜ I - 2NN^T + NN^TNN^T, with h_p(x) = (1/2)p^T(x)Q(x)p(x) - ε, ε>0, and N = N(x) ≜ [n_1(x) ⋯ n_r(x)], such that {n_1(x), …, n_r(x)} constitutes a basis of the null space of L_G^T(x), i.e., 𝒩(L_G^T(x)) = span{n_1(x), …, n_r(x)}, where L_G(x) is given by (<ref>). Suppose that there exist c>1 constraint functions h_s:ℝ^N→ℝ defining the sets 𝒮_s = {x∈ℝ^N | h_s(x)≥0}, ∀s∈[c], that Assumptions <ref> and <ref> hold, and that 𝒰 = ℝ^M. If k(0) is such that L_GH(x(0)) ≠ 0_1×M, then under k̇ = κ(x,k) the controlled C-CBF dynamics are non-vanishing provided that (<ref>) is feasible, i.e., L_GH(x(t)) ≠ 0_1×M, ∀t≥0. It follows that if the premises of Theorem <ref> hold, then the function H defined by (<ref>) is a CBF for 𝒞(k(t)) = {x∈ℝ^N | H(x,k)≥0}, ∀t≥0.

§.§.§ Simulation Results: Multi-Robot Coordination

Consider 3 non-communicative but responsive robots (i∈𝒜_nc∖𝒜_ncnr) in a warehouse environment seeking to traverse a narrow corridor intersected by a passageway occupied by 6 non-responsive agents (j∈𝒜_ncnr). The non-responsive agents may be, e.g., walking humans or some other dynamic obstacles. Each robot is modeled according to a kinematic bicycle model described in <cit.>, which we omit here in the interest of space; the reader can refer to the complete case study in <cit.>. The safety constraints for each robot are to 1) obey the velocity restriction, 2) remain inside the corridor area, and 3) avoid collisions with all other robots. As such, each robot has three constituent constraint sets (defined explicitly in <cit.>), the intersection of which constitutes the safe set for each robot. The robots i∈𝒜_nc∖𝒜_ncnr are controlled using a C-CBF-based decentralized controller with constituent functions h_c, h_s, h_r, an LQR-based nominal control input (see <cit.>), and initial gains k(0) = 1_10×1. The non-responsive agents used a similar LQR controller to move through the passageway in pairs of two, with the first two pairs passing through the intersection without stopping and the last pair stopping at the intersection before proceeding. As shown in Figures <ref> and <ref>, the non-communicative robots traverse both the narrow corridor and the busy intersection to reach their goal locations safely. These results demonstrate that the C-CBF-based adaptive controllers maintained safety and control viability at all times amongst 10 state constraints.
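Regarding the implementation of the constraint involving h_p above, the margin can be computed from a null-space basis of L_G^T, as sketched below; the form Q = (I - NN^T)(I - NN^T)^T used here is our reading of the definition above.

\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def controllability_margin(LG, p, eps):
    """Evaluate h_p(x) = 0.5 * p^T Q p - eps, the margin used by the
    C-CBF adaptation law to keep L_G H away from zero (sketch).

    LG: c x M matrix [L_G h_1; ...; L_G h_c]; p: vector of
    sensitivities dphi/dh_s. Both are assumed precomputed.
    """
    N = null_space(LG.T)                    # basis of N(L_G^T), shape (c, r)
    P_perp = np.eye(LG.shape[0]) - N @ N.T  # projector onto complement of span(N)
    Q = P_perp @ P_perp.T                   # Q = I - 2NN^T + NN^T NN^T
    return 0.5 * p @ Q @ p - eps
\end{verbatim}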
§.§ Parameter Adaptation with Rate-Tunable CBFs

In a recent parallel thread of work, instead of designing adaptive laws for the control coefficient L_gh_i(t,x), we consider the adaptation of the parameters introduced through the class-𝒦 function in the CBF condition. In <cit.> we introduce the new notion of a Rate-Tunable Control Barrier Function (RT-CBF), which allows consideration of parametric class-𝒦 functions and adaptation of their parameters online, so that the response of the controller becomes less or more conservative without jeopardizing safety. It is also noteworthy that this adaptation facilitates the consideration and satisfaction of multiple time-varying barrier constraints, by making them easier to tune for performance, especially when they do not represent similar physical quantities (e.g., when imposing constraints on the rotational dynamics and constraints on the translational dynamics of a quadrotor). Designing the parameter dynamics is a non-trivial task, especially in the presence of multiple constraints. We have studied pointwise sufficient conditions on the rate of change of the parameters for enforcing feasibility. Although the pointwise design is suboptimal, it is shown empirically to improve upon standard CBF-QP controllers. It also allows the incorporation of user-designed rules (e.g., heuristics) for updating the class-𝒦 function, projected onto a set of feasible update rules. As a case study, we design RT-CBFs for the decentralized control of multi-robot systems in <cit.>. Specifically, we design the parameter dynamics based on a trust factor, which in turn is defined based on the instantaneous ease of satisfaction of the CBF constraints, and we illustrate how this can be applied to robots of heterogeneous dynamics.

§.§.§ Definition of Rate-Tunable CBFs

For ease of understanding, we illustrate our theory with examples that only consider linear class-𝒦 functions of the following form: α_i^k(z) = ν_i^kz, ν_i^k∈ℝ^+. Since we allow the parameters ν_i^k to vary with time, the derived barrier functions in (<ref>) for, e.g., a second-order barrier function are given as follows: ψ_i^0 = h_i, ψ_i^1 = ψ̇_i^0 + ν_i^1ψ_i^0, ψ_i^2 = ψ̈_i^0 + ν̇_i^1ψ_i^0 + ν_i^1ψ̇_i^0 + ν_i^2(ψ̇_i^0 + ν_i^1ψ_i^0), where ψ_i^2 = ψ̇_i^1 + ν_i^2ψ_i^1 ≥ 0 is the CBF condition (<ref>) that is used to design the control input. We denote the parameters and their derivatives contributing to the CBF condition (<ref>) as Θ_α_i∈ℝ^n_α, and the objective is to design Θ̇_α_i. For example, for (<ref>), Θ_α_i = [ν_i^1, ν̇_i^1, ν_i^2]^T and Θ̇_α_i = [ν̇_i^1; ν̈_i^1; ν̇_i^2] = [0 1 0; 0 0 0; 0 0 0]Θ_α_i + [0 0; 1 0; 0 1][ν̈_i^1; ν̇_i^2]. Note that the derivatives to be designed, namely ν̈_i^1, ν̇_i^2, do not appear in the CBF condition (<ref>) that is imposed in the QP to design the control input. This allows decoupling the design of the control input from that of the parameter dynamics. Consider the system dynamics in (<ref>) augmented with the state Θ_α∈ℝ^n_α that obeys the dynamics [ẋ; Θ̇_α_i] = [f(x) + g(x)u; f_α_i(x,Θ_α_i)], where f_α_i:𝒳×𝒜_i→𝒜_i is a locally Lipschitz continuous function w.r.t. (x,Θ_α_i), and 𝒜_i⊂ℝ^n_α_i is a compact set, i∈{1,…,N}. Assume that Θ_α_i(t)∈𝒜_i, ∀t≥0. Let the set of allowable states 𝒮_i(t) at time t be defined as the 0-superlevel set of a function h_i:ℝ^+×𝒳→ℝ as in (<ref>). Suppose h_i has relative degree r_i w.r.t. the control input u, and define the functions ψ_i^k as: ψ_i^0(t,x) = h_i(t,x), ψ_i^k(t,x) = ψ̇_i^k-1(t,x,Θ_α_i^k) + α_i^k(ψ_i^k-1(t,x,Θ_α_i^k)), k∈{1,2,…,r_i-1}. (Rate-Tunable CBF) A (single) constraint function h_i:ℝ^+×𝒳→ℝ is a Rate-Tunable Control Barrier Function (RT-CBF) for the set 𝒮_i(t) under the augmented system (<ref>) if for every initial state x(0)∈𝒮_i(0) there exists Θ_α_i(0)∈𝒜_i such that, ∀t≥0, sup_u∈𝒰[ψ̇_i^r_i-1(t,x,u,Θ_α_i) + α_i^r_i(ψ_i^r_i-1(t,x,Θ_α_i))] ≥ 0. Note that for Θ̇_α_i≡0 and Θ_α_i(0)≡Θ_α_i (a constant independent of x(0)), we recover the definition of the classical CBF.
In that regard, the RT-CBF is a weaker notion than the classical CBF, which allows for tuning the response of the system. Note that (<ref>) is required to be satisfied for all t≥0, and not for all x∈𝒳 as required in the vanilla CBF (<ref>) and HOCBF (<ref>) conditions. This difference is essential, as we allow the initial parameter value Θ_α_i(0) to depend on the initial state x(0). While several works employ heuristics to tune the parameters of the CBF condition (<ref>) so that a solution to the CBF-QP (<ref>) exists for all t>0 <cit.>, most of these are equivalent to treating the parameter ν of a linear function α(h) = νh as an optimization variable. However, a formal analysis encompassing all these heuristics and other possible ways to adapt the function has been lacking so far, and RT-CBFs aim to bridge this gap in theory and application. In <cit.> we show that, under mild assumptions (existence and uniqueness of the system trajectories), the existence of an RT-CBF is a necessary and sufficient condition for safety. We also show that several existing parameter adaptation schemes <cit.> fall under the framework of the proposed RT-CBFs. We then state the following theorem, which illustrates how the tuning of the parameter Θ can be used to shape the response of the CBF-QP controller. <cit.> Consider the system (<ref>), a first-order candidate barrier function h, a function α(h,x):ℝ×ℝ^n→ℝ, and the following CBF-QP controller with unbounded control input: u_QP = argmin_u∈ℝ^m ‖u - u_r(t,x)‖^2 s.t. ḣ(t,x,u) ≥ -α(x,h(t,x)), where u_r:ℝ^+×ℝ^n→ℝ^m is the reference (nominal) control input. Let u_d(t,x):ℝ^+×ℝ^n→𝒰 be any desired safe response of the system, with u_r ≠ u_d w.l.o.g. Then the following choice of the function α(·,·) minimizes the norm ‖u_d - u_QP‖: α(x,h) = √2(L_gh/‖L_gh‖)(u_r - u_d) - L_fh - L_ghu_r, if ‖L_gh‖>0 and L_ghu_d > L_ghu_r; α(x,h) = -L_fh - L_ghu_r, if ‖L_gh‖ = 0; and α(x,h) = √2(L_gh/‖L_gh‖)(u_r - u') - L_fh - L_ghu_r, if ‖L_gh‖>0 and L_ghu_d < L_ghu_r, where u' = u_d + (1/√2)L_gh^TL_gh(u_r - u_d). The result of Theorem <ref>, albeit simple, gives us some important insights. First, for different desired responses u_d (such as conservative or aggressive) at state x and time t, the function α can be used to steer u_QP close to u_d. Second, to achieve the aforementioned steering, the function α cannot be just a function of the barrier function h, as (<ref>) depends not only on h but also on x. In our RT-CBF framework, the parameter Θ is a function of t, x, h, and can thus fulfill this objective at the points where α in (<ref>) is differentiable. We consider the following RT-CBF-QP controller with parametric class-𝒦 functions: (RT-CBF-QP) u = argmin_u∈𝒰 ‖u - u_r‖^2 + Mδ^2 s.t. V̇(t,x,u) ≤ -kV(t,x) + δ, ψ̇_i^r_i-1(t,x,u,Θ_α_i) + α_i^r_i(ψ_i^r_i-1(t,x,Θ_α_i)) ≥ 0, i∈{1,2,…,N}. While we have established the necessity of RT-CBFs, finding a valid update law Θ̇ for the parameters, much like finding a valid CBF in the sense of (<ref>), is non-trivial. In <cit.> we present some suboptimal and heuristic methods for ensuring that (<ref>) admits a solution for all time. In the interest of space, we omit the presentation of the algorithm from the current paper and directly present some illustrative results.
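For the second-order case with linear class-𝒦 functions, the quantities entering the RT-CBF-QP constraint follow directly from the ψ-chain above; a minimal sketch, with the adapted parameters passed in explicitly, is:

\begin{verbatim}
def rt_cbf_chain(h, dh, ddh, nu1, dnu1, nu2):
    """Second-order RT-CBF chain with alpha^k(z) = nu_k z (sketch).

    h, dh, ddh: h_i and its first/second time derivatives (ddh is
    affine in u, so pass the u-dependent expression or its value);
    nu1, dnu1, nu2: the adapted parameters Theta and d(nu1)/dt.
    """
    psi0 = h
    psi1 = dh + nu1 * psi0                 # psi^1 = d(psi^0)/dt + nu1 psi^0
    psi2 = (ddh + dnu1 * psi0 + nu1 * dh   # psi^2 = d(psi^1)/dt + nu2 psi^1
            + nu2 * psi1)
    return psi0, psi1, psi2                # the QP imposes psi2 >= 0
\end{verbatim}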
§.§.§ Simulation Results: Adaptive Cruise Control

We simulate the Adaptive Cruise Control (ACC) problem with a decelerating leader. Let the ego agent move with velocity v and a leader agent move at velocity v_L, with distance D between them. The dynamics are [v̇; v̇_L; Ḋ] = [-(1/M)F_r(v) + (1/M)u; a_L; v_L - v], where M denotes the mass of the ego agent, F_r(v) is the resistance force, and a_L is the acceleration of the leader. The safety objective of the ego agent is to maintain a minimum distance D̅ from the leader. The desired velocity is given by v_d. Additionally, the velocity and the control input are constrained by v_min ≤ v ≤ v_max and -cMg ≤ u ≤ cMg, where c is the acceleration coefficient and g is the acceleration due to gravity. To ensure safety, we formulate three barrier functions, h_1 = D - D̅, h_2 = v - v_min, h_3 = v_max - v, where h_1 is a second-order barrier function and h_2, h_3 are first-order barrier functions. The CLF is chosen as V = (v - v_d)^2. We further choose linear class-𝒦 functions and apply the algorithm in <cit.> to ensure safety.[The following parameters are used for the simulation: M = 1650 kg, f_0 = 0.1, f_1 = 5, f_2 = 0.25, g = 9.81 m/s^2, v_min = 0, v_max = 0, v_d = 24 m/s, c = 0.4, D̅ = 10.] The CBF-QP for ACC is designed in the same way as in <cit.>. We compare the CBF-QP with the RT-CBF-QP in Fig. <ref> (for more comparisons, please refer to <cit.>). The simulation is run with the same initial parameters in both cases, for 50 seconds. The CBF-QP (with fixed parameters) becomes infeasible before the simulation finishes, as shown in Fig. <ref>. The RT-CBF-QP (with adapted parameters), on the other hand, maintains feasibility and safety at all times. The variations of ν_1^2 and u with time are shown in Figs. <ref> and <ref>, respectively. Note that ν_1^2 starts increasing as the control input bound is approached. The other parameters do not change in this example, and their variation is thus not shown. This example illustrates that, for a chosen barrier function h, the feasibility of CBF-QP controllers is highly dependent on the parameters, but online adaptation can help circumvent this issue.

§ PREDICTION: HOW TO REDUCE MYOPIC BEHAVIOR?

We review some results that aim to mitigate the “myopic” nature of CBF-based controllers, i.e., the fact that the control input is optimal only pointwise and does not consider the system trajectories over a finite horizon ahead. The notion of a “future-focused CBF” is introduced in <cit.> as a solution to the unsignaled intersection-crossing problem for mixed traffic (communicating and non-communicating vehicles), so that the vehicles avoid collisions that are predicted to occur in the future. In the interest of space, we do not present this work in detail and refer the interested reader to <cit.>. The following section details a predictive approach related to <cit.> that is applicable to more general systems.

§.§ Bird's Eye CBFs

In this section, we consider the problems of A) designing CBFs for systems for which it is difficult to find a CBF using existing methods, and B) designing CBF-based controllers that act more proactively to maintain safety. To this end, we propose a special form of CBF that we call a “bird's eye CBF” (BECBF), introduced in <cit.>. We previously called this a predictive CBF, but we note that in the broader CBF literature the term “predictive CBF” is usually synonymous with “backup CBF”, whereas the following work is distinct from any backup-type formulation, e.g., <cit.>. This form of CBF was specifically developed with the intent of controlling satellites in Low Earth Orbit. In this environment, a small control input applied early can have a large effect on the system trajectory over time.
However, if the control input is not applied until two satellites are near collision, then the satellites must apply a very large control input to alter their trajectories in time to avoid collision. This sort of “last-second” behavior is undesirable and wastes fuel. Moreover, in this environment, obstacles may be moving extremely fast, so it is important to incorporate the future positions of the obstacles into the CBF formulation. Thus, we sought a control law that could maintain safety proactively using predictions about the future. The resultant CBF, while inspired by satellite orbits, is extremely general, and has proven especially useful in collision-avoidance settings, such as the cars at an intersection also simulated in this section (see also <cit.> for much more extensive simulations for this particular application). We call this tool a bird's eye CBF by analogy to having a bird's eye view of the environment and thus being able to 1) see far-away obstacles entering the environment, and 2) choose avoidance maneuvers that take into account the complete size/shape of the obstacle instead of just the location of its boundary. At this point, we emphasize that this subsection is still ongoing work. While the BECBF design herein works well for the following simulations, we note that achieving the regularity conditions specified below is a nontrivial challenge that we are still studying. In this section, we consider the model ẋ = f(t,x) + g(t,x)u, with time t∈𝒯⊆ℝ, state x∈𝒳⊆ℝ^n, and control u∈𝒰 = ℝ^m. Suppose we are given a constraint function h:𝒯×𝒳→ℝ and a safe set 𝒮(t) = {x∈𝒳 | h(t,x)≤0}. Similar to Section <ref>, our goal is to find a CBF H:𝒯×𝒳→ℝ and a CBF set 𝒮_H = {x∈𝒳 | H(t,x)≤0}. The CBF that we design will be based on finite-time predictions, and thus might be non-differentiable when the “time-of-interest” in this horizon switches from an endpoint of the horizon to its interior. Thus, we relax the definition of a CBF to absolutely continuous functions. An absolutely continuous function φ:𝒯×𝒳→ℝ, denoted φ(t,x), is a control barrier function (CBF) if there exists α∈𝒦 (not necessarily locally Lipschitz continuous) such that inf_u∈ℝ^m[∂_tφ(t,x) + L_f(t,x)+g(t,x)uφ(t,x)] ≤ α(-φ(t,x)), where the bracketed quantity equals d/dt[φ(t,x)], for almost every x∈𝒮_φ(t), t∈𝒯, where 𝒮_φ(t) ≜ {x∈𝒳 | φ(t,x)≤0}.

§.§.§ CBF Construction

To construct the BECBF, suppose that we are given a nominal control input μ:𝒯×𝒳→𝒰. This can be any control law, and does not need to be “safety-encouraging” as is required for the backup CBF. We assign the hypothetical flow of the system according to this control input to the function p:𝒯×𝒯×𝒳→𝒳 satisfying ∂p(τ,t,x)/∂τ = f(τ,p(τ,t,x)) + g(τ,p(τ,t,x))μ(τ,p(τ,t,x)), p(t,t,x) = x. We call p the path function; p encodes the predicted future trajectories of the system from any initial state. Assume that p is continuously differentiable. The idea of the BECBF is to use the path function to analyze whether the nominal control law μ is safe in the future, and if not, then to use the sensitivities (i.e., derivatives) of p to choose a control input u ≠ μ that is safe, i.e., a control input that leads to forward invariance of some subset 𝒮_H ⊂ 𝒮. To perform this analysis, for some (t,x), consider the curve ϕ(τ) = h(τ,p(τ,t,x)) containing the future values of h along the path function for τ∈[t,t+T], for some fixed T∈ℝ_>0. We are interested in two “points-of-interest” along ϕ: 1) the maximum value of ϕ over [t,t+T], and 2) the time at which ϕ first exceeds zero (i.e., the first root).
For simplicity, assume that ϕ has a unique maximizer and no more than two roots, as shown in Fig. <ref>. Then define the quantities K(t,x) = max_τ∈[t,t+T] h(τ,p(τ,t,x)), M(t,x) = argmax_τ∈[t,t+T] h(τ,p(τ,t,x)), and R(t,x) = min{τ∈[t,t+T] | h(τ,p(τ,t,x)) = 0} if K(t,x) ≥ 0, with R(t,x) = M(t,x) if K(t,x) < 0. That is, K is the maximum of ϕ over [t,t+T], and M is the time at which K occurs (which was assumed to be unique). If K≥0, then R is the first root of the curve ϕ; otherwise, R is equal to M. We note that the original paper <cit.> considered a wider set of possible curves ϕ, including curves with several maximizers and/or maximizer intervals. However, we do not cover those cases here, 1) to encourage simplicity, and 2) because these more general trajectories are likely to conflict with the assumed regularity conditions of Theorem <ref>. Using (<ref>)-(<ref>), we then define the BECBF as the sum of the maximal (i.e., worst-point) safety metric along the predicted trajectory and a relaxation function β of the time until the safety metric first becomes positive (i.e., unsafe), if ever. Let β:ℝ_≥0→ℝ_≥0 be a class-𝒦 function, and then choose H(t,x) = K(t,x) - β(R(t,x) - t). Intuitively, (<ref>) says that a state (t,x) belongs to 𝒮_H if the time at which the hypothetical trajectory p first becomes unsafe is sufficiently far in the future, as measured by β, that we can adjust the value of K before the system reaches any unsafe states. As such, the selection of β will substantially impact the size of 𝒮_H and the amount of control effort utilized to correct the trajectory, as well as how early this control effort is applied. The main results of <cit.> are then as follows. For some constraint function h:𝒯×𝒳→ℝ, let 𝒮 be as in (<ref>) and H as in (<ref>) with 𝒮_H ≜ {x∈𝒳 | H(t,x)≤0}. Then 𝒮_H(t)⊆𝒮(t), ∀t∈𝒯. Let the derivative β':ℝ_≥0→ℝ_≥0 of β in (<ref>) be strictly positive on (0,T). Assume that H in (<ref>) is absolutely continuous, and further assume that M in (<ref>) is continuously differentiable whenever M(t,x)∈(t,t+T). Let 𝒰 = ℝ^m. Suppose that there exists γ such that d/dτ[ϕ(τ)] ≡ d/dτ[h(τ,p(τ;t,x))] ≤ γ for all τ∈𝒯, t∈𝒯, x∈𝒳. Suppose also that (∂h(η,p(η;t,x))/∂x)(∂p(η;t,x)/∂x)g(t,x) ≠ 0 for all η∈(t,t+T)∖{M(t,x)} and for all t∈𝒯, x∈𝒳, and that (∂h(M,p(M;t,x))/∂x)(∂h(R,p(R;t,x))/∂x) ≥ 0 whenever R(t,x) ≠ M(t,x) and for all t∈𝒯, x∈𝒳. Then H in (<ref>) is a CBF as in Definition <ref>. That is, under mild assumptions, the function H in (<ref>) is indeed a CBF, and its zero-sublevel set 𝒮_H is a subset of the constraint set 𝒮. In brief, the assumptions of Theorem <ref> are 1) the slope of ϕ is bounded, 2) the trajectories encoded in p have non-zero sensitivity to the initial state (i.e., the system is controllable), and 3) the sensitivities of p satisfy a consistency condition so that decreasing K does not cause R to occur sooner. See <cit.> for further discussion. We also assume that 𝒰 is equivalent to ℝ^m, though in practice, the function β can be tuned to achieve input-constraint satisfaction. This is a powerful theorem because it implies that the form (<ref>) can be applied to any system and any constraint function of any relative degree. It is also advantageous in practice, as will be clear in the simulations. Next, we provide two lemmas on how to compute the derivatives of the BECBF (<ref>). In these lemmas, the notation ∂p(a,b,c)/∂λ_x means ∂p(τ,t,x)/∂x evaluated at (τ,t,x) = (a,b,c). For some (t,x)∈𝒯×𝒳, suppose that h in (<ref>) is continuously differentiable in a neighborhood of M(t,x) and p^* = p(M(t,x),t,x).
Assume that M in (<ref>) is Lipschitz continuous. Let ϕ' denote the derivative d/dτ[ϕ(τ)] of ϕ(τ) = h(τ,p(τ,t,x)). Then dK(t,x)/dt = (∂h(M,p^*)/∂λ_x)(∂p(M,t,x)/∂λ_x)g(t,x)(u - μ(t,x)) + {∂h(M,p^*)/∂λ_t + (∂h(M,p^*)/∂λ_x)ϕ'(M) if M∈{t,t+T}; 0 if M∈(t,t+T)}. For some (t,x)∈𝒯×𝒳, suppose that R(t,x) ≠ M(t,x) and that h is continuously differentiable in a neighborhood of R(t,x) and p^∘ = p(R(t,x),t,x). Let ϕ' be as above. Then dR(t,x)/dt = (1/(-ϕ'(R)))(∂h(R,p^∘)/∂λ_x)(∂p(R,t,x)/∂λ_x)g(t,x)(u - μ(t,x)). That is, while we could compute the derivatives of H in (<ref>) numerically (which would require recomputing K, M, and R at least n times), we also have explicit expressions for the derivatives of H in terms of the derivatives of h and p. Moreover, these derivatives are zero if u = μ and M is not an endpoint. Unfortunately, the expression for dR/dt in the case R = M is prohibitively complex, so that case should be computed numerically. Note that if M(t,x) is continuously differentiable at x, then dM(t,x)/dt = (∂M(t,x)/∂λ_x)g(t,x)(u - μ(t,x)) has the same structure as (<ref>)-(<ref>), where ∂M/∂λ_x is instead computed numerically. One can also replace R - t in (<ref>) with the current safety metric h, as was done in <cit.>. This results in simpler derivative expressions than (<ref>)-(<ref>), but is less applicable to the satellite scenario for which this was intended.

§.§.§ Simulations

To demonstrate the advantages of the BECBF over conventional CBFs, we consider two case studies. Our first case study involves two cars passing through a four-way intersection. We assume that the cars are fixed in their lanes l_1:ℝ→ℝ^2 and l_2:ℝ→ℝ^2, respectively, with locations z_1∈ℝ and z_2∈ℝ along their lanes. Thus, the position of car 1 on the road is l_1(z_1)∈ℝ^2 and the position of car 2 is l_2(z_2)∈ℝ^2. For simplicity, we model the cars as double integrators: z̈_1 = u_1 and z̈_2 = u_2, resulting in the state vector x = [z_1; ż_1; z_2; ż_2] and control vector u = [u_1; u_2]. Suppose the cars nominally want to travel in their lanes at velocities v_1 and v_2, respectively, so the nominal control input is μ = [μ_1; μ_2], where μ_i(t,x) = k(v_i - ż_i) for some gain k>0. The path function is then p = [p_1; p_2], where p_i is the solution to (<ref>) under μ. The function p is computed explicitly here, though we note that numerical solutions to (<ref>) are also fine: p_i(τ;t,x) = [z_i + v_i(τ-t) + ((ż_i - v_i)/k)(1 - e^-k(τ-t)); v_i + (ż_i - v_i)e^-k(τ-t)]. Let the safety constraint be h = ρ - ‖l_1(z_1) - l_2(z_2)‖. This system meets all the assumptions of Theorem <ref>. The system also meets the assumptions of the formulas (<ref>)-(<ref>), except on the critical manifolds l_1(p_1^*) = l_2(p_2^*) (where h is not continuously differentiable) and K(t,x) = 0 (where (<ref>) switches cases). These manifolds are of Lebesgue measure zero and can be ignored, so it is straightforward to apply the CBF H in (<ref>) in a QP: u = argmin_u∈ℝ^2 ‖u - μ(t,x)‖^2 s.t. dH(t,x)/dt ≤ α(-H(t,x)). Note that this is a centralized control law, since u_1 and u_2 are computed simultaneously.
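For this example, the points of interest K, M, and R, and hence H = K - β(R - t), can be evaluated by sampling ϕ along the closed-form path function, as sketched below; the gain, horizon, radius, and nominal speeds are assumed values for illustration.

\begin{verbatim}
import numpy as np

K_GAIN, T_HOR, RHO = 1.0, 5.0, 3.0   # k, horizon T, radius rho (assumed)
V_NOM = (10.0, 10.0)                 # nominal lane speeds v_1, v_2 (assumed)

def path(tau, t, z, zdot, v):
    # closed-form p_i for the double integrator under mu_i = k (v_i - zdot_i)
    dt = tau - t
    return z + v * dt + (zdot - v) * (1.0 - np.exp(-K_GAIN * dt)) / K_GAIN

def points_of_interest(t, x, l1, l2, n=500):
    """Sampled K, M, R for h = rho - ||l1(z1) - l2(z2)|| (sketch).
    x = [z1, z1dot, z2, z2dot]; l1, l2: lane maps R -> R^2."""
    taus = np.linspace(t, t + T_HOR, n)
    phi = np.array([RHO - np.linalg.norm(
        l1(path(tau, t, x[0], x[1], V_NOM[0]))
        - l2(path(tau, t, x[2], x[3], V_NOM[1]))) for tau in taus])
    i_max = int(np.argmax(phi))
    K, M = phi[i_max], taus[i_max]
    R = taus[int(np.argmax(phi >= 0.0))] if K >= 0 else M  # first zero crossing
    return K, M, R
\end{verbatim}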
We then simulated two cars approaching an intersection with the control law (<ref>), with the same control law with an ECBF <cit.> in place of the BECBF, and with a nonlinear MPC control law. The results are best demonstrated by the video <https://youtu.be/0tVUAX6MCno>, and the safety function values are shown in Fig. <ref>. The BECBF and MPC cases performed generally similarly, with both cars approaching close and then continuing on opposite sides of the intersection. On the other hand, the ECBF caused both agents to come to a complete stop, so neither agent made it through the intersection. On average, the control computation time was 0.0011 s for the ECBF, 0.0061 s for the BECBF, and 0.40 s for the MPC approach, all in MATLAB on a 3.5 GHz CPU, though the code for all three cases could likely be further optimized. For more information and simulation code, see <cit.>.

Thus, the BECBF achieved similar performance to the nonlinear MPC with substantially reduced computation time. This was possible because the BECBF only performs one prediction of ϕ per control cycle and only varies the current control input, whereas MPC varies all the states and control inputs within a horizon. Compared to the ECBF, the QP with BECBF chooses the control input that most encourages safety at the moment where the two cars are closest together—in this case, that means one car decelerating and one car accelerating so that the cars are never too close together. By contrast, the QP with ECBF always chooses the control that most encourages safety at the present—in this case, that means applying a control input that is opposite the present direction of the other car (and constrained to be along the lanes l_1 and l_2), which is a deceleration for both cars. The differences between these directions for a static obstacle are illustrated in Fig. <ref>.

Next, we consider the case study of satellites in low Earth orbit with state x = [r; ṙ]∈ℝ^6 and dynamics r̈ = -μ r/‖r‖^3 + u. In this example, μ = 3.986(10)^14 m^3/s^2 is very large (much larger than the example in Section <ref>), so the uncontrolled term of r̈ will always be much larger than u and the system state will evolve rapidly. This means that 1) conventional approaches like the ECBF or Theorem <ref> will not work very well, and 2) a small control input will have a large effect on system trajectories over time.

Let h(t,x) = ρ - ‖r - r_o(t)‖ where ρ = 1000 and r_o is the location of an uncontrolled piece of debris. Let the nominal control law be μ = 0, so the satellite applies no control input unless u ≠ 0 is necessary to ensure safety. Our simulation scenario places the controlled satellite and the debris initially very far apart, but in orbits that eventually intersect if no control action is taken. Simulations under the BECBF, under an ECBF, and under no control action are shown in Figs. <ref>-<ref> and in the video <https://youtu.be/HhtWUG63BWY>. Note how the BECBF trajectory (blue) takes a small control action as soon as the unsafe prediction enters the horizon at t=367 s, and then is very similar to the nominal trajectory (red). On the other hand, the ECBF trajectory (green) takes control action much later, when over 10 times as much thrust is required. This avoidance problem could in theory also be solved with MPC, but would require a very fine discretization, because h > 0 occurs for only 0.14 seconds since the satellites are moving so fast. Thus, utilizing the same length of prediction horizon would require more than 10^4 samples, making the MPC problem intractable.
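The essential computation in this scenario is the ballistic prediction of ϕ(τ) = ρ − ‖r(τ) − r_o(τ)‖ over the horizon; a sketch is below. The horizon length, the grid, and the callable debris ephemeris r_debris are illustrative assumptions of the sketch; the value of MU is the standard gravitational parameter of the Earth.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
RHO = 1000.0           # keep-out radius rho used in the text

def two_body(t, s):
    # Ballistic coast (nominal input mu = 0): rddot = -MU r / ||r||^3.
    r, v = s[:3], s[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def predict_phi(t, x, r_debris, T=600.0, n=601):
    # Sample phi(tau) = RHO - ||r(tau) - r_o(tau)|| along the predicted coast.
    taus = np.linspace(t, t + T, n)
    sol = solve_ivp(two_body, (t, t + T), x, t_eval=taus, rtol=1e-9)
    deb = np.array([r_debris(s) for s in taus])
    return taus, RHO - np.linalg.norm(sol.y[:3].T - deb, axis=1)
```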
Thus, in addition to choosing a better control direction than the ECBF, the BECBF also provides a mechanism for tuning how early we want the system to detect and respond to predicted collisions. The BECBF is also able to take into account the future locations of time-varying obstacles with known paths rather than just their current positions. Thus, the BECBF can make use of paths like satellite orbits, and in the cars example, the BECBF can take into account whether each car intends to continue straight or turn. Note that both of these examples included fairly simple path functions, where the controlled agents were always in motion so that ϕ had a strictly negative Hessian, and thus K, M, and R were always well-defined and differentiable. In the future, we seek to consider more general paths that may challenge the regularity assumptions presently made.

§ PRACTICAL CHALLENGES: OUTPUT FEEDBACK CONTROL AND SAMPLED-DATA CONTROL WITH CONTROL BARRIER FUNCTIONS

§.§ Output Feedback Control

Synthesizing safe controllers for nonlinear systems using output feedback can be a challenging task, since observers and controllers designed independently of each other may not render the system safe. In our recent work <cit.> we present two observer-controller interconnections that ensure that the trajectories of a nonlinear system remain safe despite bounded disturbances on the system dynamics and partial state information. The first approach utilizes Input-to-State Stable observers, and the second uses Bounded Error observers. Using the stability and boundedness properties of the observation error, we construct novel Control Barrier Functions that impose inequality constraints on the control inputs which, when satisfied, certify safety. We propose quadratic program-based controllers to satisfy these constraints, and prove Lipschitz continuity of the derived controllers.

§.§.§ Tunable-Robust CBFs

We consider nonlinear control-affine systems of the form:

ẋ = f(x) + g(x) u + g_d(x) d(t), y = c(x) + c_d(x) v(t),

where x ∈𝒳⊂ℝ^n is the system state, u ∈𝒰⊂ℝ^m is the control input, y ∈ℝ^n_y is the measured output, d: ℝ^+ →ℝ^n_d is a disturbance on the system dynamics, and v: ℝ^+ →ℝ^n_v is the measurement disturbance. We assume d and v are piecewise continuous, bounded disturbances, with sup_t ‖d(t)‖_∞ = d̅ and ‖v(t)‖_∞ ≤ v̅ for some known d̅, v̅ < ∞. The functions f : 𝒳→ℝ^n, g: 𝒳→ℝ^n × m, c : 𝒳→ℝ^n_y, g_d : 𝒳→ℝ^n × n_d, and c_d : 𝒳→ℝ^n_y× n_v are all assumed to be locally Lipschitz continuous. Notice that g_d(x) d(t) accounts for either matched or unmatched disturbances. We seek to establish observer-controller interconnections of the form:

dx̂/dt = p(x̂, y) + q(x̂, y) u, u = π(t, x̂, y),

where p : 𝒳×ℝ^n_y→ℝ^n, q: 𝒳×ℝ^n_y→ℝ^n × m are locally Lipschitz in both arguments. The feedback controller π : ℝ^+ ×𝒳×ℝ^n_y →𝒰 is assumed piecewise-continuous in t and Lipschitz continuous in the other two arguments. Then, the closed-loop system formed by (<ref>, <ref>) is

ẋ = f(x) + g(x)u + g_d(x) d(t), dx̂/dt = p(x̂, y) + q(x̂, y) u, x(0) = x_0, x̂(0) = x̂_0,

where y and u are defined in (<ref>) and (<ref>) respectively. Under the stated assumptions, there exists an interval ℐ = ℐ(x_0, x̂_0) = [0, t_max(x_0, x̂_0)) over which solutions to the closed-loop system exist and are unique <cit.>. Safety is defined as the true state of the system remaining within a safe set, 𝒮⊂𝒳, for all times t ∈ℐ, where the safe set 𝒮 is defined as the super-level set of a continuously-differentiable function h: 𝒳→ℝ as in (<ref>). A state-feedback controller[In state-feedback the control input is determined from the true state, u = π(t, x). In estimate-feedback the input is determined from the state estimate and measurements, u = π(t, x̂, y).]
π : ℝ^+ ×𝒳→𝒰 renders system (<ref>) safe with respect to the set 𝒮 if, for the closed-loop dynamics ẋ = f(x) + g(x) π(t, x) + g_d(x) d(t), the set 𝒮 is forward invariant, i.e., x(0) ∈𝒮 ⟹ x(t) ∈𝒮 ∀ t ∈ℐ. In output-feedback we define safety as follows: An observer-controller pair (<ref>) renders system (<ref>) safe with respect to a set 𝒮⊂𝒳 from the initial condition sets 𝒳_0, 𝒳̂_0 ⊂𝒮 if for the closed-loop system (<ref>) it holds that x(0) ∈𝒳_0 and x̂(0) ∈𝒳̂_0 ⟹ x(t) ∈𝒮 ∀ t ∈ℐ.

Now, inspired by <cit.> and <cit.>, we define the following CBF to account for disturbances and measurement noise. A continuously differentiable function h : 𝒳→ℝ is a Tunable Robust CBF (TR-CBF) for system (<ref>) if there exists a class 𝒦 function α, and a continuous, non-increasing function κ: ℝ^+ →ℝ with κ(0) = 1, s.t.

sup_u ∈𝒰 [ L_fh(x) + L_gh(x) u + α(h(x)) ] ≥ κ(h(x)) ‖L_g_dh(x)‖ d̅, ∀ x ∈𝒮.

Examples include κ(r) = 1 and κ(r) = 2/(1 + exp(r)). Given a TR-CBF h for (<ref>), the set of safe control inputs is

K_trcbf(x) = { u ∈𝒰 : L_fh(x) + L_gh(x) u - κ(h(x)) ‖L_g_dh(x)‖ d̅ ≥ -α(h(x)) },

and a safe state-feedback controller is obtained by solving a QP, as in <cit.>. The main question is: Given a system (<ref>) with disturbances of known bounds ‖d(t)‖_∞ ≤ d̅, ‖v(t)‖_∞ ≤ v̅, and a safe set 𝒮 defined by (<ref>), synthesize an interconnected observer-controller (<ref>) and the initial condition sets 𝒳_0, 𝒳̂_0 to render the system safe.

§.§.§ Observer-Controller Interconnection

We review Approach 2 of our work in <cit.>, where we consider the class of Bounded-Error Observers: An observer (<ref>) is a Bounded-Error (BE) Observer, if there exists a bounded set 𝒟(x̂_0) ⊂𝒳 and a (potentially) time-varying bounded set 𝒫(t, x̂) ⊂𝒳 s.t. x_0 ∈𝒟(x̂_0) ⟹ x(t) ∈𝒫(t, x̂) ∀ t ∈ℐ. The idea is to find a common, safe input for all x ∈𝒫(t, x̂): For system (<ref>), suppose the observer (<ref>) is a Bounded-Error observer. Suppose the safe set 𝒮 is defined by a continuously differentiable function h : 𝒳→ℝ, where h is a Tunable Robust-CBF for the system. Suppose π: ℝ^+ ×𝒳→𝒰 is an estimate-feedback controller, piecewise-continuous in the first argument and Lipschitz continuous in the second, s.t. π(t, x̂) ∈⋂_x ∈𝒫(t, x̂) K_trcbf(x), where K_trcbf is defined in (<ref>). Then the observer-controller renders the system safe from the initial-condition sets x(0) ∈𝒳_0 = 𝒟(x̂_0) and x̂_0 ∈𝒳̂_0 = {x̂_0 : 𝒫(0, x̂_0) ⊂𝒮}.

In general, designing a controller satisfying (<ref>) can be difficult. Under certain assumptions, one can formulate a Quadratic Program that defines a controller meeting the desired properties. In the interest of space, we refer the reader to <cit.> for the details, and we directly present some experimental results of the safe observer-controller interconnection. The objective for a quadrotor is to fly in a “figure of eight” trajectory, but to not crash into a physical barrier placed at x=0.5 meters. An Extended Kalman Filter is used as the bounded error observer. To design the controller, first π_des(t, x̂) is computed using an LQR controller, which computes desired accelerations with respect to an inertial frame to track the desired trajectory. This command is filtered using a safety-critical QP, either the baseline CBF-QP (Figure <ref>a) or the proposed QP using Approach 2 (Figure <ref>c). The trajectories from the two flight controllers are compared in Figure <ref>. In the baseline controller, the quadrotor slows down as it approaches the barrier, but still crashes into the barrier. In the proposed controller, the quadrotor remains safe, Figure <ref>e.
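A compact way to approximate the estimate-feedback condition π(t,x̂) ∈ ⋂_{x∈𝒫(t,x̂)} K_trcbf(x) is to impose the TR-CBF half-space constraint at sampled points of 𝒫(t,x̂) and filter the nominal input through them. The sketch below assumes user-supplied callables for h and its Lie derivatives; the sequential projections are a heuristic surrogate for the full QP of the paper, exact only when all constraints share one direction.

```python
import numpy as np

def trcbf_halfspace(x, h, Lfh, Lgh, Lgdh, dbar,
                    alpha=lambda r: r, kappa=lambda r: 2.0 / (1.0 + np.exp(r))):
    # TR-CBF constraint rewritten as a.u <= b:
    # -Lgh(x).u <= Lfh(x) + alpha(h(x)) - kappa(h(x)) * ||Lgdh(x)|| * dbar
    a = -Lgh(x)
    b = Lfh(x) + alpha(h(x)) - kappa(h(x)) * np.linalg.norm(Lgdh(x)) * dbar
    return a, b

def filter_input(mu, points, **maps):
    # Impose the constraint at every sampled x in P(t, xhat).
    u = np.array(mu, dtype=float)
    for x in points:
        a, b = trcbf_halfspace(x, **maps)
        viol = a @ u - b
        if viol > 0:
            u = u - (viol / (a @ a)) * a
    return u
```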
§.§ Zero-Order Hold (ZOH) Control

In this section, we consider one of the major challenges that continuous-time CBF-based controllers (such as those derived in the previous sections) face in practice, namely that physical systems evolve in continuous time, under control inputs that are implemented at discrete time instants, such as zero-order-hold (ZOH) controllers with a fixed time-step. One can easily construct counter-examples showing that the control laws developed from the CBF condition in <cit.> are no longer safe when the controller is executed in discrete steps. On the other hand, a controller implemented under discrete-time CBFs <cit.> may not satisfy the continuous safety condition between time steps <cit.>. In our paper <cit.> we study conditions for forward invariance of safe sets under ZOH controllers. We define two types of margins, the controller margin and the physical margin, to compare the conservatism of the conditions developed. In <cit.>, we present extensions to the approaches in <cit.> that reduce conservatism as measured by these margins, while similarly relying on proving that the continuous-time CBF condition is always satisfied. We also present a novel condition inspired instead by discrete-time set invariance conditions, and compare the conservatism of all the approaches studied using the above margins. For brevity, we only present the prior state-of-the-art and this last approach here, and we refer the reader to <cit.> for details about the other approaches. We also build upon the following approach further in <cit.>.

§.§.§ Problem Formulation

We consider the system

ẋ = f(x) + g(x) u,

with state x ∈ℝ^n, control input u∈ U ⊂ℝ^m where U is compact, and locally Lipschitz continuous functions f: ℝ^n →ℝ^n and g: ℝ^n →ℝ^n × m. Define u_max ≜ max_u∈ U ||u||. Let h: ℝ^n →ℝ where h ∈ C^1_loc, and define a safe set S as

S ≜ { x ∈ℝ^n | h(x) ≤ 0 }.

For a continuous control law u(x), Theorem <ref> (with the adjusted sign of the CBF) can be used for guaranteeing safety of dynamical systems. To apply Theorem <ref>, we must ensure (<ref>) (with the inequality reversed) is satisfied along x(t) for all t ≥ 0. However, suppose instead that the state x is only measured discretely (and thus the control policy u(x) is updated in a discrete fashion too) at times t_k = kT, k=0,1,2,⋯ for a fixed time-step T∈ℝ_>0. Consider a ZOH control law[Under u as in (<ref>) for a compact set U, uniqueness of the maximal closed-loop solution x(t) (and hence x_k) is guaranteed by <cit.>.]

u(t) = u_k, ∀ t ∈ [t_k, t_k+1),

where u_k = u_k(x_k)∈ U and x_k = x(t_k), ∀ k∈ℕ. Satisfaction of (<ref>) only discretely is not sufficient for safety per Theorem <ref>. Thus, we seek a condition similar to (<ref>) under which safety can be guaranteed when the control input is updated only at discrete times. We consider the following problem. Design a function ϕ:ℝ_>0×ℝ^n →ℝ such that any bounded, piecewise-constant control input u∈ U of the form (<ref>) satisfying

L_f h(x_k) + L_g h(x_k) u_k ≤ ϕ(T, x_k),

at the sampled states x_k=x(kT), k∈ℕ renders S forward invariant along the closed-loop trajectories of (<ref>). We call (<ref>) the ZOH-CBF condition. The following result, adapted from <cit.>, provides one form of the function ϕ that solves Problem <ref> (see also <cit.>). Let the set S in (<ref>) be compact and α∈𝒦 be locally Lipschitz continuous. Let l_L_fh, l_L_gh, l_α(h) be the Lipschitz constants of L_fh, L_gh, α(-h), respectively.
Then the function ϕ_0^g:ℝ_>0×ℝ^n →ℝ, defined as

ϕ_0^g(T,x) ≜ α(-h(x)) - (l_1 Δ/l_2)( e^{l_2 T} - 1 ),

solves Problem <ref>, where l_1 = l_L_f h + l_L_g h u_max + l_α(h), l_2 = l_L_f h + l_L_g h u_max, and Δ = sup_x∈ S, u∈ U ||f(x) + g(x)u||. In practice, the form of the function ϕ_0^g in (<ref>) is conservative in the sense that many safe trajectories may fail to satisfy (<ref>) for ϕ=ϕ_0^g. To overcome this limitation, we define two metrics to quantify the conservatism of the solutions to Problem <ref> and then develop novel solutions to Problem <ref> that are less conservative compared to (<ref>).

§.§.§ Comparison Metrics

In this work, we consider functions ϕ of the form:

ϕ(T,x) = α(-h(x)) - ν(T,x),

where α is a class-𝒦 function that vanishes as h(x)→ 0, and ν:ℝ_>0×ℝ^n→ℝ is a function of the discretization time-step T and the state x that does not explicitly depend on h. This motivates our first metric of comparison, defined as follows. The function ν in (<ref>) is called the controller margin. Note that ν is the difference between the right-hand sides of conditions (<ref>) and (<ref>), and is a bound on the discretization error that could occur between time steps. At a given state x ∈ S, a larger controller margin will necessitate a larger control input to satisfy (<ref>) (hence, the name “controller margin”). A sufficiently large controller margin might also necessitate inadmissible control inputs, and thus make a CBF no longer applicable to a system. Thus, it is desired to design functions ϕ whose controller margins are small. For a given T, we call a solution ϕ_a less conservative than ϕ_b if the controller margins of ϕ_a and ϕ_b satisfy ν_a(T,x) ≤ ν_b(T,x), ∀ x ∈ S. The controller margin is called local (denoted as ν^l(T,x)) if ν varies with x, and global (denoted as ν^g(T)) if ν is independent of x. The superscripts l and g, respectively, denote the corresponding cases, and ν is denoted with the same sub/superscripts as the corresponding ϕ function. For instance,

ν_0^g(T) = (l_1 Δ/l_2)(e^{l_2 T} - 1)

is the controller margin of ϕ_0^g defined in (<ref>), and is a global margin because it is independent of x.

Note that condition (<ref>) imposes that the time derivative of h vanishes as h approaches the boundary of the safe set. In contrast, the ZOH-CBF condition (<ref>) causes the time derivative of h to vanish at a manifold in the interior of the safe set. Inspired by this, we define a second metric of comparison, which captures the maximum distance between this manifold and the boundary of the safe set. For a solution ϕ of Problem <ref> with the form (<ref>), the physical margin is the function δ:ℝ_>0→ℝ defined as

δ(T) ≜ sup_{x∈ S | ϕ(T,x) = 0} (-h(x)).

Intuitively, δ quantifies the effective shrinkage of the safe set due to the error introduced by discrete sampling. The condition (<ref>) may exclude closed-loop trajectories from entering the set S_δ = {x | -δ ≤ h(x) ≤ 0}, while the condition (<ref>) does not. Thus, a smaller physical margin δ implies a smaller subset S_δ of the safe set where system trajectories may not be allowed to enter. In our paper <cit.> we develop three solutions to Problem <ref> that have lower controller and/or physical margins than ϕ_0^g, in both local and global forms, which follow from either continuous-time CBF conditions such as (<ref>), or discrete-time CBF conditions <cit.>.
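The exponential dependence of ν_0^g on the time step is the source of the conservatism discussed above; the following snippet, with illustrative constants that are not taken from the paper, makes this visible.

```python
import numpy as np

def nu0_global(T, l_Lfh, l_Lgh, l_alpha, u_max, Delta):
    # nu_0^g(T) = (l1 * Delta / l2) * (exp(l2 * T) - 1), with
    # l1 = l_Lfh + l_Lgh * u_max + l_alpha and l2 = l_Lfh + l_Lgh * u_max.
    l2 = l_Lfh + l_Lgh * u_max
    l1 = l2 + l_alpha
    return (l1 * Delta / l2) * (np.exp(l2 * T) - 1.0)

# The margin grows rapidly with T, so the ZOH-CBF condition becomes
# hard (or impossible) to satisfy at coarse sampling rates.
for T in (0.01, 0.05, 0.1):
    print(T, nu0_global(T, l_Lfh=2.0, l_Lgh=1.0, l_alpha=1.0,
                        u_max=1.0, Delta=5.0))
```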
In the interest of space, in this tutorial paper, we will refer to only one of the results and interested readers are referred to <cit.> for a thorough analysis and comparison among all three methods and their relation to the state-of-the-art.

§.§.§ A Less Conservative Methodology

Rather than choosing ϕ so as to enforce (<ref>) between sample times, as is done in <cit.>, here we start from a discrete-time CBF condition and apply it to an approximation of the continuous-time dynamics. One sufficient discrete-time CBF condition, as shown in <cit.>, is

h(x_k+1) - h(x_k) ≤ -γ h(x_k), ∀ k ∈ℕ,

for some γ∈ (0,1]. In general, this condition is not control-affine. However, its linear approximation is control-affine and thus amenable to inclusion in a QP. The error of a linear approximation of a twice differentiable function is bounded by the function's second derivative. For brevity, define

ψ(x,u) ≜ ∇[ḣ(x)] (f(x) + g(x)u),

which represents the second derivative of h between time steps. Since f, g, ∇[h] are assumed locally Lipschitz, ψ is defined almost everywhere. Let ℛ(x,T) denote the set of states reachable from some x(0)∈ S in times t∈[0, T). Define the bound

η(T,x) ≜ max{ ( sup_z∈ℛ(x,T)∖𝒵, u∈ U ψ(z,u) ), 0 },

where 𝒵 is any set of Lebesgue measure zero (to account for CBFs that are not twice differentiable everywhere). A solution to Problem <ref> is then as follows. The function ϕ_3^l:ℝ_>0×ℝ^n→ℝ, defined as

ϕ_3^l(T,x) ≜ -(γ/T) h(x) - (T/2) η(T,x),

with local controller margin ν_3^l(T,x) ≜ (T/2) η(T,x), solves Problem <ref>, for any γ∈ (0,1]. In <cit.>, we provide a detailed discussion and proofs on how this method is less conservative as compared to (<ref>) and to the other methods derived. Here, we only demonstrate this by simulation. Finally, if one wishes to instead use a global margin to avoid needing to compute ℛ, the function ϕ_3^g:ℝ_>0×ℝ^n→ℝ as follows also solves Problem <ref>:

ϕ_3^g(T,x) ≜ -(γ/T) h(x) - (T/2) sup_z∈ S η(T,z),

with global controller margin ν_3^g(T) ≜ (T/2) sup_z∈ S η(T,z). See also <cit.> for extensions of (<ref>).
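A hedged sketch of the resulting ZOH-CBF test: the margin is precomputed (offline for the global form), and a control input is accepted if the linearized discrete-time condition holds at the sample. The value of eta and the Lie-derivative inputs are assumptions supplied by the user.

```python
import numpy as np

def phi3(T, h_x, gamma=1.0, eta=0.0):
    # phi_3(T, x) = -(gamma / T) h(x) - (T / 2) eta, where eta is either
    # the local bound eta(T, x) or its supremum over S (global form).
    return -(gamma / T) * h_x - 0.5 * T * eta

def zoh_cbf_ok(u, Lfh_x, Lgh_x, T, h_x, **kw):
    # ZOH-CBF condition: L_f h(x_k) + L_g h(x_k) u_k <= phi(T, x_k).
    return Lfh_x + float(np.dot(Lgh_x, u)) <= phi3(T, h_x, **kw)
```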
§.§.§ Simulation Results

We present a case study involving a robotic agent modeled as the unicycle system

ẋ_1 = u_1 cos(x_3), ẋ_2 = u_1 sin(x_3), ẋ_3 = u_2,

where [x_1, x_2]^T is the position, x_3 is the orientation, and u_1, u_2 are the linear and angular velocity of the agent; its task is to move around an obstacle at the origin using the CBF <cit.>

h = ρ - √(x_1^2 + x_2^2 - (_π(x_3 - σ arctan2(x_2, x_1)))^2),

where ρ is the radius to be avoided, and σ is a shape parameter. We choose T = 0.1, α(λ) = λ for ϕ_0^g, and γ = 1 for ϕ_3^l, ϕ_3^g. Other notable parameters are listed in <cit.>. The agent moves under the following controller

u = argmin_{u∈ U} ‖u - u_nom‖ s.t. (<ref>) is satisfied,

where u_nom is a nominal control law that ignores the obstacle. Note that the local margin (<ref>) requires ν_3^l to be computed online (or computed offline and stored in a function); this computation took 0.018 seconds per control cycle on a 3.5 GHz computer using MATLAB R2019b. The global margin (<ref>) only requires ν_3^g to be computed once offline, which took under a minute. In total, 7 solutions (ϕ_0^g, ϕ_1^g, ϕ_1^l, ϕ_2^g, ϕ_2^l, ϕ_3^g, ϕ_3^l) to Problem <ref> were tested; simulation code and details on the other methods ϕ_1^l, ϕ_1^g, ϕ_2^l, ϕ_2^g can be found in <cit.>. The trajectories of the unicycle around an obstacle are plotted in Fig. <ref>, where the green marker is the target location. Only four methods are shown because using ϕ_0^g, ϕ_1^g, ϕ_1^l resulted in the agent turning away from the target due to excessive conservatism.

The instantaneously required controller margins ν for all 7 methods, computed for x(t) along the ϕ_3^l trajectories from Fig. <ref>, are plotted in the top plot in Fig. <ref>. Note the logarithmic scale; the prior work ν_0^g is omitted because ν_0^g = 1.3(10)^50, indicating that this method cannot be used at the chosen time step of T = 0.1. Next, consider the physical margins. As (<ref>) includes a supremum over S, the physical margin is inherently a global quantity. Physical margins for the prior work ϕ_0^g and the three new global margins ϕ_1^g, ϕ_2^g, and ϕ_3^g are computed in Table <ref>. Note how the physical margin for ϕ_3^g decreases quadratically with T, whereas the other methods decrease linearly with T; the reasoning for this is elaborated upon in <cit.>. Finally, the CBF values during the simulations from Fig. <ref> are also shown in Fig. <ref>. The peaks of the dashed lines agree well with the theoretical values in the first column of Table <ref>. Noting these physical margins, we added a second constraint that forced the unicycle to navigate through a narrow corridor only 0.3 units wide, shown in Fig. <ref>. The unicycle operating under ϕ_3^g or ϕ_3^l makes it through the obstacles, while the best of the other methods (ϕ_2^l) could not.

§ ADVERSARIALLY-ROBUST CONTROL BARRIER FUNCTIONS

We present applications of CBFs to the control of multi-agent/multi-robot systems in the presence of adversaries. We first review our results on Adversarially-Robust CBFs, which provide safety conditions for sampled-data distributed control when agents are behaving adversarially. Then we present an application of the method to resilient control against adversarial (chasing) agents.

§.§ Sampled-Data Distributed Control

The idea of assuring forward invariance of a set under sampled-data implementation can be utilized and extended to control of sampled-data multi-agent systems. In our work <cit.> we consider a class of functions describing safe sets that have high relative degree with respect to (w.r.t.) the system dynamics, where the control inputs of the agents do not appear for one or more time derivatives of the safe-set function. We also consider asynchronous sampling times with clock disturbances and the presence of adversarially-behaving agents. Our goal is to establish a framework under which a set of normally-behaving agents in a system with sampled-data dynamics can collaboratively render a safe set forward invariant despite the actions of adversarial agents. Our analysis considers asynchronous sampling times and distributed calculation of agents' control inputs. More specifically, we consider a group of N ∈ℤ_>0 agents, with the set of agents denoted by 𝒱 and each agent indexed {1,…,N}. Each agent i ∈𝒱 has the state x_i ∈ℝ^n_i, n_i ∈ℤ_>0 and input u_i ∈ℝ^m_i, m_i ∈ℤ_>0. The system state and input vectors x⃗ and u⃗ concatenate all agents' states and inputs, respectively: x⃗ = [x_1^T,…, x_N^T]^T ∈ℝ^n̅ and u⃗ = [u_1^T,…, u_N^T]^T ∈ℝ^m̅, with n̅ = ∑_i=1^N n_i, m̅ = ∑_i=1^N m_i. Agents receive knowledge of the system state x⃗ in a sampled-data fashion; i.e., each agent i ∈𝒱 has knowledge of x⃗(·) only at times 𝒯_i = {t_i^0, t_i^1, t_i^2, …}, where t_i^k represents agent i's kth sampling time, with t_i^k+1 > t_i^k ∀ k ∈ℤ_≥ 0. The sampling times are such that there exists ϵ_𝒯 > 0 such that t_i^k+1 - t_i^k ≥ ϵ_𝒯 ∀ k ≥ 0, ∀ i ∈𝒱.
In addition, at each t_i^k ∈𝒯_i the agent i applies a zero-order hold (ZOH) control input u_i(t_i^k) that is constant on the time interval t ∈ [t_i^k, t_i^k+1). For brevity, we denote x_i^k_i = x_i(t_i^k) and u_i^k_i = u_i(t_i^k). The sampled-data dynamics of each agent i ∈𝒱 under its ZOH controller on each interval t ∈ [t_i^k, t_i^k+1) is as follows:

ẋ_i(t) = f_i(x_i(t)) + g_i(x_i(t)) u_i(t_i^k) + ϕ_i(t).

The functions f_i, g_i may differ among agents, but are all locally Lipschitz on their respective domains ℝ^n_i. The functions ϕ_i: ℝ→ℝ^n_i, i ∈𝒱, are locally Lipschitz in t and model disturbances to the system (<ref>). Each ϕ_i is bounded as per the following assumption: For all i ∈𝒱, the disturbances ϕ_i(t) satisfy ‖ϕ_i(t)‖ ≤ ϕ_i^max ∈ℝ_≥ 0, ∀ t ≥ 0.

Safety of the multi-agent system can be collaboratively preserved by defining a multi-agent CBF h as

S = {x⃗∈ℝ^n̅ : h(x⃗) ≤ 0}, ∂ S = {x⃗∈ℝ^n̅ : h(x⃗) = 0}, int(S) = {x⃗∈ℝ^n̅ : h(x⃗) < 0}.

Forward invariance of S can be guaranteed by satisfying the condition

∑_i ∈𝒱 [ L_f_i h^x_i(x⃗) + L_g_i h^x_i(x⃗) u_i + L_ϕ_i h^x_i(x⃗) ] ≤ -α(h(x⃗)),

for a class-𝒦_∞ function α, which follows from a comparison result <cit.>. Prior literature assumed the cooperation of all agents to collaboratively ensure the satisfaction of the safety condition for forward invariance of S. Our work dropped this assumption and considered the presence of a subset of adversarial agents 𝒜⊂𝒱 that apply the following control input for all sampling times t_j^k, k ∈ℤ_≥ 0, j ∈𝒜:

u_j^max(x⃗^k_j) = argmax_{u ∈𝒰_j} [ L_f_j h^x_j(x⃗^k_j) + L_g_j h^x_j(x⃗^k_j) u ].

Agents that are not adversarial are called normal. The set of normal agents is denoted 𝒩 = 𝒱∖𝒜. Dividing the left-hand side (LHS) of (<ref>) into normal and adversarial parts yields the following sufficient condition for set invariance in the presence of adversaries:

∑_j ∈𝒜 [ L_f_j h^x_j(x⃗) + L_g_j h^x_j(x⃗) u_j^max + L_ϕ_j h^x_j(x⃗) ] + ∑_i ∈𝒩 [ L_f_i h^x_i(x⃗) + L_g_i h^x_i(x⃗) u_i + L_ϕ_i h^x_i(x⃗) ] ≤ -α(h(x⃗)).

The form of (<ref>) reflects sampled-data adversarial agents seeking to violate the set invariance condition in (<ref>) by maximizing their individual contributions to the LHS sum. The normal agents must compute control inputs that render the set S forward invariant using the sufficient condition in (<ref>) despite the worst-case behavior of the adversarial agents in 𝒜. In addition to the ZOH sampled-data dynamics and adversarial actions, the normal agents must take into account asynchronous sampling times. The assumption of identical, synchronous sampling times typically does not hold in practice. Also, a distributed system may not have access to a centralized entity to solve a QP computing control inputs for all normal agents. Finally, safety may be defined in terms of CBFs which may have high relative degree with respect to the agents' dynamics. It is therefore necessary to consider heterogeneous sampling times, distributed methods for computing local control inputs, and CBFs with high relative degree.
In consideration of these challenges, our work <cit.> defined the following safety-preserving control set for each normal agent i ∈𝒩:

K_i^ψ(x⃗^k_i) = { u_i ∈𝒰_i : ψ_q^d(x⃗^k_i) ≤ 0 } = { u_i ∈𝒰_i : L_f_i (ψ_q-1^d)^x_i(x⃗^k_i) + L_g_i (ψ_q-1^d)^x_i(x⃗^k_i) u_i + ∑_l ∈𝒩∖{i} [ L_f_l (ψ_q-1^d)^x_l(x⃗^k_i) + L_g_l (ψ_q-1^d)^x_l(x⃗^k_i) û_l^k_i ] + ∑_j ∈𝒜 γ_j^max(x⃗^k_i) + α_q(ψ_q-1^d(x⃗^k_i)) + η'(Γ_i + δ^max) ≤ 0 }.

In this equation, the functions (ψ_q-1^d)^x_i for each agent i ∈𝒱 each represent the last of a series of functions ψ_i+1 ≜ ψ̇_i(x⃗) + α_i+1(ψ_i(x⃗)) typically defined to account for CBFs with high relative degree <cit.>. The term γ_j^max is a function describing the worst-case adversarial behavior with respect to the CBF safety condition:

γ_j^max(x⃗) = max_{u_j ∈𝒰_j} [ L_f_j h^x_j(x⃗) + L_g_j h^x_j(x⃗) u_j ].

The function η' in (<ref>) accounts for the evolution of system dynamics between sampling times and heterogeneous sampling instances between agents, and is defined as

η'(Γ) = c_f' + c_g' u_max + c_α' + c_γ̂'(Γ).

The variables Γ_i and δ^max in (<ref>) account for heterogeneous nominal sampling periods and an upper bound on disturbances to the nominal sampling times, respectively. For each normal agent i ∈𝒩, applying a control input u_i^k_i ∈ K_i^ψ(x⃗^k_i) guarantees that the trajectory of the combined normal agents' states remains within the safe set for all times within the sampling interval t ∈ [t_i^k, t_i^k+1): Consider the system (<ref>) with sampling times described by 𝒯_i = {t_i^0, t_i^1, …} s.t. t_i^k+1 - t_i^k = Γ_i + δ_i(k), ∀ k ∈ℤ_≥ 0. If at sampling time t_i^k for k ≥ 0, i ∈𝒩 it holds that x⃗^k_i ∈ S, then for any u_i^k_i ∈ K_i^ψ(x⃗^k_i) the trajectory x⃗(t) satisfies x⃗(t) ∈ S for all t ∈ [t_i^k, t_i^k+1). The safe control inputs u_i^k_i ∈ K_i^ψ(x⃗^k_i) can be computed for each normal agent i in a distributed manner using a convex quadratic programming formulation (a sketch of this per-agent computation follows below):

u_i(x⃗^k_i) = argmin_{u_i ∈𝒰_i} ‖u_i - u_i,nom^k_i‖_2^2 s.t. L_f_i h^x_i(x⃗^k_i) + L_g_i h^x_i(x⃗^k_i) u_i + ∑_l ∈𝒩∖{i} [ L_f_l h^x_l(x⃗^k_i) + L_g_l h^x_l(x⃗^k_i) û_l^k_i ] + ∑_j ∈𝒜 γ_j^max(x⃗^k_i) + α(h(x⃗^k_i)) + η(Γ_i + δ^max) ≤ 0,

where u_i,nom^k_i denotes agent i's nominal input. Further details about these results can be found in <cit.>.
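A minimal sketch of one normal agent's computation, under assumed interfaces for the Lie derivatives and box input sets: the worst-case adversarial term γ_j^max is evaluated in closed form, and the single affine constraint makes the QP an explicit projection. All names and interfaces here are illustrative.

```python
import numpy as np

def gamma_max(Lfh_j, Lgh_j, u_lo, u_hi):
    # Worst case of the affine map u -> Lfh_j + Lgh_j . u over the box
    # [u_lo, u_hi]; the maximum is attained componentwise at a vertex.
    return Lfh_j + np.sum(np.where(Lgh_j > 0, Lgh_j * u_hi, Lgh_j * u_lo))

def normal_agent_qp(u_nom, Lfh_i, Lgh_i, normal_sum, adv_sum, alpha_h, eta):
    # min ||u - u_nom||^2 s.t. Lfh_i + Lgh_i.u + normal_sum + adv_sum
    #                           + alpha_h + eta <= 0,
    # where normal_sum collects the assumed inputs u_hat of the other
    # normal agents and adv_sum the gamma_max terms of the adversaries.
    b = -(Lfh_i + normal_sum + adv_sum + alpha_h + eta)
    viol = Lgh_i @ u_nom - b
    if viol <= 0:
        return u_nom
    return u_nom - (viol / (Lgh_i @ Lgh_i)) * Lgh_i
```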
§ NON-SMOOTH CONTROL

Many forward invariance results in prior literature on CBFs assume that the agent's control input is Lipschitz continuous. At the same time, the majority of CBF results rely upon computing safe control inputs via a parametric convex quadratic program (QP). Demonstrating the Lipschitz continuity of a parametric QP is nontrivial in general. Our work in <cit.> addresses a simple question: Can forward invariance be assured for discontinuous control inputs? Our results in <cit.> answer this in the affirmative. Unlike prior literature, our work approaches the problem using the notions of Clarke tangent cones and transversality. We demonstrate that a constrained control input simultaneously rendering these subsets invariant can be generated by simply solving a feasibility problem with compact linear constraints. The control input is only required to be Lebesgue measurable and is not required to be continuous. This work considers control affine systems in the form

ẋ(t) = f(x(t)) + g(x(t))u(t), u(t)∈𝒰⊂ℝ^m ∀ t ≥ t_0,

where the functions f: ℝ^n →ℝ^n and g : ℝ^n →ℝ^n × m are assumed to be locally Lipschitz on ℝ^n and the set 𝒰 is a compact, convex polytope with int(𝒰) ≠∅ which has the form

𝒰 = {u ∈ℝ^m : A_u u ≤ b_u }, A_u∈ℝ^p × m, b_u ∈ℝ^p × 1,

where A_u, b_u are constant. Our analysis requires the notion of strict CBFs, which are defined as follows: The continuously differentiable function h : ℝ^n →ℝ is called a strict CBF for the set S ⊂ℝ^n defined as S = {x | h(x)≤ 0} if the following holds:

inf_u ∈𝒰 [ L_f h(x) + L_g h(x) u ] < 0 ∀ x ∈∂ S,

where f, g are defined as in (<ref>). Our work approaches the problem by designing a differential inclusion of the form

G(x) = {f(x) + g(x)u : u ∈ K(x) },

where the set-valued map K : ℝ^n →𝒫(ℝ^m) satisfies K(x) ⊆𝒰 for all x ∈ℝ^n. When a single safe set is being considered, K is defined as

K(x) = { u ∈ℝ^m : [A_S(x); A_u] u ≤ [b_S(x); b_u] },

where [·; ·] denotes row-wise stacking. To ensure that the set-valued maps are locally Lipschitz, we consider a γ-contraction of K and G defined as

K_γ(x) = int(K(x)) - γ B(0,1) = {u ∈ K(x) : d_K^c(u) ≥ γ}, with K^c = ℝ^m ∖ K(x),
G_γ(x) = { v ∈ℝ^n : v = f(x) + g(x) u, u ∈ K_γ(x) }.

The following result demonstrates conditions under which forward invariance of a safe set under a discontinuous control input can be guaranteed. Consider the system ẋ(t) ∈ G_γ(x(t)). Let S be a safe set for some strict control barrier function h. Let x(·) be any trajectory of (<ref>) under a Lebesgue measurable control input u(·) with x_0 = x(0) ∈ int(S ∩Ω). Let [0,T(x_0)) be the (possibly empty) maximal interval such that x(t) ∈ int(Ω) for all t ∈ [0,T(x_0)). Then x(t) ∈ S for all t ∈ [0,T(x_0)).

It is common to consider multiple safe sets S_i = {x | h_i(x)≤ 0} simultaneously. In other words, we seek to render the composed set S_I = ⋂_i=1^N_h S_i strongly invariant. Towards this end we define the set-valued map

K̂(x) = { u ∈ℝ^m : [Â_S(x); A_u] u ≤ [b̂_S(x); b_u] }, Â_S : ℝ^n →ℝ^q × m, b̂_S : ℝ^n →ℝ^q,

where Â_S, b̂_S are defined as

Â_S(x) = [L_g h_1(x); ⋮; L_g h_N_h(x)], b̂_S(x) = [-α_1(h_1(x)) - L_f h_1(x); ⋮; -α_N_h(h_N_h(x)) - L_f h_N_h(x)].

We similarly define the γ̂-contractions K̂_γ̂(x) = int(K̂(x)) - γ̂ B(0,1) and Ĝ_γ̂(x) = { v ∈ℝ^n : v = f(x) + g(x) u, u ∈ K̂_γ̂(x) }. The following result establishes conditions under which forward invariance of the composed set S_I holds under possibly discontinuous control inputs. Consider the system ẋ(t) ∈ Ĝ_γ̂(x(t)). Consider the set S_I = ⋂_i = 1^N_h S_i and suppose that the transversality condition holds for the pair (S_i, S_j) for all i, j∈ I(x). Let x(·) be any trajectory of (<ref>) under a Lebesgue measurable control input u(·) with x_0 = x(0) ∈ int(S_I ∩Ω). Let [0,T(x_0)) be the (possibly empty) maximal interval such that x(t) ∈ int(Ω) for all t ∈ [0,T(x_0)). Then x(t) ∈ S_I for all t ∈ [0,T(x_0)). A full exposition of the details can be found in <cit.>.
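Since all constraints are affine, selecting a safe input reduces to a feasibility problem over the (contracted) polytope; a sketch under stated assumptions is below. Tightening each row by γ‖a_i‖ is one concrete way to realize the γ-contraction of a polytope by a Euclidean ball, and the zero objective emphasizes that no QP, and hence no continuity of the selection, is required.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_safe_input(A_S, b_S, A_u, b_u, gamma=0.0):
    # Stack state-dependent CBF rows on top of the input-polytope rows,
    # then shrink the polytope by gamma * ||row|| to emulate K_gamma(x).
    A = np.vstack([A_S, A_u])
    b = np.concatenate([b_S, b_u]) - gamma * np.linalg.norm(A, axis=1)
    m = A.shape[1]
    res = linprog(c=np.zeros(m), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * m)   # pure feasibility problem
    return res.x if res.success else None
```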
§ RECENT EXTENSIONS AND ONGOING/FUTURE WORK

This tutorial paper presented only a small fraction of the rich literature on the theory and applications of Control Barrier Functions, focusing on challenges such as safety under spatiotemporal and input constraints, safety constraints of higher relative degree, robustness to disturbances/noise, performance via prediction and online parameter adaptation, safety under sampled-data implementation and output feedback control, safety against adversarial inputs, and safety under non-smooth inputs. Many open problems remain, including how to scale the results to multi-agent systems without adding conservatism, how to adapt online to unknown nonlinear disturbances, and how to systematically define CBFs that guide certifiable safe learning and exploration.

Notable recent extensions of the authors' work include: how adversarially-robust CBFs can be used for the detection and mitigation of adversarial agent effects <cit.>; how CBFs can be used in case of actuator failures and/or cyber attacks on actuators <cit.>; how CBFs ensure safety when an autonomous agent is exploring an unknown environment to maximize clarity (or minimize uncertainty) about the environment <cit.>; how one can define CBFs and CLFs for systems with impulsive actuators and dwell time constraints <cit.>; how to synthesize safe controllers using Koopman-based identification of nonlinear models <cit.>; how to incorporate risk awareness for the safety verification of stochastic systems <cit.>; how to define multi-rate architectures for safety verification across the planning and control layers of differentially-flat systems <cit.>; and how learning-based methods can be used for safety of large-scale MAS using distributed CBFs <cit.>. Ongoing work includes the investigation of methods for computing safety certificates/CBFs online with provable guarantees, of methods that enable adaptable resilience against adversaries, and of methods that guide the exploration of multiple robots in unknown constrained environments.

§ ACKNOWLEDGEMENTS

The authors would like to acknowledge the support of the (1) Air Force Office of Scientific Research (AFOSR) under award number FA9550-17-1-0284; (2) National Science Foundation (NSF) under the award numbers 1942907, 1931982 and the Graduate Research Fellowship Program; (3) Office of Naval Research (ONR) under grant number N00014-20-1-2395. The views and conclusions contained herein are those of the authors only and should not be interpreted as representing those of ONR, the U.S. Navy or the U.S. Government; (4) Automotive Research Center (ARC) in accordance with Cooperative Agreement W56HZV-19-2-0001 U.S. Army CCDC Ground Vehicle Systems Center (GVSC) Warren, MI; (5) François Xavier Bagnoud Foundation.
http://arxiv.org/abs/2312.16719v1
{ "authors": [ "Kunal Garg", "James Usevitch", "Joseph Breeden", "Mitchell Black", "Devansh Agrawal", "Hardik Parwana", "Dimitra Panagou" ], "categories": [ "math.OC", "cs.SY", "eess.SY" ], "primary_category": "math.OC", "published": "20231227210846", "title": "Advances in the Theory of Control Barrier Functions: Addressing Practical Challenges in Safe Control Synthesis for Autonomous and Robotic Systems" }
Fractional-statistics-induced entanglement from Andreev-like tunneling

Gu Zhang^1, Pierre Glidic^2, Frédéric Pierre^2, Igor Gornyi^3, and Yuval Gefen^4

[1] Beijing Academy of Quantum Information Sciences, Beijing 100193, China
[2] 91120 Palaiseau, France
[3] Institute for Quantum Materials and Technologies and Institut für Theorie der Kondensierten Materie, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
[4] Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 761001, Israel

The role of anyonic statistics stands as a cornerstone in the landscape of topological quantum techniques. While recent years have brought forth encouraging and persuasive strides in detecting anyons, a significant facet remains unexplored, especially in view of connecting anyonic physics to quantum information platforms—whether and how entanglement can be generated by anyonic braiding. Here, we demonstrate that even if the two anyonic subsystems are connected only by electron tunneling, anyonic entanglement, manifesting fractional statistics, is generated. Specifically, we address this question for fractional quantum Hall edges bridged by a quantum point contact that allows only transmission of fermions (so-called Andreev-like tunneling), invoking the physics of two-beam collisions in an anyonic Hong-Ou-Mandel collider. We define an entanglement pointer, a current-noise-based function tailored to quantify entanglement, and show that it reflects the role of quasiparticle statistics. A striking feature of our statistics-induced-entanglement pointer is its relative resilience to entanglement stemming from electrostatic interactions between the two anyonic subsystems.

§ INTRODUCTION

One of the most fascinating classes of quasiparticles is known as anyons. These quasiparticles, defying conventional exchange statistics, are predicted to reside in topologically intricate states, e.g., those realized in the regime of fractional quantum Hall (FQH) effect <cit.>. In particular, anyonic quasiparticles are hosted by the edges of Laughlin quantum-Hall states. Furthermore, the landscape of anyons extends to encompass Majorana modes, foreseen to materialize at the edges of topological superconducting materials <cit.>.

Recent years have borne witness to an intensified spotlight on anyons within the condensed-matter community. The focal point of this scrutiny stems from the pivotal role that exotic anyonic statistics should play in quantum technology. Over two decades have passed since the pioneering confirmation of the fractional charge of Laughlin quasiparticles <cit.>. Inspired by earlier endeavors in the exploration of fractional statistics (see, e.g., Refs. <cit.>), most recently, highly persuasive signals of anyonic statistics have been directly or indirectly observed in Fabry–Perot <cit.> and Hong-Ou-Mandel interferometers <cit.>. This boost of progress in the quest for anyonic statistics is accompanied by a series of exhilarating experiments that have unveiled a plethora of exotic anyonic features in FQH systems. Among these observations are the existence of charge neutral modes <cit.>, the fractional Josephson relation <cit.>, and Andreev-like tunneling <cit.> in an anyonic system <cit.>. What is particularly striking is the agreement between the experimental findings and their theoretical predictions. This alignment not only emboldens our understanding but also invigorates the search for further pathways to identify and comprehend anyons.
Indeed, in addition to earlier theoretical ideas <cit.>, most recently, there has been another surge of theoretical proposals <cit.> on understanding and detecting anyonic features, and possibly harnessing them for quantum information processing platforms (see, e.g., Refs. <cit.>).

Entanglement is another fundamental quantum mechanical element and a prerequisite for the development of quantum technology platforms. Despite its significance, experimentally quantifying entanglement remains a challenging endeavor, often entailing intricate considerations specific to each case. Recently, Ref. <cit.> proposed to measure entanglement stemming from the quantum statistics of quasiparticles by a certain combination of the current cross-correlation functions. Following Ref. <cit.>, the statistics-induced entanglement (i) showcases resilience against disruptions introduced by interactions, rendering it more robust in real-world scenarios, (ii) targets the genuine quantum entanglement that can be assessed through Bell-inequality <cit.> measurements (cf. Ref. <cit.>), and (iii) establishes a possibility of directly measuring the von Neumann entanglement entropy in transport experiments, thus deepening the understanding of entanglement manifestations. Experimental validation in integer quantum Hall systems lends further weight to these advantages <cit.>. However, when transitioning to anyonic systems, the quantification of entanglement becomes even more formidable, which is accentuated by the absence of readily available fractional statistics in natural environments. Furthermore, the quasiparticle collisions that can directly reveal the anyons' statistics <cit.> in entanglement are now commonly believed to be irrelevant (e.g., Refs. <cit.>) for the current-noise measurements in anyonic Hong-Ou-Mandel colliders <cit.>. Nevertheless, the measurement of anyons' entanglement through their collisions holds immense potential for the identification of anyons. Despite the importance of anyonic statistics in quantum techniques and the recent advances in the research of anyons, to date, the observation and quantification of anyonic statistics-induced entanglement has remained a challenge.

§ ENTANGLEMENT POINTER FOR ANDREEV-LIKE TUNNELING

In this work, we combine anyonic statistics and quantum entanglement, and define the entanglement pointer to quantify the statistics-induced entanglement in a Hong-Ou-Mandel interferometer on FQH edges with filling factor ν (Fig. <ref>a). The model at hand is crucially distinct from more conventional anyonic colliders <cit.> in that its central quantum point contact (QPC) only allows transmission of fermions <cit.>. The dilute non-equilibrium currents in the middle arms consist of anyons with charge ν e (Fig. <ref>b), where ν is the filling fraction. Since only electrons are allowed to tunnel across the central QPC, such a tunneling event must be accompanied by leaving behind (“reflection”) a fractional hole of charge -(1-ν)e; the latter continues to travel along the original middle edge (Fig. <ref>c). Such a “reflection” event is reminiscent of the reflection of a hole in an orthodox Andreev tunneling from a normal metal to a superconductor; hence, such an event is commonly dubbed “quasiparticle Andreev reflection” <cit.>.
As distinct from the conventional normal metal-superconductor case, in an anyonic Andreev-like tunneling process, (i) both the incoming anyon and the reflected “hole” carry fractional charges, and (ii) the absolute values of the anyonic and hole charges are different.

We divide the system into two subsystems 𝒜 and ℬ, see Fig. <ref>a. To characterize the statistics-induced entanglement between these two parts, we introduce the entanglement pointer <cit.> for the Andreev-like tunneling:

𝒫_Andreev ≡ [S_T (T_A,0) + S_T (0,T_B) - S_T (T_A, T_B)] / (e I_+).

Here, S_T (T_A,T_B) refers to the noise of the tunneling current between the two subsystems, as a function of the corresponding “bare” transmission probabilities T_A and T_B of the two diluters, and T_C stands for the transmission probability at the central QPC that couples the subsystems. The entanglement pointer effectively subtracts out redundant contributions present when only one of the two sources is biased. The current I_+ = I_A0 + I_B0 is a sum of the non-equilibrium currents I_A0 and I_B0 in arms A and B (see Fig. <ref>a), respectively. Although we have defined the entanglement pointer through the tunneling-current noise, it can also be measured with the cross-correlation noise, see Eq. (<ref>) below and the ensuing discussion. Importantly, following its definition, 𝒫_Andreev excludes the single-source contributions and contains only noise from the two-particle collisions. For anyons, such processes are expected to involve braiding, so that 𝒫_Andreev should be capable of displaying anyonic statistics, an intrinsic feature of anyons. For 𝒫_Andreev to quantify statistics-induced entanglement, we need it to be robust against unwanted effects related to interaction along the middle channels. This is shown in the Supplementary Information (SI). On a technical side, the evaluation of 𝒫_Andreev within the Keldysh framework involves both “connected” and “disconnected” diagrams (cf. Secs. IA and IB of the SI). This distinguishes our work from Ref. <cit.>, where only “connected” diagrams were considered (only for a single-source setting, where anyonic collisions are not present).
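In an experiment, Eq. (<ref>) is a simple arithmetic combination of measured zero-frequency noises. A minimal sketch, assuming the three noise values (or bias-sweep arrays) are already calibrated:

```python
import numpy as np

E = 1.602176634e-19  # elementary charge, in C

def entanglement_pointer(S_T_A, S_T_B, S_T_AB, I_plus):
    # P_Andreev = [S_T(T_A, 0) + S_T(0, T_B) - S_T(T_A, T_B)] / (e I_+);
    # inputs may be scalars or arrays over bias points.
    return (np.asarray(S_T_A) + np.asarray(S_T_B)
            - np.asarray(S_T_AB)) / (E * np.asarray(I_plus))
```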
§ TUNNELING CURRENT NOISE

Before moving to the analysis of the tunneling-current noise, we would like to emphasize a crucial feature of the correlation functions determining the noise [Eq. (<ref>) of Methods]: the separation of the non-equilibrium contributions. The non-equilibrium noise is, in turn, split into a single-source [the product of equilibrium and non-equilibrium terms in Eq. (<ref>)] and double-source contributions (the product of non-equilibrium terms). The time integrals in all terms are dominated by ultraviolet contributions (cf. Sec. IC of the SI), which greatly contrasts the situation of anyonic tunneling. It is worth noting that this separation did not occur for self-contracted anyonic pairs in Refs. <cit.>. Indeed, the resummation procedures employed in Refs. <cit.> involve only dominant processes where non-equilibrium anyons form into self-contracted pairs, with which the double-source contribution distinguishes itself from a simple product of single-source ones only by a modification of the infrared cutoff. Collisions and braiding of two non-equilibrium anyons were thus neglected for the anyonic-tunneling setup in Refs. <cit.>.

For Andreev-like transmission through the central QPC, the expression for the tunneling noise can be decomposed as follows: S_T = S_1A + S_1B + S_2, where

S_1A,1B = (T_C T_A,B/π^2) · 2ν sin^2(πν) e I_A0,B0 / [(2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)]

are single-source noises for sources sA, sB, respectively, and

S_2 = (T_C T_A T_B/π^2) · 2ν sin^2(πν) e I_+ / [(2 - 5ν + 4ν^2)(2 - 6ν + 4ν^2)]

is the double-source “collision contribution” (ν < 1/2 is assumed) (cf. Sec. I of the SI). Following Eq. (<ref>), the entanglement pointer is proportional to S_2, and, after removing the dependence on I_+, reads

𝒫_Andreev = (T_A T_B T_C/π^2) · 2ν sin^2(πν) / [(2 - 5ν + 4ν^2)(2 - 6ν + 4ν^2)].

The entanglement pointer, 𝒫_Andreev, has two advantages over a single tunneling current noise. Firstly, 𝒫_Andreev reflects the statistics-induced extra Andreev-like tunneling for two-anyon collisions. It provides an alternative option (other than the braiding phase <cit.> and two-particle bunching or anti-bunching preferences <cit.>) to disclose anyonic statistics. For 0<ν < 1/2, 𝒫_Andreev is positive, meaning that when two anyons collide at the central QPC, they prefer to promote Andreev-like tunnelings. Secondly, 𝒫_Andreev has a better resilience against interactions than the current noise. Indeed, with interactions, processes with only self-contracted pairs, which vanish for the non-interacting scenario, can become dominant in the tunneling current noise in the strongly diluted limit. This interaction-induced contribution is however removed in 𝒫_Andreev (cf. Sec. III of the Supplementary Information). Nevertheless, when evaluating 𝒫_Andreev, interactions between the arms A and B slightly renormalize (through interaction-induced fractionalization, see, e.g., Ref. <cit.>) the statistical factors determined by ν in the two-particle terms (see Refs. <cit.> for a discussion of scaling dimensions vs. anyonic phases in related setups).
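For orientation, the closed-form expression above is easy to evaluate; the following snippet computes 𝒫_Andreev at the Laughlin filling ν = 1/3 with illustrative transmissions.

```python
import numpy as np

def p_andreev(nu, TA, TB, TC):
    # P_Andreev = (TA TB TC / pi^2) * 2 nu sin^2(pi nu)
    #             / [(2 - 5 nu + 4 nu^2)(2 - 6 nu + 4 nu^2)], for nu < 1/2.
    stat = (2.0 * nu * np.sin(np.pi * nu) ** 2
            / ((2 - 5 * nu + 4 * nu ** 2) * (2 - 6 * nu + 4 * nu ** 2)))
    return TA * TB * TC * stat / np.pi ** 2

# Positive for 0 < nu < 1/2: colliding anyons promote Andreev-like tunneling.
print(p_andreev(1 / 3, 0.1, 0.1, 0.1))   # transmission values are illustrative
```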
§ INTERPRETATION OF ENTANGLEMENT POINTER

The essence of the entanglement pointer can be illustrated by resorting to the single-particle (Fig. <ref>c) and two-particle (Fig. <ref>) scattering formalism revealing the statistical properties of anyons in the course of two-particle collisions. We emphasize that the picture of scattering of non-equilibrium anyons does not apply to Refs. <cit.>. Indeed, these works consider only the correlations (“braiding”) of non-equilibrium anyons with the equilibrium excitations at the central QPC. The entanglement pointer 𝒫_Andreev is designed to capture only contributions of scattering events involving two non-equilibrium particles, through which particle statistics is manifested.

For models where the central QPC allows “intrinsic” non-equilibrium carriers (i.e., fermions for integer, or anyons for FQH edges) to tunnel, the influence of two-particle scatterings is manifested by their bunching or anti-bunching preference <cit.>. For the model under consideration, two-particle scattering instead influences the probability of Andreev-like tunneling events. To see this more clearly, we denote by W_A and W_B the probabilities (determined by the diluters and proportional to T_A and T_B) that an anyon from the corresponding source participates in the scattering at the central QPC. With this convention, a two-anyon scattering occurs with the probability W_A W_B. Andreev-like tunneling then produces fractional charges on both arms, as shown in Fig. <ref>. After including both single-particle and two-particle scattering events, we obtain the differential noises at a given voltage V (cf. Sec. IV of the SI):

s_T = (s_T)_single + (s_T)_collision = (W_A + W_B) W_C - (W_A^2 + W_B^2) W_C^2 + W_A W_B P_Andreev^stat,
s_AB = (s_AB)_single + (s_AB)_collision = -(1-ν) W_C (W_A + W_B) - W_C (ν - W_C)(W_A^2 + W_B^2) - W_A W_B P_Andreev^stat,

where s_T = ∂_I_+ S_T and s_AB = ∂_I_+ S_AB are the differential noises, S_AB = ∫ dt ⟨δÎ_A (t) δÎ_B (0)⟩ is the irreducible zero-frequency cross-correlation (where δÎ_A,B ≡ Î_A,B - I_A,B refers to the fluctuation of the current operator Î_A,B), and the subscripts “single” and “collision” indicate contributions from single-particle and two-particle scattering events, respectively. Here W_C refers to the transmission probability of an Andreev-like tunneling, W_C ∝ T_C. The factor P_Andreev^stat, which is proportional to the entanglement pointer 𝒫_Andreev, refers to the extra Andreev-like tunneling induced by anyonic statistics. It would be equal to zero if anyons from subsystem 𝒜 were distinguishable from those in ℬ. In this case, the noise would be equal to the sum of two single-source ones. By comparing to Eq. (<ref>), W_A, W_B, W_C, and P_Andreev^stat can be expressed via the microscopic parameters (cf. Secs. I and VI of the SI); in particular, W_A,B = [h/(ν e^2)] ∂_V I_A0,B0 are directly related to G_A,B from Eq. (<ref>). As another feature of Andreev-like tunnelings, S_T in Eq. (<ref>) does not explicitly depend on ν, since the central QPC allows only charge-e particles to tunnel.

Equation (<ref>) discloses several features of Andreev-like tunneling in an anyonic model. Firstly, in the strongly diluted limit, s_AB ≈ (ν-1) s_T, when considering only the leading contributions to the noise, i.e., the terms linear in both W_A (or W_B) and W_C. Both (s_T)_single and (s_AB)_single correspond to S_1A or S_1B in Eq. (<ref>) and will be removed following our definition of the entanglement pointer, Eq. (<ref>). In both functions, the double-source contributions, i.e., the bilinear terms ∝ W_A W_B, involve P_Andreev^stat – exactly the difference generated by statistics when two anyons collide at the central QPC. Most importantly, the bilinear terms of both functions of Eq. (<ref>) have the same magnitude, i.e., (s_T)_collision = -(s_AB)_collision. Consequently, the experimental measurement of 𝒫_Andreev, though defined with the tunneling current noise, can be performed by measuring the cross-correlation of currents in the drains, which is more easily accessible in real experiments:

𝒫_Andreev = W_A W_B ∫ dϵ P_Andreev^stat (ϵ)/I_+ = [S_AB (T_A, T_B) - S_AB (T_A,0) - S_AB (0,T_B)]/(e I_+).

§ COMPARISON TO EXPERIMENT

We now compare the theoretical prediction with the experimental data from Ref. <cit.>, see Fig. <ref>. Panel a shows the raw data for the double-source noise S_AB and for the sum of the single-source ones. For the single-source data, the x-axis of Panel a represents I_A0 (T_A, 0) + I_B0 (0,T_B), i.e., the sum of the non-equilibrium currents in the two single-source situations. In contrast to the theoretical result Eq. (<ref>), the double-source S_AB in Fig. <ref>a has a smaller amplitude than the sum of the single-source ones. This fact is explained by the different values of the transmission T_C of the central QPC for the single-source and double-source cases, as shown in Fig. <ref>b, indicating a non-local influence of the voltage sources on T_C. To remove this electrostatic effect, we take the single-source transmission as the reference, to rescale the double-source S_AB (see SI for details), leading to S_2 of Fig. <ref>c. The rescaled data is further compared to that evaluated from S_2 of Eq. (<ref>).
The corresponding entanglement pointer, 𝒫_Andreev, is presented in Fig. <ref>d. These plots demonstrate good agreement between the theory and experiment for both S_2 and 𝒫_Andreev.

§ CONCLUSIONS

In this work, we have defined the entanglement pointer in an anyonic (with filling factor ν<1/2) Hong-Ou-Mandel interferometer that allows Andreev-like tunneling through the central QPC. The entanglement pointer and associated noise functions are obtained by considering non-equilibrium anyon-triggered Andreev-like tunneling and “braiding” between reflected anyonic charges and non-equilibrium anyons that do not tunnel at the central QPC. The obtained entanglement pointer is a universal function of the filling factor ν (assuming vanishing interaction between arms of the two subsystems). In the presence of interactions along arms and across the QPCs, the entanglement pointer is anticipated to be highly resilient, as it involves the statistical phase gained by two-anyon scattering. The Andreev-like tunneling in an anyonic collider is “halfway” from the integer case of Ref. <cit.> (where both tunneling and dynamics along the arms are fermionic) to purely anyonic colliders (both tunneling and dynamics are anyonic). The latter case will be addressed elsewhere, with insights from the present work suggesting that quasiparticle collisions do matter in the collider geometry (in contrast to a commonplace belief stipulating that anyonic collider dynamics is dominated by time-domain braiding <cit.>). This provides us with a convenient setup for a direct inspection and study of real (non-virtual) anyonic collisions. We compare the theory predictions with the experiment. The measured data agrees remarkably well with the theoretically calculated one, for both S_AB and 𝒫_Andreev. We have thus demonstrated the crucial role of two-particle scattering—collisions—in establishing fractional-statistics-induced entanglement in anyonic colliders.

While preparing our manuscript we have noticed Ref. <cit.>, which concerns a single-source platform. Technically, the analysis of Ref. <cit.> involves only connected diagrams. We note that the present analysis consists of (i) the inclusion of double-source noise, (ii) the designed entanglement pointer, as well as its resilience against interaction (cf. Sec. III of the SI), and (iii) “braiding” between reflected anyonic holes and non-equilibrium anyons.

§ METHODS

§.§ Theoretical model

We consider the anyonic setup shown in Fig. <ref>a, which consists of two source arms (sA, sB) and two middle ones (A, B). The system is viewed as comprising two subsystems, 𝒜 (including sA and A) and ℬ (sB and B). The system Hamiltonian consists of three parts: H = H_arms + H_diluter + H_T. The arms, carrying charge-ν e quasiparticles, can be described by the bosonized edge Hamiltonian H_arms = v_F ∑_α∫ dx [∂_x ϕ_α (x)]^2/4π, with ϕ_α the bosonic field labeled by α = sA, sB, A, B. Fractional charges tunnel from the sources to the middle arms through the quantum-Hall bulk at two QPCs. These two “diluter” QPCs are described by the Hamiltonian H_diluter = ζ_A ψ^†_A ψ_sA + ζ_B ψ^†_B ψ_sB + H.c.. Via bosonization, the tunneling operators can be written as ψ^†_αψ_α' = F^†_α F_α' exp[i√(ν)(ϕ_α' - ϕ_α)]/(2π a), where F^†_α F_α' is the product of Klein factors and a is an ultra-violet cutoff. The tunneling amplitudes ζ_A and ζ_B define the tunneling probabilities at the diluters, T_A=|ζ_A|^2 and T_B=|ζ_B|^2. The dynamical bosonic phase obeys the standard commutation relation [∂_x ϕ_α (x), ϕ_β (x')] = iπ δ_αβ δ(x-x').
We assume strong dilution, T_A, T_B ≪ 1. In this work, the same voltage bias V is assumed in both sources, and the single-source scenario is realized by pinching off either diluter. The middle arms A and B communicate at the central QPC, characterized by the transmission probability T_C. The central QPC is placed at a distance L from the two diluters, in the downstream transport direction [Fig. <ref>a]. In comparison to the two diluters, where the two depletion gates (the black area in Fig. <ref>b) are well separated in space, the central QPC is in the opposite limit, where the gates are almost "touching" each other (Fig. <ref>c). Following the self-duality of tunneling through FQH QPCs (see, e.g., Refs. <cit.>), only fermionic tunneling is allowed in this limit. Physically, there is no bulk state with filling factor ν between the two arms (red and blue in Fig. <ref>), and, hence, between the subsystems 𝒜 and ℬ, at the central QPC. The tunneling is then described by the Hamiltonian H_T = ζ_C Ψ^†_A Ψ_B + H.c., with ζ_C∝√(T_C) and Ψ_α = F_αexp(iϕ_α/√(ν))/√(2π a). This bosonized expression contains √(1/ν) instead of √(ν) encountered above, which is a hallmark of electron tunneling in anyonic systems.

§.§ Correlation functions

The building blocks of the entanglement pointer are current correlators. Considering the Andreev-like transmission limit at the collider, T_C ≪ 1, the noise of the current operator Î_T = i ζ_C Ψ_B^†Ψ_A + H.c. is given by

S_T = v_F^2 e^3 T_C ∫ dt ⟨{Ψ_B^† (0) Ψ_A (0), Ψ_A^† (t) Ψ_B (t) }⟩_T_C = 0,

with { , } denoting an anticommutator. Evaluation of S_T involves the correlators ⟨Ψ_A^†(t) Ψ_A (0) ⟩ and ⟨Ψ_B^†(t) Ψ_B (0) ⟩ at the position of the central QPC. At zero temperature, ⟨Ψ^†_A(t^-) Ψ_A (0^+) ⟩ and ⟨Ψ_B(t^-) Ψ_B^† (0^+) ⟩ both read

τ_0^1/ν-1/2π v_F (τ_0 + it)^1/ν×{ 1 + c_ν it I_A0,B0 e^± i ν e V t/ħ/e(i ν e V t/ħ)^2ν - 1exp[-( 1 - e^± 2iπν) I_A0,B0t/ν e] },

where τ_0≡ a/v_F, c_ν = 2π^2/[νsin (2πν) Γ (1-2ν) ] with Γ the Gamma function, and the signs + and - in the phase factors correspond to A and B, respectively. In the strong-dilution limit, the non-equilibrium currents read I_A0,B0 = G_A,B(V) V, with the non-equilibrium conductances

G_A,B = T_A,B (ν e)^2/2π^2 ħ sin(2πν) Γ (1 - 2ν)(ν e V τ_0/ħ)^2ν- 2,

affected by the "Luttinger renormalization" of the diluters. The correlators in Eq. (<ref>) contain the equilibrium contribution (the unity in the brackets) and the non-equilibrium one, induced by the bias from the corresponding source. As the bias is fixed in our setup, the single-source contribution is obtained by pinching off one of the two diluters (i.e., by setting either T_A or T_B to zero). Experimentally, another option is to set one voltage bias to zero. To describe this situation analytically, a finite temperature or a finite system size must be included in Eqs. (<ref>) and (<ref>), to avoid the infrared divergence (see SI). Similarly to non-interacting fermions (see, e.g., Refs. <cit.>), connected diagrams, which introduce the phase factor exp(i ν e V t), are required for the non-equilibrium contribution. Indeed, a non-equilibrium anyon is the prerequisite of Andreev-like tunneling. This connected diagram is, however, believed to be unimportant when anyons are allowed to tunnel at the central QPC. In Refs. <cit.>, the so-called "bubble" diagrams <cit.> (i.e., self-contracted non-equilibrium anyon pairs) prevail over the connected ones in the current noise calculated in the strongly diluted limit.
Contributions from self-contracted anyonic pairs, however, vanish for Andreev-like tunneling to leading order in dilution, since exchanging the positions of an anyon and an electron that tunnels at the central QPC produces only a trivial phase. This is in stark contrast to the non-trivial phase, ±πν, which appears in setups with anyonic tunneling at the central QPC, where it was interpreted as the anyonic "braiding" phase <cit.>. In addition to an electron, a fractional hole (Fig. <ref>c) is also generated by the Andreev-like tunneling. As a quasiparticle with the fractional charge (ν - 1) e, this hole can produce a non-trivial phase, ±(ν-1)π, when exchanging positions with a non-equilibrium anyon. Consequently, when considering processes of higher order in dilution or, equivalently, involving more non-equilibrium anyons (the black pulses in Fig. <ref>), extra non-trivial phases are produced by "braiding" non-equilibrium anyons and the fractional hole generated at the central QPC, at time moments 0 and t. After the resummation over such higher-order non-equilibrium processes, we obtain the exponential suppression factor exp{ -I_A0,B0 t [1 - exp (± 2 i πν) ]/ν e } in Eq. (<ref>), which has the same structure as that in Refs. <cit.> but involves anyonic holes produced by Andreev-like tunneling, see Fig. <ref>.

§.§ Experiment

The measurements are realized at T≈35 mK on a 2DEG set to ν=1/3. The device includes two nominally identical source QPCs positioned symmetrically with respect to a central QPC (see SI and Ref. <cit.>). Gate voltages allow us to tune the QPCs into the configuration where the Andreev tunneling of quasiparticles takes place. The source QPCs are set in the anyonic-tunneling regime (Fig. <ref>b) and exhibit a shot-noise Fano factor corresponding to a fractional charge e^*≈ e/3, whereas the central QPC is tuned into the Andreev-like tunneling regime (Fig. <ref>c) with the tunneling charge e^*≈ e, as deduced from shot noise <cit.>. An experimental challenge is to obtain the entanglement pointer reliably. Indeed, 𝒫_Andreev is a small difference between larger quantities measured separately, which increases the sensitivity to experimental artifacts, such as drifts in time between compared configurations or unwanted small capacitive cross-talk. As further detailed in the Supplementary Information, the data set presently used to extract the entanglement pointer was obtained following a specific protocol reducing such artifacts. In particular, there are no changes in the device gate voltages, and the time between compared configurations is minimized. Further details on the experiment can be found in Sec. V of the SI.

Supplementary Information

In the Supplementary Information, we provide extra information on (i) detailed derivations of Eq. (<ref>) and of the time-dependent correlation functions; (ii) finite-temperature expressions; (iii) the influence of interaction on correlation functions and noises; (iv) detailed derivations of the single-particle and two-particle expressions, for different types of correlation functions; (v) more experimental information; and (vi) detailed information on the data analysis.

Acknowledgments

We are grateful to Gabriele Campagnano, Domenico Giuliano, Moty Heiblum, Thierry Martin, Bernd Rosenow, Inès Safi, and Kyrylo Snizhko for fruitful discussions. We thank O. Maillet, C. Piquard, A. Aassime, and A. Anthore for their contribution to the experiment. IG and YG acknowledge the support from the DFG grant No. MI658/10-2 and German-Israeli Foundation (GIF) grant No.
I-1505-303.10/2019. YG acknowledges support from the Helmholtz International Fellow Award, by the DFG Grant RO 2247/11-1, by CRC 183 (project C01), the US-Israel Binational Science Foundation, and by the Minerva Foundation. FP acknowledges the support of the European Research Council (ERC-2020-SyG-951451) and of the French RENATECH network.

§ DECLARATIONS

* Conflict of interest: The authors declare no conflict of interest.
* Ethics approval: Not applicable.
* Consent to participate: All coauthors participated in the preparation of this work.
* Consent for publication: All coauthors agree to publish this work.
* Availability of data and materials: Raw data of this work can be accessed via Zenodo: https://doi.org/10.5281/zenodo.10434474.
* Code availability: The relevant Mathematica notebook can be accessed via Zenodo: https://doi.org/10.5281/zenodo.10434474.
* Authors' contributions: GZ, IG, and YG carried out the theoretical analysis. PG obtained the experimental data and performed the low-level data analysis under the supervision of FP. GZ designed and performed the data-theory comparison, with critical inputs from FP. All authors participated in the scientific discussions, contributed to the preparation of this work and to the writing of the paper, and proofread the manuscript.

[Laughlin1983]LaughlinPRL83 Laughlin, R.B.: Anomalous quantum Hall effect: An incompressible quantum fluid with fractionally charged excitations. Phys. Rev. Lett. 50, 1395–1398 (1983) 10.1103/PhysRevLett.50.1395
[Arovas et al.1984]Arovas1984 Arovas, D., Schrieffer, J.R., Wilczek, F.: Fractional statistics and the quantum Hall effect. Phys. Rev. Lett. 53, 722–723 (1984) 10.1103/PhysRevLett.53.722
[Kitaev2001]Kitaev2001UFN Kitaev, A.Y.: Unpaired Majorana fermions in quantum wires. Uspekhi Fizicheskikh Nauk (UFN) 44(10S), 131–136 (2001) 10.1070/1063-7869/44/10s/s29
[Mong et al.2014]MongPRX14 Mong, R.S.K., Clarke, D.J., Alicea, J., Lindner, N.H., Fendley, P., Nayak, C., Oreg, Y., Stern, A., Berg, E., Shtengel, K., Fisher, M.P.A.: Universal topological quantum computation from a superconductor-Abelian quantum Hall heterostructure. Phys. Rev. X 4, 011036 (2014) 10.1103/PhysRevX.4.011036
[Saminadayar et al.1997]PSaminadayarPRL97 Saminadayar, L., Glattli, D.C., Jin, Y., Etienne, B.: Observation of the e/3 fractionally charged Laughlin quasiparticle. Phys. Rev. Lett. 79, 2526–2529 (1997) 10.1103/PhysRevLett.79.2526
[de Picciotto et al.1998]de-PicciottoPhysBConMatt98 de-Picciotto, R., Reznikov, M., Heiblum, M., Umansky, V., Bunin, G., Mahalu, D.: Direct observation of a fractional charge. Physica B: Condensed Matter 249-251, 395–400 (1998) 10.1016/S0921-4526(98)00139-2
[Camino et al.2005]CaminoPRB05 Camino, F.E., Zhou, W., Goldman, V.J.: Realization of a Laughlin quasiparticle interferometer: Observation of fractional statistics. Phys. Rev. B 72, 075342 (2005) 10.1103/PhysRevB.72.075342
[Ofek et al.2010]OfekPNAS10 Ofek, N., Bid, A., Heiblum, M., Stern, A., Umansky, V., Mahalu, D.: Role of interactions in an electronic Fabry–Perot interferometer operating in the quantum Hall effect regime.
Proceedings of the National Academy of Sciences 107(12), 5276–5281 (2010) 10.1073/pnas.0912624107
[Willett et al.2013]WillettPRL13 Willett, R.L., Nayak, C., Shtengel, K., Pfeiffer, L.N., West, K.W.: Magnetic-field-tuned Aharonov-Bohm oscillations and evidence for non-Abelian anyons at ν=5/2. Phys. Rev. Lett. 111, 186401 (2013) 10.1103/PhysRevLett.111.186401
[Nakamura et al.2019]NakamuraNatPhys19 Nakamura, J., Fallahi, S., Sahasrabudhe, H., Rahman, R., Liang, S., Gardner, G.C., Manfra, M.J.: Aharonov–Bohm interference of fractional quantum Hall edge modes. Nature Physics 15(6), 563–569 (2019) 10.1038/s41567-019-0441-8
[Nakamura et al.2020]NakamuraNatPhys20 Nakamura, J., Liang, S., Gardner, G.C., Manfra, M.J.: Direct observation of anyonic braiding statistics. Nature Physics 16(9), 931–936 (2020) 10.1038/s41567-020-1019-1
[Nakamura et al.2022]NakamuraNC22 Nakamura, J., Liang, S., Gardner, G.C., Manfra, M.J.: Impact of bulk-edge coupling on observation of anyonic braiding statistics in quantum Hall interferometers. Nature Communications 13(1), 344 (2022) 10.1038/s41467-022-27958-w
[Nakamura et al.2023]Nakamura2023 Nakamura, J., Liang, S., Gardner, G.C., Manfra, M.J.: Fabry-Perot interferometry at the ν = 2/5 fractional quantum Hall state (2023)
[Bartolomei et al.2020]BartolomeiScience20 Bartolomei, H., Kumar, M., Bisognin, R., Marguerite, A., Berroir, J.-M., Bocquillon, E., Plaçais, B., Cavanna, A., Dong, Q., Gennser, U., Jin, Y., Fève, G.: Fractional statistics in anyon collisions. Science 368(6487), 173–177 (2020) 10.1126/science.aaz5601
[Glidic et al.2023]PierrePRX23 Glidic, P., Maillet, O., Aassime, A., Piquard, C., Cavanna, A., Gennser, U., Jin, Y., Anthore, A., Pierre, F.: Cross-correlation investigation of anyon statistics in the ν=1/3 and 2/5 fractional quantum Hall states. Phys. Rev. X 13, 011030 (2023) 10.1103/PhysRevX.13.011030
[Lee et al.2023]LeeNature23 Lee, J.-Y.M., Hong, C., Alkalay, T., Schiller, N., Umansky, V., Heiblum, M., Oreg, Y., Sim, H.-S.: Partitioning of diluted anyons reveals their braiding statistics. Nature 617(7960), 277–281 (2023)
[Ruelle et al.2023]RuellePRX23 Ruelle, M., Frigerio, E., Berroir, J.-M., Plaçais, B., Rech, J., Cavanna, A., Gennser, U., Jin, Y., Fève, G.: Comparing fractional quantum Hall Laughlin and Jain topological orders with the anyon collider. Phys. Rev. X 13, 011031 (2023) 10.1103/PhysRevX.13.011031
[Bhattacharyya et al.2019]RajarshiPRL19 Bhattacharyya, R., Banerjee, M., Heiblum, M., Mahalu, D., Umansky, V.: Melting of interference in the fractional quantum Hall effect: Appearance of neutral modes. Phys. Rev. Lett. 122, 246801 (2019) 10.1103/PhysRevLett.122.246801
[Dutta et al.2022]DuttaScience22 Dutta, B., Umansky, V., Banerjee, M., Heiblum, M.: Isolated ballistic non-Abelian interface channel. Science 377(6611), 1198–1201 (2022) 10.1126/science.abm6571
[Kapfer et al.2019]GlattliScience19 Kapfer, M., Roulleau, P., Santin, M., Farrer, I., Ritchie, D.A., Glattli, D.C.: A Josephson relation for fractionally charged anyons. Science 363(6429), 846–849 (2019) 10.1126/science.aau3539
[Safi and Schulz1995]SafiSchulzPRB95 Safi, I., Schulz, H.J.: Transport in an inhomogeneous interacting one-dimensional system. Phys. Rev. B 52, 17040–17043 (1995) 10.1103/PhysRevB.52.R17040
[Sandler et al.1998]Sandler1998 Sandler, N.P., Chamon, C.d.C., Fradkin, E.: Andreev reflection in the fractional quantum Hall effect. Phys. Rev.
B 57, 12324–12332 (1998) 10.1103/PhysRevB.57.12324
[Hashisaka et al.2021]Hashisaka2021 Hashisaka, M., Jonckheere, T., Akiho, T., Sasaki, S., Rech, J., Martin, T., Muraki, K.: Andreev reflection of fractional quantum Hall quasiparticles. Nature Communications 12, 2794 (2021) 10.1038/s41467-021-23160-6
[Cohen et al.2022]cohen2022universal Cohen, L.A., Samuelson, N.L., Wang, T., Taniguchi, T., Watanabe, K., Zaletel, M.P., Young, A.F.: Universal chiral Luttinger liquid behavior in a graphene fractional quantum Hall point contact (2022)
[Glidic et al.2023]PierreNC23 Glidic, P., Maillet, O., Piquard, C., Aassime, A., Cavanna, A., Jin, Y., Gennser, U., Anthore, A., Pierre, F.: Quasiparticle Andreev scattering in the ν = 1/3 fractional quantum Hall regime. Nature Communications 14(1), 514 (2023) 10.1038/s41467-023-36080-4
[Comforti et al.2002]Comforti2002 Comforti, E., Chung, Y.C., Heiblum, M., Umansky, V., Mahalu, D.: Bunching of fractionally charged quasiparticles tunnelling through high-potential barriers. Nature 416, 515–518 (2002) 10.1038/416515a
[Safi et al.2001]SafiDevilardPRL01 Safi, I., Devillard, P., Martin, T.: Partition noise and statistics in the fractional quantum Hall effect. Phys. Rev. Lett. 86, 4628–4631 (2001) 10.1103/PhysRevLett.86.4628
[Kane and Fisher2003]KaneFisherPRB03 Kane, C.L., Fisher, M.P.A.: Shot noise and the transmission of dilute Laughlin quasiparticles. Phys. Rev. B 67, 045307 (2003) 10.1103/PhysRevB.67.045307
[Vishveshwara2003]VishveshwaraPRL03 Vishveshwara, S.: Revisiting the Hanbury Brown–Twiss setup for fractional statistics. Phys. Rev. Lett. 91, 196803 (2003) 10.1103/PhysRevLett.91.196803
[Kim et al.2005]KimPRL05 Kim, E.-A., Lawler, M., Vishveshwara, S., Fradkin, E.: Signatures of fractional statistics in noise experiments in quantum Hall fluids. Phys. Rev. Lett. 95, 176402 (2005) 10.1103/PhysRevLett.95.176402
[Law et al.2006]LawPRB06 Law, K.T., Feldman, D.E., Gefen, Y.: Electronic Mach-Zehnder interferometer as a tool to probe fractional statistics. Phys. Rev. B 74, 045319 (2006) 10.1103/PhysRevB.74.045319
[Feldman et al.2007]FeldmanPRB07 Feldman, D.E., Gefen, Y., Kitaev, A., Law, K.T., Stern, A.: Shot noise in an anyonic Mach-Zehnder interferometer. Phys. Rev. B 76, 085333 (2007) 10.1103/PhysRevB.76.085333
[Rosenow and Halperin2007]RosenowHalperinPRL07 Rosenow, B., Halperin, B.I.: Influence of interactions on flux and back-gate period of quantum Hall interferometers. Phys. Rev. Lett. 98, 106801 (2007) 10.1103/PhysRevLett.98.106801
[Campagnano et al.2012]CampagnanoPRL12 Campagnano, G., Zilberberg, O., Gornyi, I.V., Feldman, D.E., Potter, A.C., Gefen, Y.: Hanbury Brown–Twiss interference of anyons. Phys. Rev. Lett. 109, 106802 (2012) 10.1103/PhysRevLett.109.106802
[Campagnano et al.2013]CampagnanoPRB13 Campagnano, G., Zilberberg, O., Gornyi, I.V., Gefen, Y.: Hanbury Brown and Twiss correlations in quantum Hall systems. Phys. Rev. B 88, 235415 (2013) 10.1103/PhysRevB.88.235415
[Rosenow et al.2016]RosenowLevkivskyiHalperinPRL16 Rosenow, B., Levkivskyi, I.P., Halperin, B.I.: Current correlations from a mesoscopic anyon collider. Phys. Rev. Lett. 116, 156802 (2016) 10.1103/PhysRevLett.116.156802
[Campagnano et al.2016]CampagnanoPRB16 Campagnano, G., Lucignano, P., Giuliano, D.: Chirality and current-current correlation in fractional quantum Hall systems. Phys. Rev. B 93, 075441 (2016) 10.1103/PhysRevB.93.075441
[Han et al.2016]SimNC16 Han, C., Park, J., Gefen, Y., Sim, H.-S.: Topological vacuum bubbles by anyon braiding.
Nature Communications 7(1), 11131 (2016) 10.1038/ncomms11131
[Lee et al.2019]LeePRL19 Lee, B., Han, C., Sim, H.-S.: Negative excess shot noise by anyon braiding. Phys. Rev. Lett. 123, 016803 (2019) 10.1103/PhysRevLett.123.016803
[Rosenow and Stern2020]RosenowSternPRL20 Rosenow, B., Stern, A.: Flux superperiods and periodicity transitions in quantum Hall interferometers. Phys. Rev. Lett. 124, 106805 (2020) 10.1103/PhysRevLett.124.106805
[Rech et al.2020]MartinDeltaT20 Rech, J., Jonckheere, T., Grémaud, B., Martin, T.: Negative delta-T noise in the fractional quantum Hall effect. Phys. Rev. Lett. 125, 086801 (2020) 10.1103/PhysRevLett.125.086801
[Schiller et al.2023]schillerPRL23 Schiller, N., Shapira, Y., Stern, A., Oreg, Y.: Anyon statistics through conductance measurements of time-domain interferometry. Phys. Rev. Lett. 131, 186601 (2023) 10.1103/PhysRevLett.131.186601
[Morel et al.2022]MorelPRB22 Morel, T., Lee, J.-Y.M., Sim, H.-S., Mora, C.: Fractionalization and anyonic statistics in the integer quantum Hall collider. Phys. Rev. B 105, 075433 (2022) 10.1103/PhysRevB.105.075433
[Schiller et al.2022]KyryloPRB22 Schiller, N., Oreg, Y., Snizhko, K.: Extracting the scaling dimension of quantum Hall quasiparticles from current correlations. Phys. Rev. B 105, 165150 (2022) 10.1103/PhysRevB.105.165150
[Zhang et al.2022]GuPRB22 Zhang, G., Gornyi, I.V., Spånslätt, C.: Delta-T noise for weak tunneling in one-dimensional systems: Interactions versus quantum statistics. Phys. Rev. B 105, 195423 (2022) 10.1103/PhysRevB.105.195423
[Lee and Sim2022]LeeSimNC22 Lee, J.-Y.M., Sim, H.-S.: Non-Abelian anyon collider. Nature Communications 13(1), 6660 (2022) 10.1038/s41467-022-34329-y
[Jonckheere et al.2023]JonckheerePRL23 Jonckheere, T., Rech, J., Grémaud, B., Martin, T.: Anyonic statistics revealed by the Hong-Ou-Mandel dip for fractional excitations. Phys. Rev. Lett. 130, 186203 (2023) 10.1103/PhysRevLett.130.186203
[Iyer et al.2023]JonckheerePRB23 Iyer, K., Martin, T., Rech, J., Jonckheere, T.: Quasiparticle Andreev reflection in the Laughlin fractions of the fractional quantum Hall effect. Phys. Rev. B 108, 155404 (2023) 10.1103/PhysRevB.108.155404
[Nielsen and Chuang2010]NielsenChuangBook Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, Cambridge (2010). 10.1017/CBO9780511976667
[Wilde2013]WildeBook Wilde, M.M.: Quantum Information Theory. Cambridge University Press, Cambridge (2013). 10.1017/CBO9781139525343
[Nayak et al.2008]NayakRevModPhys08 Nayak, C., Simon, S.H., Stern, A., Freedman, M., Das Sarma, S.: Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083–1159 (2008) 10.1103/RevModPhys.80.1083
[Alicea and Fendley2016]AliceaFendleyParaReview16 Alicea, J., Fendley, P.: Topological phases with parafermions: Theory and blueprints. Annual Review of Condensed Matter Physics 7(1), 119–139 (2016) 10.1146/annurev-conmatphys-031115-011336
[Zhang et al.2022]GuX2022 Zhang, G., Hong, C., Alkalay, T., Umansky, V., Heiblum, M., Gornyi, I.V., Gefen, Y.: Measuring statistics-induced entanglement entropy with a Hong-Ou-Mandel interferometer (2022)
[Bell1964]Bell64 Bell, J.S.: On the Einstein-Podolsky-Rosen paradox.
Physics Physique Fizika 1, 195–200 (1964) 10.1103/PhysicsPhysiqueFizika.1.195
[Chtchelkatchev et al.2002]ChtchelkatchevPRB02 Chtchelkatchev, N.M., Blatter, G., Lesovik, G.B., Martin, T.: Bell inequalities and entanglement in solid-state devices. Phys. Rev. B 66, 161320 (2002) 10.1103/PhysRevB.66.161320
[Fendley et al.1995]FendleyLudwigSaleurPRL95 Fendley, P., Ludwig, A.W.W., Saleur, H.: Exact nonequilibrium dc shot noise in Luttinger liquids and fractional quantum Hall devices. Phys. Rev. Lett. 75, 2196–2199 (1995) 10.1103/PhysRevLett.75.2196
[Shopen et al.2005]ShopenGefenMeirPRL05 Shopen, E., Gefen, Y., Meir, Y.: Quasiparticle tunneling through a barrier in the fractional quantum Hall regime. Phys. Rev. Lett. 95, 136803 (2005) 10.1103/PhysRevLett.95.136803
[Weiss2012]WeissBook12 Weiss, U.: Quantum Dissipative Systems, 4th edn. World Scientific, Singapore (2012). 10.1142/8334
[Bruus and Flensberg2004]FlensbergBook Bruus, H., Flensberg, K.: Many-Body Quantum Theory in Condensed Matter Physics: An Introduction, 2nd edn. Oxford University Press, London (2004)
[Mahan2000]MahanBook Mahan, G.D.: Many-particle Physics. Springer, New York (2000)

Supplementary Information for "Statistics-induced entanglement generated by Andreev-like tunneling between fractional quantum Hall edges"

Gu Zhang, Pierre Glidic, Frédéric Pierre, Igor Gornyi, and Yuval Gefen

(Dated: December 27, 2023)

In this Supplementary Information, we provide details on (i) time-dependent correlation functions of operators from each subsystem; (ii) finite-temperature correlation functions; (iii) the influence of interactions on the tunneling-current noise; (iv) the analysis of noises within the picture of single-particle and two-particle scattering; (v) experimental details; and (vi) details of the experiment-theory comparison.

§ I. TIME-DEPENDENT CORRELATION FUNCTIONS AT ZERO TEMPERATURE

In this section, we provide details of the derivation of Eq. (8) in the main text, i.e., of the correlation functions ⟨Ψ_A^†(L,t) Ψ_A (L,0) ⟩ and ⟨Ψ_B^†(L,t) Ψ_B (L,0) ⟩ for edges A and B, respectively. To leading order in the tunneling at the central QPC, T_C, these two correlation functions are needed to obtain both the tunneling current and the current noises.

§.§ IA. Leading-order correlations

We begin with expansions of the correlation functions to leading order in the dilution T_A,B at the corresponding diluter. For simplicity, in this section we take v_F = e = ħ = 1 during the derivation. For concreteness, we focus on the correlation function of operators in edge A, i.e., ⟨Ψ_A^†(L,t) Ψ_A (L,0) ⟩. After the leading-order expansion, the correlator is represented as a double time integral,

D_A1≡ -T_A ∑_η_1η_2η_1η_2∬ ds_1 ds_2 e^-iν V(s_1 - s_2)⟨Ψ^†_A (L,t^-) Ψ_A (L,0^+) ψ^†_A (0, s_1^η_1) ψ_A (0, s_2^η_2) ⟩×⟨ψ_sA (0, s_1^η_1) ψ_sA^† (0, s_2^η_2) ⟩
= - T_A/(2π a)^3∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 e^-iν V(s_1 - s_2)/(a + it)^1/ν [a + i (s_1 - s_2) χ_η_1η_2 (s_1 - s_2)]^2ν a^1/ν + 2ν×[a + i (t - s_1 - L) χ_-η_1 (t-s_1)] [a + i (-s_2 - L) χ_+η_2 (-s_2)]/[a + i (t - s_2 - L) χ_-η_2 (t-s_2)] [a + i (-s_1 - L) χ_+η_1 (-s_1)],

where "A1" indicates the expansion to leading order in T_A; s_1 and s_2 are the time moments when non-equilibrium anyons tunnel from sA to A, with η_1 and η_2 the corresponding Keldysh indexes. The function χ_ηη'(t-t') reflects the relative positions of t^η and (t')^η': it equals one if t^η is in front of (t')^η' along the Keldysh contour, minus one in the opposite situation, and zero if t = t' and η = η'.
The voltage bias is included in the phase factor exp[-iν V (s_1 - s_2)], via the standard transformation introduced in, e.g., Ref. <cit.>.

In the main text, we comment that Eq. (<ref>) contains only the contribution from "connected" diagrams, i.e., Andreev-like tunneling (at the central QPC) triggered by an incoming non-equilibrium anyon. This process is addressed as "connected" in the main text because of its similarity to tunneling in non-interacting fermionic systems <cit.>, where disconnected diagrams (addressed as "self-contracted anyonic pairs" in the main text) are known to vanish. Before moving to the evaluation of connected diagrams, we pause to clarify the meaning of the terminology: connected and disconnected diagrams. Briefly, these terms are introduced for non-interacting fermionic systems, where a disconnected diagram, corresponding to s_1→ s_2, is exactly decomposable into two fully separated "sub-diagrams" that produce zero results for both current and noise. In anyonic systems, however, strictly speaking, diagrams corresponding to s_1 → s_2 are not necessarily "disconnected", due to possible (local or non-local) "braiding" between self-contracted non-equilibrium operators and anyonic operators that tunnel at the central QPC. This is exactly the case when the central QPC allows anyons to tunnel <cit.>: there, non-equilibrium anyonic operators produce a non-trivial phase exp[iπν (η_1 - η_2)] when "braiding" with anyonic excitations at the central QPC. In this work, we nevertheless refer to the s_1→ s_2 contributions as disconnected, for the convenience of discussion.

Although important for anyonic tunneling at the central QPC, disconnected diagrams are irrelevant for Andreev-like tunneling, to leading order in the diluter transmission. Indeed, the last line of D_A1 [Eq. (<ref>)] equals exp[iπ (η_1 - η_2)] = 1 when taking s_1→ s_2, which indicates the absence of "braiding" (being "disconnected" in the physical sense) and yields a vanishing result after the summation over the Keldysh indexes η_1 and η_2. The vanishing contribution of this leading disconnected diagram is fully reasonable since, from the physical point of view, a non-equilibrium anyon is the prerequisite of an Andreev-like tunneling. Indeed, without an incoming non-equilibrium anyon, this system is effectively equivalent to two equilibrium edges connected by a QPC, where both anyonic and electronic tunneling processes are forbidden.

Now we move on to evaluate the leading contribution from the connected diagrams. Here, we use the identity (whose validity, as discussed in Ref. <cit.>, requires a much smaller bosonic short-time cutoff than the fermionic counterpart)

1/(iτ_0 -t) [iτ_0 χ_η_1η_2 (s_1 - s_2) - (s_1 - s_2)]×[iτ_0 χ_-η_1 (t - s_1) - (t-s_1-L)] [iτ_0 χ_+η_2(-s_2) - (-s_2 - L)]/[iτ_0 χ_-η_2 (t-s_2) - (t-s_2-L)] [iτ_0 χ_+η_1(-s_1) - (-s_1 - L)] = 1/(iτ_0 - t) [iτ_0 χ_η_1η_2(s_1 - s_2) - (s_1 -s_2)] + 1/[iτ_0χ_-η_2 (t-s_2) - (t-s_2-L)] [iτ_0 χ_+η_1 (-s_1) - (-s_1 - L)],

to rewrite Eq.
(<ref>) as

-T_A/(2πτ_0)^3∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 e^-iν V(s_1 - s_2) [iχ_η_1η_2 (s_1 - s_2)]^2ν (i)^1/ν/(iτ_0- t)^1/ν [iτ_0χ_η_1η_2 (s_1 - s_2) - (s_1 - s_2)]^2ντ_0^1/ν + 2ν×[iτ_0 χ_-η_1 (t-s_1) - (t - s_1 - L) ] [i τ_0 χ_+η_2 (-s_2) - (-s_2 - L) ]/[i τ_0 χ_-η_2 (t-s_2)-(t - s_2 - L) ] [i τ_0 χ_+η_1 (-s_1) - (-s_1 - L) ]χ_-η_2 (t - s_2) χ_+η_1 (-s_1)/χ_-η_1 (t - s_1) χ_+η_2 (-s_2)
= -T_A/(2πτ_0)^3∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 e^-iν V(s_1 - s_2)/(iτ_0- t)^1/ν - 1[iτ_0χ_η_1η_2 (s_1 - s_2) - (s_1 - s_2)]^2ν - 1τ_0^1/ν + 2ν× [iχ_η_1η_2 (s_1 - s_2)]^2ν (i)^1/νχ_-η_2 (t - s_2) χ_+η_1 (-s_1)/χ_-η_1 (t - s_1) χ_+η_2 (-s_2)×{1/(iτ_0 - t) [iτ_0 χ_η_1η_2(s_1 - s_2) - (s_1 - s_2)]+1/ [iτ_0 χ_-η_2 (t-s_2) - (t-s_2-L)] [iτ_0 χ_+η_1(-s_1) - (-s_1 - L)]},

where τ_0 refers to the ultraviolet cutoff (in time). In Eq. (<ref>), the first term of the last line corresponds to the "disconnected" diagram, which vanishes following our analysis above. After ignoring this "disconnected" contribution, we notice that, as ν < 1/2 (such that the red term is not singular), Eq. (<ref>) contains only two poles: s_1 = -L and s_2 = t-L. We also notice that for ν > 1/2, the red term would contain the pole s_1 → s_2 even after choosing the connected contribution [i.e., after taking the second term of the last line of Eq. (<ref>)]. This is actually the case for a non-interacting fermionic system. This complexity is, however, avoided by choosing ν < 1/2, a requirement satisfied by all Laughlin edge states. When considering only the poles s_1 = -L and s_2 = t-L, we can carry out the integral above by means of contour integration, i.e.,

∫ d(t - s_2) e^-iν V (t - s_2)/i τ_0 η_2 - (t-s_2) ∫ ds_1 e^-iν V s_1/ iτ_0 η_1 + s_1 = (η_2 - 1) (η_1 + 1) π^2,

which indicates that only the option η_1 = 1 and η_2 = -1 yields a finite result. These choices of Keldysh indexes are related to the fact that V > 0 pre-selects the allowed contour (i.e., the upper or lower half of the complex plane) when performing the integral. With the integrated result, we obtain

D_A1 = T_A e^iν V t 1/2πτ_0^3-1/ν - 2ν1/(τ_0 + i t)^2 ν + 1/ν - 2.

We can combine D_A1 with the equilibrium contribution given by

D_A0≡τ_0^1/ν-1 (τ_0 + it)^-1/ν/2π,

and borrow the leading-order expression for the corresponding non-equilibrium current, I_A0 = T_A ντ_0^2ν-2sin(2πν) Γ (1-2ν) (ν V)^2ν-1/2π^2, to arrive at

D_A0 + D_A1= τ_0^1/ν-1/2π (τ_0 + it)^1/ν[ 1 + e^iν V t c(ν) I_A0/(ν V)^2ν - 1 ( it)^2-2ν],
c (ν) = 2π^2/sin (2πν) Γ (1-2ν) ν.

It is instructive to compare the ν→ 1 limit of Eq. (<ref>), where lim_ν→ 1 c(ν) = 2π, with that of a non-interacting fermionic system: the latter has the correlation function

⟨Ψ^†_A(t) Ψ_A (0) ⟩_fermion = 1/2π (τ_0 + it)( 1 + e^i V t T_A - T_A ).

After taking ν→ 1 and I_A0/V = T_A/(2π), we notice that Eq. (<ref>) perfectly captures the first two terms of the non-interacting fermionic result but misses the last term. This missing term would require taking the s_1→ s_2 pole (the one marked in red) of Eq. (<ref>) after choosing the connected diagram [the second term of the last line of Eq. (<ref>)]. This term is absent in Eq. (<ref>), as the (to-be-integrated) function remains regular for ν < 1/2, i.e., for Laughlin quasiparticles.
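The ν → 1 limit quoted above can be verified directly. A minimal Python sketch (ours, purely for illustration) evaluates c(ν) close to ν = 1:

import numpy as np
from scipy.special import gamma

def c(nu):
    # c(nu) = 2*pi^2 / [nu * sin(2*pi*nu) * Gamma(1 - 2*nu)], cf. the equation above
    return 2*np.pi**2 / (nu * np.sin(2*np.pi*nu) * gamma(1 - 2*nu))

for nu in (0.9, 0.99, 0.999):
    print(nu, c(nu))          # tends to 2*pi
print("2*pi =", 2*np.pi)      # 6.283185...

The pole of Γ(1-2ν) at ν = 1 is compensated by the zero of sin(2πν), so the limit is finite and equals 2π, consistent with the comparison to the fermionic correlator above.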
§.§ IB. Processes of higher-order in dilution coefficients

Now we proceed to analyze the contribution of higher-order processes involving multiple non-equilibrium anyons that arrive at the central QPC between two Andreev-like tunneling events. As has been analyzed in the main text, these non-equilibrium anyons induce non-trivial phases via "braiding" with the fractional holes generated by Andreev-like tunnelings. In this section, we provide a detailed analysis to illustrate this point. We begin with the fourth-order term in the expansion of the correlation function in the diluter tunneling amplitudes (second order in T_A):

D_A2≡T_A^2 2^4/4!∑_η_1η_2η_3η_4η_1η_2 η_3η_4 ∬ ds_1 ds_2 ds_3 ds_4 e^-iν V(s_1 - s_2 + s_3 - s_4)×⟨ψ_sA (0,s_1^η_1) ψ^†_sA (0,s_2^η_2) ψ_sA (0,s_3^η_3) ψ^†_sA (0,s_4^η_4) ⟩×⟨Ψ^†_A (L,t^-) Ψ_A (L,0^+) ψ^†_A (0, s_1^η_1) ψ_A (0, s_2^η_2)ψ_A^† (0, s_3^η_3) ψ_A (0, s_4^η_4) ⟩
= T_A^2/48 π^5 τ_0^5∑_η_1η_2η_3η_4η_1η_2 η_3η_4 ∬ ds_1 ds_2 ds_3 ds_4 e^-iν V(s_1 - s_2 + s_3 - s_4) × ⟨ e^i√(ν)ϕ_sA(0,s_1^η_1) e^-i√(ν)ϕ_sA(0,s_2^η_2) e^i√(ν)ϕ_sA(0,s_3^η_3) e^-i√(ν)ϕ_sA(0,s_4^η_4)⟩ × ⟨ e^-i/√(ν)ϕ_A (L,t^-)e^i/√(ν)ϕ_A (L,0^+)e^-i√(ν)ϕ_A(0,s_1^η_1) e^i√(ν)ϕ_A(0,s_2^η_2) e^-i√(ν)ϕ_A(0,s_3^η_3) e^i√(ν)ϕ_sA(0,s_4^η_4)⟩,

which contains non-equilibrium operators taken at four time moments: s_1, s_2, s_3, and s_4. We assume that two out of the four non-equilibrium vertexes contract with electron operators at the central QPC (corresponding to the connected diagram). The other two then have to self-contract (the disconnected diagram). Without loss of generality, we choose the operators labeled 3 and 4 (i.e., with time variables s_3, s_4 and Keldysh indexes η_3, η_4) to self-contract. With this choice, we can write the last two lines of Eq. (<ref>) as

τ_0^2ν + 1/ν/(τ_0 + i t)^1/ν - 1[τ_0 + i(s_1 - s_2 ) χ_η_1η_2 (s_1 - s_2)]^2ν - 1τ_0^2ν/[τ_0 + i (s_3 - s_4) χ_η_3η_4 (s_3 - s_4)]^2 ν × 1/[τ_0 + i (t-s_2 - L) χ_-η_2 (t-s_2)] [τ_0 + i (-s_1 - L) χ_+η_1 (-s_1)]× [τ_0 + i (t - s_1 - L) χ_-η_1 (t-s_1)] [τ_0 + i ( - s_2 - L) χ_+η_2 (-s_2)]/(τ_0 + it) [τ_0 + i(s_1 - s_2 ) χ_η_1η_2 (s_1 - s_2)] × [τ_0 + i(t-s_3 - L) χ_-η_3 (t-s_3)] [τ_0 + i (-s_4 - L) χ_+η_4 (-s_4)]/[τ_0 + i(t-s_4 - L) χ_-η_4 (t-s_4)] [τ_0 + i (-s_3 - L) χ_+η_3 (-s_3)] × [τ_0 + i(s_1-s_3 ) χ_η_1η_3 (s_1-s_3)]^2ν [τ_0 + i(s_2-s_4 ) χ_η_2η_4 (s_2-s_4)]^2ν/[τ_0 + i(s_1-s_4 ) χ_η_1η_4 (s_1-s_4) ]^2ν [τ_0 + i(s_2-s_3 ) χ_η_2η_3 (s_2-s_3 )]^2ν ,

where the terms in red highlight the contributions of the extra non-equilibrium pair, at moments s_3 and s_4. The red term of the first line indicates the self-contraction of this extra anyonic pair. The last two lines, on the other hand, refer to two possible extra phases, produced by "braiding" between the non-equilibrium operators and (i) the fermionic operators at the central QPC, or (ii) the other two anyonic operators (with labels 1 and 2) that participate in the Andreev-like tunneling. The phase from source (i) equals

[τ_0 + i(t-s_3 - L) χ_-η_3 (t-s_3)] [τ_0 + i (-s_4 - L) χ_+η_4 (-s_4)]/[τ_0 + i(t-s_4 - L) χ_-η_4 (t-s_4)] [τ_0 + i (-s_3 - L) χ_+η_3 (-s_3)] = exp[ iπ (η_3 -η_4) ] = 1,

a trivial result. This is a direct indicator that an electron (that tunnels at the central QPC) does not braid with an anyon. For case (ii), the phase equals

[τ_0 + i(s_1-s_3 ) χ_η_1η_3 (s_1-s_3)]^2ν [τ_0 + i(s_2-s_4 ) χ_η_2η_4 (s_2-s_4)]^2ν/[τ_0 + i(s_1-s_4 ) χ_η_1η_4 (s_1-s_4) ]^2ν [τ_0 + i(s_2-s_3 ) χ_η_2η_3 (s_2-s_3 )]^2ν = exp[ iπν (η_4 -η_3) ],

which, in contrast to that of case (i), is non-trivial, as it involves ν. This phase factor comes from "braiding" between the source operators and those in the middle arm.
Following the analysis above, this phase is produced by "braiding" between the reflected anyonic holes and the non-equilibrium anyons (with labels 3 and 4 in the case under consideration) that do not participate in Andreev-like tunneling. Notice that this phase factor actually equals the "braiding" phase of Refs. <cit.> for anyonic tunneling at the central QPC. With these two phase factors, we can work out the contribution of the extra non-equilibrium pair by integrating over s_3 and s_4, leading to

∑_η_3η_4η_3η_4 ∬ ds_3 ds_4 e^-iν V (s_3 - s_4)/[τ_0 + i (s_3 - s_4) χ_η_3η_4 (s_3 - s_4)]^2 ν = t ( 1 - e^2iπν) 2 sin (2 πν) Γ (1 - 2ν) (ν V)^2ν - 1.

Combining all factors, we arrive at

D_A2 = τ_0^1/ν-1 c(ν)/2π (τ_0 + it)^1/νI_A0/(ν V)^2ν - 1 ( it)^2-2ν e^ iν V t[ -I_A0/ν( 1 - e^2iπν) t ],

where the term in the square bracket has exactly the same form as the leading disconnected contribution of Ref. <cit.>. We thus arrive at the conclusion that an extra pair of non-equilibrium anyons, introduced by the next-to-leading-order processes, induces a correction to the correlation function (of a connected diagram), due to "braiding" between this extra pair of non-equilibrium anyons and the fractional-charge hole produced in the course of Andreev-like tunneling.

We can further extend the expansion of the correlation function to higher orders. By doing so, the non-equilibrium anyons that do not Andreev-tunnel at the central QPC play the role of the non-equilibrium anyons of Refs. <cit.>. A resummation over the non-equilibrium anyons that correspond to disconnected diagrams is thus again possible. We can perform this resummation by, for instance, considering the expansion to the 2nth order in the diluter transmissions. Note that at this stage, each expanded operator can be either a creation or an annihilation operator. Without loss of generality, we take the operators at moments s_1 and s_2 as the connected ones; for the rest, the (2i-1)th operator (taken to be an annihilation operator) is contracted with the (2i)th one (taken to be a creation operator), with i an integer between 2 and n. For this choice, the correlation becomes

τ_0^2ν + 1/ν[τ_0 + i (t - s_1 - L) χ_-η_1 (t-s_1)] [τ_0 + i ( - s_2 - L) χ_+η_2 (-s_2)]/(τ_0 + i t)^1/ν - 1[τ_0 + i(s_1 - s_2 ) χ_η_1η_2 (s_1 - s_2)]^2ν - 1(τ_0 + it) [τ_0 + i(s_1 - s_2 ) χ_η_1η_2 (s_1 - s_2)] × 1/[τ_0 + i (t-s_2 - L) χ_-η_2 (t-s_2)] [τ_0 + i (-s_1 - L) χ_+η_1 (-s_1)]× ∏_j=2^n ∬ ds_2j-1 ds_2j exp[iπν (η_2j - η_2j - 1)] τ_0^2ν/[τ_0 + i (s_2j-1- s_2j) χ_η_2j-1η_2j (s_2j-1- s_2j)]^2 ν,

where the first two lines are the leading-order result, with only the connected-diagram contribution taken into account. The last line, on the other hand, refers to the contribution of the extra n-1 pairs of self-contracted non-equilibrium operators, where the "entanglement phase", akin to Eqs. (<ref>) and (<ref>), has already been included. Notice that, by writing it in this form, the corresponding contraction option [as described above Eq. (<ref>)] has been taken. Importantly, following Eq. (<ref>), with multiple (i.e., n-1) pairs of self-contracted operators, the contribution of these pairs [i.e., the last line of Eq. (<ref>)] simply equals the product of n-1 copies of the single-pair result. This fact is the prerequisite for the resummation performed in, e.g., Ref. <cit.>.

Now we consider the number of options. To start with, we have 2n (2n - 1) ways to choose the two operators (one creation and one annihilation) of the connected diagram.
Next, we need to pair up the remaining 2n-2 non-equilibrium operators (disconnected contractions), with the number of options

2^n-1/(n-1)! C_2n-2^2 C_2n-4^2 ··· C_2^2 = (2n-2)!/(n-1)!,

where the factor 2^n-1 indicates that either operator of a chosen self-contracted pair can be taken as the creation operator, while the factor 1/(n-1)! removes repeated options, as it makes no difference whether a given pair is picked earlier or later. Now, restoring the prefactor 1/(2n)! from the expansion, the resummation becomes

∑_n=1^∞[ -I_A0/ν( 1 - e^2iπν) t ]^n-1(2n-2)!/(n-1)! (2n-1) 2n 1/(2n)! =∑_n=1^∞[ -I_A0/ν( 1 - e^2iπν) t ]^n-1/(n-1)!= exp[ -I_A0/ν( 1 - e^2iπν) t ],

which equals the exponential term of Eq. (8) of the main text after taking I_A0→ I_A0/e, i.e., adding back the constant factors. Finally, we arrive at the correlation functions after the resummation over disconnected pairs:

∑_n=0^∞ D_An=τ_0^1/ν-1/2π (τ_0 + it)^1/ν{ 1 + e^iν V t c(ν) i t I_A0/(i ν V t)^2ν-1exp[-( 1 -e^ 2iπν) I_A0t/ν] }.

§.§ IC. Integral over time t

To obtain the expressions for the tunneling noises, Eqs. (2) and (3) of the main text, we need to integrate the correlation functions displayed in Eq. (8) over time. This involves integrals of the type

∫ dt e^-b |t| + i c t/(τ_0 + it)^n_0≈2b/(n_0 - 2) (n_0 - 1)τ_0^2- n_0 + 4bc/(n_0 - 3) (n_0 - 2) (n_0 - 1)τ_0^3 - n_0,

where b > 0 and c are both real numbers, and we have expanded the result to leading order in the ultraviolet cutoff τ_0. For the cases we study, n_0 equals 2/ν + 2ν - 2 for single-source contributions and 2/ν + 4ν - 4 for double-source ones. For Laughlin quasiparticles, ν ≤ 1/3, n_0 > 3 is satisfied for both contributions. Following Eq. (<ref>), to leading order in τ_0, only the value of b matters. This fact indicates that, in contrast to the anyonic-tunneling case <cit.>, for electronic tunneling at the central QPC the involved integrals are dominated by the ultraviolet limit, t → τ_0. Within our analysis of noise, b corresponds to the non-equilibrium current: I_A0 or I_B0 for the single-source case, and I_+ = I_A0 + I_B0 for the double-source case. The proportionality of the time integral to the non-equilibrium current indicates that one cannot neglect the disconnected diagrams (which produce the exponential suppression exp{-[1 - exp(± 2iπν)] I_A0,B0 t/ν e}) when calculating the noise or current in systems with Andreev-like tunneling.

We are now ready to calculate the tunneling current and the tunneling-current noise, which involve integrals of the correlation functions:

S_T = v_F^2 e^3 T_C ∫ dt ⟨{Ψ_B^† (0) Ψ_A (0), Ψ_A^† (t) Ψ_B (t) }⟩_T_C = 0,
I_T = e^2 v_F^2 T_C ∫ dt ⟨[ Ψ_B^† (0) Ψ_A (0), Ψ_A^† (t) Ψ_B (t) ] ⟩_T_C = 0,

where v_F is the Fermi velocity.
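The leading-order formula for the master integral above can be cross-checked numerically. Below is a minimal Python sketch (our illustration; the parameter values are arbitrary, chosen such that b, c ≪ 1/τ_0 and n_0 > 3, as for the ν = 1/3 double-source case):

import numpy as np
from scipy.integrate import quad

nu = 1/3
tau0 = 1.0
n0 = 2/nu + 4*nu - 4          # double-source exponent, ~3.33 > 3
b, c = 0.01, 0.02             # both << 1/tau0

def integrand(t):
    # the real part suffices: the imaginary part is odd in t
    return np.real(np.exp(-b*abs(t) + 1j*c*t) / (tau0 + 1j*t)**n0)

numeric, _ = quad(integrand, -np.inf, np.inf, limit=200)
leading = (2*b*tau0**(2 - n0)/((n0 - 2)*(n0 - 1))
           + 4*b*c*tau0**(3 - n0)/((n0 - 3)*(n0 - 2)*(n0 - 1)))
print(numeric, leading)       # agree up to corrections of order b*tau0, c*tau0

The close agreement (for the chosen values) illustrates that the integral is indeed dominated by the ultraviolet region t ~ τ_0 and is linear in b to leading order.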
Integrals of Eq. (<ref>) can be evaluated with Eqs. (<ref>) and (<ref>), leading to the explicit expressions (assuming V_sA = V_sB = V)

I_T (T_A, 0) = - T_C T_A/π^22 [I_A0/ν (1 - cos 2πν) ][e/ħν V +I_A0/e νsin2πν]/(2/ν + 2ν - 3)(2/ν + 2ν - 4)(2/ν + 2ν - 5)τ_0,
I_T (0, T_B) = T_C T_B/π^22 [I_B0/ν (1 - cos 2πν) ] [e/ħν V +I_B0/e νsin2πν]/(2/ν + 2ν - 3)(2/ν + 2ν - 4)(2/ν + 2ν - 5)τ_0,
I_T (T_A ,T_B ) = - T_C T_AT_B/π^28 sin^3πνcosπνI_+ I_-/e ν^2/(2/ν + 4ν - 5)(2/ν + 4ν - 6) (2/ν + 4ν - 7) τ_0,
S_T (T_A,0) = T_C T_A/π^2 2 νsin^2(πν)e I_A0/(2 - 3ν + 2ν^2 )(2- 4ν + 2ν^2 ),
S_T (0, T_B) = T_C T_B/π^2 2 νsin^2(πν)e I_B0/(2 - 3ν + 2ν^2 )(2- 4ν + 2ν^2 ),
S_T (T_A, T_B) = T_C T_A/π^2 2 νsin^2(πν)e I_A0/(2 - 3ν+ 2ν^2 )(2 - 4ν+ 2ν^2 ) + T_C T_B/π^2 2 νsin^2(πν)e I_B0/(2 - 3ν+ 2ν^2 )(2 - 4ν+ 2ν^2 )+ T_C T_AT_B/π^22 νsin^2(πν)e (I_A0+I_B0) /(2- 5ν + 4ν^2 )(2- 6ν + 4ν^2 ),

where the currents, in contrast to the noises, are proportional to the ultraviolet cutoff τ_0.

§ II. FINITE-TEMPERATURE EXPRESSIONS

In the main text, we assume that both sources are fixed at the voltage bias V with respect to the two middle edges A and B. With this assumption, a single-source measurement is obtained by pinching off one of the two diluters, which tunes either T_A or T_B to zero. In real experiments, the single-source correlation measurement can alternatively be performed by turning off either source, i.e., keeping the corresponding source grounded. This option (i.e., taking zero voltage bias) is, however, ill-captured by Eq. (8), because of the divergence of Eq. (10) in the V → 0 limit. Indeed, when V → 0, a finite temperature must be assumed, to avoid the current divergence and to keep the diluter in the anyonic tunneling limit. In this section, we thus take a finite temperature T. We assume that (i) T is much smaller than the corresponding bias, if the source is on, and (ii) T is large enough to keep the diluter in the anyonic tunneling limit, in which anyons are allowed to tunnel through the diluting QPC. Notice that a finite temperature, or even a temperature difference, is capable of disclosing anyonic statistical features <cit.>, due to the connection between delta-T noise (noise induced by a temperature difference) and the operator scaling dimension <cit.>, which is proportional to the filling factor.

§.§ IIA. Modifications of the non-equilibrium current

The inclusion of a finite temperature introduces two modifications: a modification of the non-equilibrium current through the diluters and a modification of the contour integrals. At finite temperatures, the non-equilibrium current through a diluter involves the following integral:

∫ dt (π k_B T)^2ν e^iν e V t/ħ/sin^2ν [π k_B T (τ_0 + i t)/ħ] = (2π k_B T)^2ν - 1ħ/2πΓ (2ν) e^ν e V/2 k_B T| Γ(ν + iν e V/2π k_B T) |^2,

where 2ν < 1 is assumed, as in the main text. This integral, which refers to the current from one source, was addressed, in particular, in Ref. <cit.>. With this integral, the leading-order currents that enter the two middle edges become

I_A0,B0 (V_sA,sB, T) = e^2/τ_0T_A,B/4π^2(2π k_B T τ_0/ħ)^2ν-1×ν/πΓ (2ν)sinh( ν e V_sA,sB/2 k_B T) | Γ(ν + iν e V_sA,sB/2π k_B T) |^2,

where V_sA and V_sB refer to the biases of sources sA and sB, respectively. We can briefly capture the current features by checking the asymptotic scaling of the function of x = ν e V_sA,sB/(2 k_B T) entering this expression, i.e.,

sinh (x) |Γ (ν + i x/π) |^2 ∝ x for x ≪ 1, and ∝ x^2ν - 1 for x ≫ 1.

Following the asymptotic features above, I_A0,B0 ∝ T_A,B (e V_sA,sB)^2ν - 1 for ν e V_sA,sB ≫ 2 k_B T, in agreement with Eqs. (9) and (10) of the main text.
In the opposite limit, ν e V_sA,sB ≪ 2 k_B T, we get

I_A0,B0∝ T_A,B (k_B T)^2ν - 2 e V_sA,sB.

In both limits, the current equals the product of e V_sA,sB and the renormalization factor [max(ν e V_sA,sB, 2 k_B T)]^2ν - 2, in agreement with the scaling analysis for anyonic tunneling through a QPC.

§.§ IIB. Finite-temperature contour integral

The introduction of a finite temperature also modifies the contour integral. In this section, we focus, without loss of generality, on subsystem 𝒜. We once again take e = ħ = v_F = 1 in this subsection, to simplify the derivation; these constant factors will be included when presenting the final results. The finite-temperature situation involves the two integrals

∫ ds_1 e^-iν V_sA s_1/sinh{π T[i τ_0 χ_+η_1 (-s_1) - (-s_1 - L) ]}∫ ds_2 e^iν V_sA s_2/sinh{π T [i τ_0 χ_-η_2 (t-s_2) - (t- s_2 - L) ] } = ∫ ds_1 e^iν V_sA s_1/sinh [ π T (s_1 - i τ_0 η_1) ]∫ ds_2 e^-iν V_sA s_2/sinh [ π T (s_2 - i τ_0 η_2) ],

where we have shifted s_1 → - s_1 - L and s_2 → - s_2 + t - L, and taken the large-L limit in the second line. These integrals contain poles at s_1 = iτ_0η_1 + n_1/T and s_2 = iτ_0η_2 + n_2/T, where n_1 and n_2 are integers whose ranges are determined by η_1 and η_2. Indeed, since V_sA > 0, the integrals over s_1 and s_2 close in the upper and lower half-planes, respectively. As a consequence, n_1 ≥ 0 if η_1 = 1, and n_1 ≥ 1 otherwise; n_2 ≤ 0 if η_2 = -1, and n_2 ≤ -1 otherwise. In contrast to the zero-temperature case, now η_1 and η_2 can take both values, as thermal fluctuations allow (exponentially suppressed) tunneling from A to sA.

Since the poles of s_1 and s_2 contain integer factors, the contour integrals are expressed in terms of series over n_1 and n_2. Now, the correlator D_A1 is evaluated as

D_A1= -T_A/(2πτ_0)^3∬ ds_1 ds_2 ∑_η_1η_2η_1η_2 (π T τ_0)^1/ν + 2ν e^-iν V_sA(s_1 - s_2)/sin^1/ν[π T (τ_0 + it)] sin^2ν{ 2π^2 T [τ_0 + i (s_1 - s_2) χ_η_1η_2 (s_1 - s_2)] }×sin{π T[τ_0 + i (t - s_1 - L) χ_-η_1 (t-s_1)] }sin{π T[τ_0 + i (-s_2 - L) χ_+η_2 (-s_2)] }/sin{π T[τ_0 + i (t - s_2 - L) χ_-η_2 (t-s_2)] }sin{π T [τ_0 + i (-s_1 - L) χ_+η_1 (-s_1)] }
= T_A/(2πτ_0)^3∑_η_1η_2η_1η_2 (π T τ_0)^1/ν + 2ν/sin^1/ν- 1 [π T (τ_0 + it)] e^iν V_sA t×∬ ds_1 ds_2 χ_η_1η_2 (s_2 - s_1 - t) sin^1-2ν{ [π T (τ_0 + i (s_2-s_1 - t) χ_η_1η_2(s_2 - s_1 - t)]}/e^-iν V_sA ( s_1 - s_2)sinh [ π T (s_1 - i τ_0 η_1) ]sinh [ π T (s_2 - i τ_0 η_2) ]
= T_A/(2πτ_0)^3∑_η_1η_2η_1η_2 (π T τ_0)^1/ν + 2ν/sin^1/ν- 1 [π T (τ_0 + it)] e^iν V_sA t4π^2/(π T)^2 (-η_1) sin^1-2ν (i π T t η_1) ×∑_n_1 = (1-η_1)/2^∞∑_n_2 = (1 + η_2)/2^∞e^-ν V_sA/ T n_1 e^-ν V_sA/ T n_2θ^1-2ν (n_1 + n_2)
= T_A/2πτ_0(π T τ_0)^1/ν + 2ν-2/sin^1/ν+ 2 ν - 2 [π T (τ_0 + it)] e^iν V_sA t[1 - exp( -ν V_sA/ T)] [1 - (-1)^-2νexp( -ν V_sA/ T)]/ 1 + exp( -2ν V_sA/ T),

where θ(n) = 1 if n is even, and -1 if n is odd. Equation (<ref>) transforms into the zero-temperature expression when V_sA ≫ T, and becomes proportional to V_sA/T in the opposite limit. This fact, in combination with Eq. (<ref>) for the non-equilibrium current, indicates that in the V_sA ≪ T limit (i.e., when source sA is turned off), one should replace I_A0,B0/(ν V_sA,sB)^2ν - 1 of Eqs. (8)-(10) of the main text by the modified expression I_A0,B0/(2π T)^2ν - 1.

Based on the expressions above, we conclude that one can obtain the single-source contribution by tuning one of the source voltage biases (V_sA or V_sB) to zero. This is equivalent to pinching off the corresponding diluter (i.e., setting T_A or T_B to zero), as suggested in Eqs. (2) and (3) of the main text.
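The crossover between the two limits of the finite-temperature current can be visualized with a minimal Python sketch (ours, for illustration only), which evaluates the dimensionless function sinh(x)|Γ(ν + ix/π)|^2 entering I_A0,B0 and confirms the two asymptotic regimes quoted in Sec. IIA:

import numpy as np
from scipy.special import gamma   # supports complex arguments

nu = 1/3

def f(x):
    # sinh(x) * |Gamma(nu + i*x/pi)|^2
    return np.sinh(x) * np.abs(gamma(nu + 1j*x/np.pi))**2

for x in (1e-3, 1e-2):
    print(x, f(x)/x)              # ~ Gamma(nu)^2: linear regime
for x in (50.0, 100.0):
    print(x, f(x)/x**(2*nu - 1))  # ~ constant: power-law regime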
Finally, to end this section, we, as promised, present the result with all constant factors restored:

D_A1= T_A/2πτ_0 v_F^2 (π k_B T τ_0/ħ)^1/ν + 2ν-2/sin^1/ν+ 2 ν - 2 [π k_B T (τ_0 + it)/ħ] e^iν e V_sA t/ħ × [1 - exp( -ν e V_sA/ k_B T)] [1 - (-1)^-2νexp( -ν e V_sA/ k_B T)]/ 1 + exp( -2ν e V_sA/ k_B T).

§ III. MODIFICATIONS FROM INTERACTIONS

In the main text, we comment that the entanglement pointer 𝒫_Andreev has a stronger resilience (than that of the tunneling-current noise) against interaction effects. In this section, we demonstrate this by including a screened Coulomb interaction that couples the charge density in edge A to the charge density in edge B. We once again take v_F = ħ = e = 1 during the derivations; these constant factors are re-introduced when presenting the final results.

§.§ IIIA. Interaction's effect on correlation functions

In this section, we consider the model of Fig. <ref>, where we introduce the Coulomb interaction between edges A and B only in the shadowed area (i.e., -d ≤ x ≤ d in Fig. <ref>, with d < L). For simplicity, we further assume a constant interaction (quantified by the Luttinger liquid parameter K) within the interacting area (in the experiment, this would correspond to the effect of screening of the long-range Coulomb repulsion by gates). It is worth noting that in this section we choose a different convention for the spatial coordinates: now the two diluters are placed at x = ± L, and the central QPC is located at x = 0. Indeed, (formally) non-local interactions would be introduced if we stayed with the convention of the other sections (i.e., with x increasing along the corresponding downstream direction of each edge).

Within the interacting area, Luttinger-type interactions are easily incorporated within the bosonization approach. For later convenience, we follow Ref. <cit.> and use canonical fields to bosonize the fermionic operators Ψ_A and Ψ_B,

Ψ_A (x) = F_A/√(2π a) e^i k_F x e^i [θ (x ) - ϕ (x) ], Ψ_B (x) = F_B/√(2π a) e^-i k_F x e^i [θ (x ) + ϕ (x) ],

where the canonical phases obey the standard commutator [ ϕ(x), θ (x') ] = iπδ (x' - x) and are related to our original fields via ϕ_A = θ - ϕ and ϕ_B = θ + ϕ. Without interaction, ϕ_A and ϕ_B are the right-going and left-going modes in edges A and B, respectively. However, they are no longer chiral modes in the interacting area. Indeed, now the right-going and left-going chiral modes become ϕ_± = K θ∓ϕ, where K refers to the Luttinger liquid parameter. In the interacting area, the edge fields and the chiral fields are connected via

ϕ_A (x) = ϕ_+ (x) + ( 1/2 K -1/2) [ϕ_+ (x) + ϕ_- (x)], ϕ_B(x) = ϕ_- (x) + ( 1/2 K -1/2) [ϕ_+ (x) + ϕ_- (x)].

For later convenience, we further define

δ_edge≡1/2 K - 1/2

as the parameter that quantifies the effective difference from the non-interacting situation. In addition to Eq. (<ref>) for the rotation of the fields, the interaction also modifies the quasiparticle velocity. Indeed, following Ref. <cit.>, within the interacting area the velocity u and the Luttinger liquid parameter K are related to the inter-edge interaction as (after taking v_F = 1)

u = √(1 - (g_2/2)^2), K = √((1 - g_2/2)/(1 + g_2/2)),

where g_2 refers to the strength of the inter-edge Coulomb interaction (the interaction between counterpropagating bare modes). We can then express the "plasmon" velocity in the interacting area in terms of K,

u = 2K/(1 + K^2) ≈ 1 - (K-1)^2/2,

where we have used the weak-interaction assumption (|K-1| ≪ 1) to expand u to leading order in the interaction.
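Both the field rotation and the velocity expansion above can be verified symbolically. A minimal sympy sketch (ours, purely illustrative):

import sympy as sp

K, theta, phi = sp.symbols('K theta phi', positive=True)
d_edge = 1/(2*K) - sp.Rational(1, 2)

phi_p = K*theta - phi   # right-moving chiral field
phi_m = K*theta + phi   # left-moving chiral field

# rotation back to the edge fields phi_A = theta - phi, phi_B = theta + phi
print(sp.simplify(phi_p + d_edge*(phi_p + phi_m) - (theta - phi)))  # -> 0
print(sp.simplify(phi_m + d_edge*(phi_p + phi_m) - (theta + phi)))  # -> 0

# plasmon velocity, expanded around the non-interacting point K = 1
print(sp.series(2*K/(1 + K**2), K, 1, 3))   # 1 - (K - 1)**2/2 + O((K - 1)**3)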
In comparison to δ_edge≈ (1 - K)/2, which is linear in (1-K), the leading interaction-induced modification of the velocity is quadratic in (1-K), implying a comparatively smaller correction from weak interactions. In this section, we thus approximately take u = 1 in our calculation.

As we introduce sharp boundaries at x = ± d that abruptly separate the interacting and non-interacting areas, boundary conditions should be imposed at these two boundaries. These boundary conditions describe the Fresnel scattering of bosonic modes at the interfaces separating two media with different "optical" properties. This type of scattering gives rise to the fractionalization of charge excitations at the interfaces <cit.>. Briefly, since edges A and B are spatially separated, we can enforce current conservation in each edge separately, at the different boundaries. For the boundary at x = -d, the incoming current in edge A equals ∂_x ϕ_A/(2π). This current should be equal to the current in edge A just inside the interacting area. Current conservation requires the knowledge of the current operators inside and outside of the interacting area. More specifically, outside the interacting area, ϕ_A,B (x,t) = ϕ_± (x ∓ t), meaning that the current operator is

Î_A,B (|x| > d) = -∂_t ϕ_± (x∓ t)/2π = ±∂_x ϕ_±/2π

(as a reminder, we take the Fermi velocity v_F = 1 for simplicity in this work). Inside the interacting area, instead, ϕ_A,B (x,t) = (1 + δ_edge) ϕ_± (x ∓ t) + δ_edgeϕ_∓ (x ± t), leading to a modified current operator (we recall that we have neglected the difference of u from 1, which is quadratic in the interaction strength)

Î_A,B (|x| < d) = - 1+δ_edge/2π ∂_t ϕ_± (x ∓ t) - δ_edge/2π ∂_t ϕ_∓ (x ± t) =±1+δ_edge/2π ∂_x ϕ_± (x ∓ t) ∓δ_edge/2π ∂_x ϕ_∓ (x ± t).

Current conservation then leads to the following relations between the phases at the interfaces:

ϕ_A (-d^-) = ϕ_+ (-d^+) + δ_edge [ϕ_+ (-d^+) - ϕ_- (-d^+) ],
ϕ_B (d^+) = ϕ_- (d^-) + δ_edge [ -ϕ_+ (d^-) + ϕ_- (d^-) ],

where the superscript ± in d^± labels the right and left sides, respectively, of a given boundary. Since ϕ_A,B are free chiral fields in the non-interacting regions, with the expressions of Eq. (<ref>), one can keep track of the positions of the ϕ_± fields at earlier time moments, to express the fields at the diluters as

ϕ_A (-L, s) = ϕ_A (-d, s + L - d) = (1 +δ_edge) ϕ_+ ( -d,s + L - d)-δ_edgeϕ_- ( -d,s + L - d),
ϕ_B (L, s) = ϕ_B (d, s + L - d) = (1 + δ_edge) ϕ_- (d, s + L - d)- δ_edgeϕ_+ (d, s + L - d).

For further convenience, it is useful to imagine an auxiliary wire where the interaction region would be extended to the positions of the diluters. In the interacting part of our setup, |x|<d, the chiral fields are equivalent to those in the auxiliary one: ϕ_±(x,t)=ϕ̃_±(x,t). In the auxiliary system, we can further use ϕ̃_+(-d, s+L-d) = ϕ̃_+(-L, s) and ϕ̃_-(-d, s+L-d) = ϕ̃_-(-2d+L, s). Thus, the fields ϕ_A(-L,s) and ϕ_B(L,s) in the non-interacting parts of our setup near the diluters can be replaced by the following combinations of the chiral fields ϕ̃_± of the virtual wire, where the interaction is everywhere:

ϕ_A (-L, s) → (1 + δ_edge) ϕ̃_+ ( -L,s )- δ_edgeϕ̃_- ( -2d + L,s ),
ϕ_B (L, s) → (1 + δ_edge) ϕ̃_- ( L,s)- δ_edgeϕ̃_+ (2d - L , s).

Equations (<ref>) indicate that, although the field operators at the two diluters are non-interacting, they can be written in terms of the two counter-propagating fields of the auxiliary wire. This substitution of fields is needed since ϕ_A and ϕ_B are not independent fields at the central QPC [see Eq. (<ref>) below]. In what follows, for brevity, we will remove the tildes from the fields in the virtual wire.
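The bookkeeping of retarded positions in the auxiliary wire can be checked with a few lines of sympy (our sketch, with v_F = u = 1 as in the text):

import sympy as sp

s, L, d = sp.symbols('s L d', real=True)
f = sp.Function('f')            # arbitrary chiral profile

phi_p = lambda x, t: f(x - t)   # right-mover
phi_m = lambda x, t: f(x + t)   # left-mover

# phi_+(-d, s+L-d) = phi_+(-L, s):
print(sp.simplify(phi_p(-d, s + L - d) - phi_p(-L, s)))        # -> 0
# phi_-(-d, s+L-d) = phi_-(-2d+L, s):
print(sp.simplify(phi_m(-d, s + L - d) - phi_m(-2*d + L, s)))  # -> 0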
With these expressions, we are ready to analyze the correlation function in the presence of interaction. To begin with, after including interactions, the correlator at the central QPC becomes

T_C ⟨ T_K Ψ^†_A(0, t^-) Ψ_B (0, t^-) Ψ^†_B (0,0^+) Ψ_A (0,0^+) ⟩ =T_C/(2π a)^2⟨ T_Ke^-i1/√(ν) [ϕ_A (0,t^-) - ϕ_B (0,t^-)]e^i1/√(ν) [ϕ_A (0,0^+) - ϕ_B (0,0^+)] ⟩=T_C/(2π a)^2⟨ T_Ke^-i1/√(ν)ϕ_+ (0,t^-) e^i1/√(ν)ϕ_+ (0,0^+) ⟩⟨ T_Ke^i 1/√(ν)ϕ_- (0,t^-) e^-i 1/√(ν)ϕ_- (0,0^+) ⟩,

where we now need to evaluate the correlation functions of the ± modes, instead of the A and B modes. As a reminder, Eq. (<ref>) cannot capture the situation where edges A and B are biased at different voltages, as this situation requires the inclusion of another voltage-difference-dependent (and time-dependent) phase factor: in this non-equilibrium case, the ± modes are not at equilibrium. However, we can still use Eq. (<ref>) for calculating perturbative expansions in the diluters' transmissions even in the single-source case, since all the involved averages will be taken with respect to the equilibrium state.

As a direct consequence of the interaction between the edges, we now cannot evaluate the correlation function of each subsystem (𝒜 and ℬ) separately. For instance, when considering the leading-order expansion at the upper diluter, the modified D_A2 of Eq. (<ref>) contains correlations like [following ϕ_A - ϕ_B = ϕ_+ - ϕ_-, as given by Eq. (<ref>)]:

∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 ⟨ e^-i√(ν)ϕ_sA (-L, s_1^η_1)e^i√(ν)ϕ_sA (-L, s_2^η_2) ⟩ e^iν V (s_1 - s_2)×⟨ T_Ke^-iϕ_+ (0,t^-) /√(ν) e^iϕ_+ (0,0^+)/√(ν)e^iϕ_- (0,t^-)/√(ν) e^-i ϕ_- (0,0^+)/√(ν) e^i√(ν)ϕ_A (-L, s_1^η_1)e^-i√(ν)ϕ_A (-L, s_2^η_2) ⟩
= ∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 τ_0^ν e^iν V (s_1 - s_2)/[ τ_0 + i (s_1 - s_2) χ_η_1η_2 (s_1 - s_2) ]^ν×⟨ T_Ke^-i1/√(ν)ϕ_+ (0,t^-) e^i1/√(ν)ϕ_+ (0,0^+) e^i√(ν) (1 + δ_edge) ϕ_+ (-L, s_1^η_1)e^-i√(ν) (1 + δ_edge) ϕ_+ (-L, s_2^η_2) ⟩×⟨ T_K e^iϕ_- (0,t^-)/√(ν) e^-iϕ_- (0,0^+)/√(ν) e^- i√(ν)δ_edgeϕ_- (L - 2d, s_1^η_1)e^i√(ν)δ_edgeϕ_- (L - 2d, s_2^η_2) ⟩
= ∑_η_1η_2η_1η_2 ∬ ds_1 ds_2 e^iν V (s_1 - s_2)τ_0^1/2ν + 2ν̃/(τ_0 + it)^1/2ν [ τ_0 + i (s_1 - s_2) χ_η_1η_2 (s_1 - s_2) ]^2ν̃×{[ τ_0 + i ( t - s_2 - L + 2 d ) χ_-η_2 (t - s_2) ] [ τ_0 + i (- s_1 - L + 2 d ) χ_+η_1 ( - s_1) ]/[ τ_0 + i ( t - s_1 - L + 2 d ) χ_-η_1 (t - s_1) ] [ τ_0 + i (- s_2 - L + 2 d ) χ_+η_2 ( - s_2) ]}^δ_edge×{[ τ_0 + i ( t - s_2 - L ) χ_-η_2 (t - s_2) ] [ τ_0 + i (- s_1 - L ) χ_+η_1 ( - s_1) ]/[ τ_0 + i ( t - s_1 - L ) χ_-η_1 (t - s_1) ] [ τ_0 + i (- s_2 - L ) χ_+η_2 ( - s_2) ]}^ν̃/ν,

where ν̃≡ν (1 + δ_edge) is influenced by the interaction. Notice that bosonic operators with different chirality have different correlations: ⟨ϕ_± (x,t) ϕ_± (x',t') ⟩∝ln [(t ∓ x) - (t' ∓ x') ]. The last line of Eq. (<ref>) refers to the Coulomb-interaction-influenced "entanglement" of the right-going field ϕ_+. It reduces to the last line of Eq. (<ref>) in the non-interacting case, after taking δ_edge = 0. The next-to-last line instead refers to the contribution from the left-going field ϕ_-. This term is fully interaction-induced, as ϕ_- and ϕ_A are uncorrelated in the non-interacting situation.

We proceed by taking s_1 → s_2 in the equation above, to inspect the disconnected diagram. We also perform the shifts in time s_1 → s_1 - L and s_2 → s_2 - L, with which the last two lines of Eq.
(<ref>) equal {[τ_0 + i (t - s_1 + 2d) η_2][τ_0 + i (-s_1 + 2d) η_1] / [τ_0 + i (t - s_1 + 2d) η_1][τ_0 + i (-s_1 + 2d) η_2]}^{δ_edge} {[τ_0 + i (t - s_1) η_2][τ_0 + i (-s_1) η_1] / [τ_0 + i (t - s_1) η_1][τ_0 + i (-s_1) η_2]}^{ν̃/ν}. The second part of Eq. (<ref>), which corresponds to the so-called "braiding" between the right-going non-equilibrium anyons and the ϕ_+ mode at the central QPC, equals exp[iπ (η_2 - η_1) ν̃/ν]. In contrast to the non-interacting result, this phase is non-trivial and does not vanish after summation over the Keldysh indices. The first term of Eq. (<ref>) instead indicates the "braiding" between the ϕ_- mode at the central QPC and the counter-propagating non-equilibrium anyonic mode in the interacting area. In this section, we assume the large-d situation (d > t), for which this extra term equals one, a trivial value. Notice that for a small value of d (more specifically, when 2d < s_1, s_2 < t), this term equals exp[iπ δ_edge (η_2 - η_1)]. Combining this factor with the previous one (i.e., exp[iπ (1 + δ_edge)(η_2 - η_1)]), the total "tanglement" part equals exp[iπ (1 + 2δ_edge)(η_2 - η_1)] when considering disconnected diagrams, leading to an even stronger modification from interactions. Another important message of Eq. (<ref>) is that, when considering disconnected diagrams in the large-d assumption, interactions mainly influence the correlation between ϕ_A at the diluter and ϕ_+ at the central QPC. The ϕ_A - ϕ_- correlation instead remains negligible even with the interaction involved. Similarly, even in the interacting situation, we only need to worry about the correlation between ϕ_B at the diluter and ϕ_- at the central QPC. In addition to the disconnected diagrams, the interaction between edges A and B also influences the connected contraction. Briefly, by choosing the connected contraction (i.e., s_1 → t - L and s_2 → -L), the integrals over s_1 and s_2 of Eq. (<ref>) can be rewritten as Eq. (<ref>) = τ_0^{1/2ν + 2ν̃}/(τ_0 + it)^{1/2ν} ∑_{η_1η_2} η_1η_2 (τ_0 + i t η_2)^{ν̃/ν} [τ_0 + i (-t) η_1]^{ν̃/ν} / [τ_0 + i t χ_{η_1η_2} (t)]^{2ν̃} × {[τ_0 + i (t + 2d) η_2][τ_0 + i (-t + 2d) η_1] / [(τ_0 + i 2d η_1)(τ_0 + i 2d η_2)]}^{δ_edge} × ∬ ds_1 ds_2 e^{iν V (s_1 - s_2)} / {[τ_0 + i (t - s_1 - L) χ_{-η_1} (t - s_1)][τ_0 + i (-s_2 - L) χ_{+η_2} (-s_2)]}^{ν̃/ν} = I_{ν̃=ν} × (it/τ_0)^{(2-2ν) ν̃/ν} / Γ^2(ν̃/ν), where I_{ν̃=ν} refers to the non-interacting result, for which ν̃ = ν. The factor multiplying I_{ν̃=ν} describes the modification from interactions. With both interaction-induced modifications taken into consideration, we arrive at the modified correlation functions (notice, importantly, that the suppressing factor of the connected part is also influenced by interactions) ⟨Ψ^†_+(t^-) Ψ_+ (0^+)⟩ = τ_0^{1/ν - 1}/[2π (τ_0 + it)^{1/ν}] { e^{-(1 - e^{2iπν̃/ν}) I_A0 t/(eν)} + c(ν) (it/τ_0)^{(2-2ν) ν̃/ν}/Γ^2(ν̃/ν) · (it I_A0/e) e^{iν e V t/ħ}/(iν e V t/ħ)^{2ν - 1} e^{-(1 - e^{2iπν̃}) I_A0 t/(eν)} }, ⟨Ψ_-(t^-) Ψ^†_- (0^+)⟩ = τ_0^{1/ν - 1}/[2π (τ_0 + it)^{1/ν}] { e^{-(1 - e^{-2iπν̃/ν}) I_B0 t/(eν)} + c(ν) (it/τ_0)^{(2-2ν) ν̃/ν}/Γ^2(ν̃/ν) · (it I_B0/e) e^{-iν e V t/ħ}/(iν e V t/ħ)^{2ν - 1} e^{-(1 - e^{-2iπν̃}) I_B0 t/(eν)} }, where the first term in each correlation function comes from the interaction-induced disconnected diagrams. Notice that I_A0 and I_B0 are not influenced, as both diluters, where the non-equilibrium currents are emitted, are outside of the interacting area. In Eq. (<ref>), we have added back constant factors. §.§ IIIB. Interaction effects on noise With the interaction-modified correlation functions, Eq.
(<ref>), we are ready to calculate the tunneling-current noise in the presence of interactions. More explicitly, in the paragraphs below, we will show that in the strongly diluted limit, the interaction-induced disconnected diagrams [i.e., those yielding the first terms of Eq. (<ref>)] can produce a dominant contribution. Indeed, by picking up the first terms in both lines of Eq. (<ref>), we obtain the correction to the tunneling current noise S^int_disconnected = [T_C τ_0^{2/ν - 2}/(4π^2)] ∫ dt e^{-(1 - e^{2iπν̃/ν}) I_A0 t/(eν) - (1 - e^{-2iπν̃/ν}) I_B0 t/(eν)} / (τ_0 + it)^{2/ν} ≈ T_C [1/(4π^2)] 2ν^2 e I_+ [1 - cos(2πν̃/ν)] / [(2 - 2ν)(2 - ν)], where I_+ ≡ I_A0 + I_B0, and we keep only the leading power of the ultraviolet cutoff. A careful inspection of Eq. (<ref>) discloses two interesting features. First, in contrast to Eqs. (2) and (3) of the main text, S^int_disconnected, which comes from the interaction-induced disconnected diagrams, does not explicitly depend on T_A and T_B (although it depends on these two quantities implicitly, through I_+). This fact indicates that S^int_disconnected can become much larger than the interaction-free results [i.e., the terms in Eqs. (2) and (3) of the main text] in the strongly diluted limit. With the interaction involved, the original single-source and double-source tunneling current noises become S_1A,1B^int = T_C T_A,B/[π^2 Γ^2(ν̃/ν)] {ν [1 - cos(2πν̃)] e I_A0,B0 + ν [1 - cos(2πν̃/ν)] e I_B0,A0} / [(2 - 3ν + 2ν̃ν)(2 - 4ν + 2ν̃ν)], S_2^int = T_C T_A T_B/[π^2 Γ^2(ν̃/ν)] ν [1 - cos(2πν̃)] e I_+ / [(2 - 5ν + 4ν̃ν)(2 - 6ν + 4ν̃ν)]. Equation (<ref>) also contains corrections from interactions, as indicated by its explicit dependence on ν̃. The modification is proportional to the interaction parameter δ_edge = (ν̃ - ν)/ν in the weakly interacting limit. Since S_1A,1B^int ∝ T_A,B and S_2^int ∝ T_A T_B are proportional to small tunneling probabilities, we clearly see that the interaction-induced disconnected diagrams can introduce a correction, S^int_disconnected, that prevails over the other interaction effects, given strong enough interaction or strong enough dilution. Indeed, the correction from disconnected diagrams can even become comparable to the interaction-free results. As another feature of Eq. (<ref>), the contribution from the interaction-induced disconnected diagrams is proportional to the total non-equilibrium current I_+ = I_A0 + I_B0, which is the sum of the partial contributions of the two edges (this separation into parts associated with the two edges occurs in spite of the electrostatic coupling between them). This proportionality, importantly, indicates that S^int_disconnected, which becomes dominant in the strongly diluted limit (as analyzed above), can be avoided by subtracting the single-source tunneling noises from the double-source one. This removal of the single-source contributions is precisely what is built into the design of 𝒫_Andreev, and it gives rise to the strong resistance of 𝒫_Andreev against interactions. We emphasize that the proportionality of S_disconnected^int to I_+ arises from the fast decay in time, with exponent 2/ν > 1, of the integrand in Eq. (<ref>). Indeed, for anyonic tunneling, where 2ν < 1, the result of the integral instead contains an anomalous power of the non-equilibrium current <cit.>. Before closing this section, we stress that S_1A,1B^int contains a contribution ∝ T_A,B I_B0,A0 that vanishes when either source is off, in the same manner as S_2. A rough numerical illustration of the relative sizes of these contributions is sketched below.
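To make the competition between these terms concrete, the following minimal Python sketch (ours, not part of the original derivation) evaluates S^int_disconnected, S_1A,1B^int, and S_2^int directly from the expressions above. The filling factor, interaction strength, transmissions, and currents are hypothetical placeholders, and the overall constants T_C and e are set to unity.

```python
# Sketch: relative size of interaction-induced noise corrections, assuming
# the formulas quoted above; all input values are illustrative placeholders.
import numpy as np
from scipy.special import gamma

def noise_terms(nu, delta_edge, TA, TB, IA0, IB0, TC=1.0, e=1.0):
    nut = nu * (1.0 + delta_edge)              # nu-tilde = nu (1 + delta_edge)
    Iplus = IA0 + IB0
    # Disconnected-diagram correction (independent of TA, TB explicitly):
    S_disc = TC / (4 * np.pi**2) * 2 * nu**2 * e * Iplus \
             * (1 - np.cos(2 * np.pi * nut / nu)) / ((2 - 2*nu) * (2 - nu))
    # Single-source corrections S_1A^int, S_1B^int:
    den1 = (2 - 3*nu + 2*nut*nu) * (2 - 4*nu + 2*nut*nu)
    pref = TC / (np.pi**2 * gamma(nut / nu)**2)
    S1A = pref * TA * (nu * (1 - np.cos(2*np.pi*nut)) * e * IA0
                       + nu * (1 - np.cos(2*np.pi*nut/nu)) * e * IB0) / den1
    S1B = pref * TB * (nu * (1 - np.cos(2*np.pi*nut)) * e * IB0
                       + nu * (1 - np.cos(2*np.pi*nut/nu)) * e * IA0) / den1
    # Double-source correction S_2^int:
    den2 = (2 - 5*nu + 4*nut*nu) * (2 - 6*nu + 4*nut*nu)
    S2 = pref * TA * TB * nu * (1 - np.cos(2*np.pi*nut)) * e * Iplus / den2
    return S_disc, S1A, S1B, S2

# Strong dilution (TA, TB << 1) and weak interaction: S_disc can dominate.
print(noise_terms(nu=1/3, delta_edge=0.05, TA=0.01, TB=0.01, IA0=1.0, IB0=1.0))
```

Since S_disc carries no explicit factor of T_A or T_B while S_1A, S_1B, and S_2 do, decreasing the transmissions in this toy evaluation shows directly how the disconnected contribution overtakes the rest, as argued above.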
Physically, the contribution ∝ T_A,B I_B0,A0 discussed above refers to the situation where a tunneling electron from edge A (or B) "braids" with non-equilibrium anyons in edge B (or A). When the interaction is weak, this term leads to an interaction-induced correction ∝ T_B,A δ_edge. This fact indicates that the function 𝒫_Andreev introduced in this work is only weakly influenced by interactions, as long as δ_edge ≪ T_A,B, i.e., when the interaction-induced electron-anyon "braiding" is negligible. § IV. SINGLE-PARTICLE AND TWO-PARTICLE ANALYSIS In the main text, we provide an analysis of the tunneling current noise and the cross correlation noise. In this section, we show in detail how to arrive at Eq. (5) of the main text. As is known (see, e.g., Refs. <cit.>), single-particle and two-particle scattering pictures apply to the analysis of non-interacting fermionic and bosonic systems, where non-equilibrium particles participate in scattering at the tunneling QPC. This tunneling of non-equilibrium particles at the QPC turns out to be crucial for the applicability of the scattering method. Indeed, in Refs. <cit.>, where non-equilibrium anyons do not tunnel at the central QPC, the obtained result fully disagrees with the scattering-theory expression of Ref. <cit.>. This inapplicability of scattering theory greatly reduces the transparency of anyonic scattering processes. For the Andreev situation, luckily, the leading contribution once again involves tunneling of non-equilibrium particles, thus enabling the application of scattering theory. Indeed, as a piece of evidence, the correlation functions of Eq. (8) can now be divided into equilibrium and non-equilibrium contributions, which is impossible for Refs. <cit.>, where anyons are allowed to tunnel. §.§ IVA. The tunneling current noise We first look into the tunneling current noise. As the classical benchmark, we begin by considering the reducible tunneling current noise ⟨Î_T^2⟩_dist = W_A (1 - W_B) W_C + W_B (1 - W_A) W_C + W_A W_B · 2 W_C (1 - W_C) = (W_A + W_B) W_C - 2 W_C^2 W_A W_B, where Î_T is the operator for the tunneling current from A to B, "dist" is short for "distinguishable", and W_A and W_B refer to the probabilities of having a non-equilibrium anyon from sources A and B, respectively. We notice that Eq. (<ref>) contains a term W_C^2 W_A W_B that is proportional to the two-particle scattering probability W_A W_B. This term, however, disappears in the irreducible correlation, after the removal of the current average product ⟨Î_T⟩^2 = W_C^2 (W_A - W_B)^2. Indeed, the irreducible correlation of the distinguishable case becomes ⟨δÎ_T^2⟩_dist = (W_A + W_B) W_C - (W_A^2 + W_B^2) W_C^2, where δÎ_T ≡ Î_T - ⟨Î_T⟩ is the tunneling current fluctuation operator. Equation (<ref>) is thus independent of the two-particle scattering probability ∝ W_A W_B. More specifically, it equals the sum of the contributions of two independent single-particle tunneling processes: a solid benchmark for the absence of quantum statistics. This fact, importantly, indicates that one can read off the statistical information from the bilinear terms ∝ W_A W_B. Now we move on to consider indistinguishable particles. When two anyons arrive at the central QPC simultaneously, the chance of an Andreev-like tunneling is modified, in comparison to the distinguishable case. For simplicity, we denote the probability of Andreev-like tunneling when two anyons collide by P_Andreev^anyon, leaving 1 - P_Andreev^anyon as the probability that no Andreev-like tunneling occurs.
The modification only exists in the reducible part, leading to s_T = ⟨δÎ_T^2⟩ = W_A (1 - W_B) W_C + W_B (1 - W_A) W_C + W_A W_B P_Andreev^anyon - W_C^2 (W_A - W_B)^2 = (W_A + W_B) W_C - (W_A^2 + W_B^2) W_C^2 + W_A W_B P_Andreev^stat, where P_Andreev^stat = P_Andreev^anyon - P_Andreev^dist is the function that quantifies the Andreev-like tunneling probability arising from pure anyonic statistics. Indeed, P_Andreev^stat equals the difference between two functions: (i) the chance of Andreev-like tunneling when two distinguishable anyons collide at the central QPC, P_Andreev^dist = 2 W_C (1 - W_C); and (ii) the function P_Andreev^anyon that refers to the Andreev-like tunneling when all anyons are indistinguishable. After removing the statistics-irrelevant contributions [the first two terms of Eq. (<ref>), which exactly equal those in Eq. (<ref>)], the remaining noise discloses the influence of anyonic statistics on Andreev-like tunnelings. Actually, by comparing Eq. (<ref>) to Eqs. (2) and (3) of the main text, i.e., S_1A = (T_C T_A/π^2) 2ν sin^2(πν) e I_A0 / [(2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)], S_1B = (T_C T_B/π^2) 2ν sin^2(πν) e I_B0 / [(2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)], S_2 = (T_C T_A T_B/π^2) 2ν sin^2(πν) e I_+ / [(2 - 5ν + 4ν^2)(2 - 6ν + 4ν^2)], one immediately notices that S_1A + S_1B corresponds to the linear term (the first term) of Eq. (<ref>). Its bilinear term, on the other hand, is captured by S_2. With this message in mind, and the definition of 𝒫_Andreev in the main text, we can also express 𝒫_Andreev as 𝒫_Andreev = [1/(e I_+)] ∫ dϵ W_A W_B P_Andreev^stat (ϵ). As another feature, none of the tunneling current noise expressions contains the fractional charge ν, as only electrons are allowed to tunnel through the central QPC. As will be shown shortly, this feature greatly contrasts with that of the cross and auto correlations. Before ending this subsection, we can obtain the values of W_A, W_B, and P_Andreev^stat by comparing Eq. (<ref>) and Eqs. (2)-(3) of the main text: W_A = (h/e^2) ∂_V I_A0/ν, W_B = (h/e^2) ∂_V I_B0/ν, P_Andreev^stat = (T_C T_A T_B/π^2)(e^2/h) 2ν sin^2(πν) / [(2 - 5ν + 4ν^2)(2 - 6ν + 4ν^2)] · (W_A + W_B)/(W_A W_B), where the value of P_Andreev^stat depends on the combination of microscopic tunneling amplitudes, i.e., (T_A + T_B) T_C. Here, the dependence on (T_A + T_B) can be removed by dividing the function by the non-equilibrium current I_+ = I_A0 + I_B0 ∝ T_A + T_B. §.§ IVB. Cross correlation noise Now we move on to consider the cross correlation noise. Once again, we start with the distinguishable situation. By distinguishable, we refer to an imagined situation where the anyons in A and B are marked in different ways, and are thus distinguishable from each other. However, tunneling at the central QPC still proceeds via Andreev-like tunnelings that are accompanied by reflections of holes with fractional charge. The reducible part of the cross correlation noise then becomes ⟨Î_A Î_B⟩_dist = (ν - 1) W_C W_A (1 - W_B) + (ν - 1) W_C W_B (1 - W_A) + W_A W_B [P_Andreev^dist (ν^2 - 1) + (1 - P_Andreev^dist) ν^2] = (ν - 1) W_C [W_A (1 - W_B) + W_B (1 - W_A)] + W_A W_B (ν^2 - P_Andreev^dist), while the current average product now equals ⟨Î_A⟩⟨Î_B⟩ = [ν W_A - W_C (W_A - W_B)][ν W_B + W_C (W_A - W_B)], where the factor of ν refers to the fractional charge of a non-equilibrium anyon, and the factor of "1" in the other term (the term proportional to W_C) refers to the transmission of a full electron across the central QPC.
With the reducible noise and the current average product, we arrive at the irreducible cross correlation noise ⟨δÎ_A δÎ_B⟩_dist = -(1 - ν) W_C (W_A + W_B) - W_C (ν - W_C)(W_A^2 + W_B^2), which, similarly to the tunneling current noise of Eq. (<ref>), does not contain a bilinear contribution ∝ W_A W_B, and can thus be considered as the sum of two single-particle processes. The second term of Eq. (<ref>), corresponding to the current average product of the single-source situation, has an apparent Andreev-tunneling signature: the charge equals ν (corresponding to the non-equilibrium anyon) without charge tunneling, but becomes ν - 1 (corresponding to the reflected hole) after the tunneling. Now we move to the physical situation, where the anyons in edges A and B are indistinguishable, leading to the irreducible cross correlation function s_AB = ⟨δÎ_A δÎ_B⟩_anyon = (ν - 1) W_C W_A (1 - W_B) + (ν - 1) W_C W_B (1 - W_A) + W_A W_B [P_Andreev^anyon (ν^2 - 1) + (1 - P_Andreev^anyon) ν^2] = (ν - 1) W_C W_A (1 - W_B) + (ν - 1) W_C W_B (1 - W_A) + W_A W_B {(P_Andreev^dist + P_Andreev^anyon - P_Andreev^dist)(ν^2 - 1) + [1 - (P_Andreev^dist + P_Andreev^anyon - P_Andreev^dist)] ν^2} = (ν - 1) W_C W_A (1 - W_B) + (ν - 1) W_C W_B (1 - W_A) + W_A W_B [P_Andreev^dist (ν^2 - 1) + (1 - P_Andreev^dist) ν^2] - W_A W_B (P_Andreev^anyon - P_Andreev^dist) = -(1 - ν) W_C (W_A + W_B) - W_C (ν - W_C)(W_A^2 + W_B^2) - W_A W_B P_Andreev^stat. Comparison between Eqs. (<ref>) and (<ref>) shows that for the leading contribution, i.e., the terms linear in W_A or W_B, the tunneling noise and the cross correlation noise are proportional to each other. This proportionality agrees with the experimental measurement of Ref. <cit.>. More importantly, the bilinear term, i.e., the statistics-induced contribution, can be extracted via either the tunneling current noise or the cross correlation: indeed, the obtained statistics-induced noise differs only in sign. This result indicates that one can obtain the entanglement pointer of Andreev-like tunnelings through either tunneling current or cross correlation noise measurements, whichever is more convenient. In our previous work, Ref. <cit.>, we defined another entanglement pointer, 𝒫_E, for the integer situation. With its previous definition, and Eq. (<ref>) for the cross correlation of the anyonic version, we find that 𝒫_E = ∫ dϵ [s_AB (W_A, W_B, ϵ) - s_AB (W_A, 0, ϵ) - s_AB (0, W_B, ϵ)] = -∫ dϵ W_A W_B P_Andreev^stat (ϵ), which is proportional to the integral of the statistics-influenced central factor P_Andreev^stat. In addition, since W_A W_B P_Andreev^stat is the only bilinear (in W_A and W_B) term in both S_T and S_AB, the function 𝒫_E can also be obtained from the tunneling current noise, with only an extra minus sign in its definition. §.§ IVC. The auto-correlation Finally, we move on to consider the two auto-correlations, ⟨δÎ_A^2⟩_anyon and ⟨δÎ_B^2⟩_anyon. Once again, we start with the benchmark scenario where the anyons in A are distinguishable from those in B.
In this case, the reducible correlations equal ⟨Î_A^2⟩_dist = ν^2 (1 - W_C) W_A (1 - W_B) + W_C W_B (1 - W_A) + W_C W_A (1 - W_B)(1 - ν)^2 + W_A W_B [P_Andreev^dist/2 (ν - 1)^2 + P_Andreev^dist/2 (ν + 1)^2 + (1 - P_Andreev^dist) ν^2] = ν^2 W_A (1 - W_B) + W_C [W_B (1 - W_A) + (1 - 2ν) W_A (1 - W_B)] + W_A W_B (P_Andreev^dist + ν^2), ⟨Î_B^2⟩_dist = ν^2 (1 - W_C) W_B (1 - W_A) + W_C W_A (1 - W_B) + W_C W_B (1 - W_A)(1 - ν)^2 + W_A W_B [P_Andreev^dist/2 (ν - 1)^2 + P_Andreev^dist/2 (ν + 1)^2 + (1 - P_Andreev^dist) ν^2] = ν^2 W_B (1 - W_A) + W_C [W_A (1 - W_B) + (1 - 2ν) W_B (1 - W_A)] + W_A W_B (P_Andreev^dist + ν^2). We can again use the current averages ⟨Î_A⟩ = ν W_A - W_C (W_A - W_B) and ⟨Î_B⟩ = ν W_B + W_C (W_A - W_B) to rewrite the reducible correlations into irreducible ones, ⟨δÎ_A^2⟩_dist = W_A [W_C + ν (1 - W_A)(ν - 2W_C)] + W_B W_C - (W_A^2 + W_B^2) W_C^2, ⟨δÎ_B^2⟩_dist = W_B [W_C + ν (1 - W_B)(ν - 2W_C)] + W_A W_C - (W_A^2 + W_B^2) W_C^2, which also display the separation of the contributions from the different edges. For the situation where all anyons are perfectly indistinguishable, once again only the value of P_Andreev^dist is replaced by P_Andreev^anyon, leading to the modified auto-correlations s_AA = ⟨δÎ_A^2⟩_anyon = W_A [W_C + ν (1 - W_A)(ν - 2W_C)] + W_B W_C - (W_A^2 + W_B^2) W_C^2 + W_A W_B P_Andreev^stat, s_BB = ⟨δÎ_B^2⟩_anyon = W_B [W_C + ν (1 - W_B)(ν - 2W_C)] + W_A W_C - (W_A^2 + W_B^2) W_C^2 + W_A W_B P_Andreev^stat, which contain the same form of the statistical term, W_A W_B P_Andreev^stat, as in Eqs. (<ref>) and (<ref>) for the tunneling current noise and the cross correlation, respectively. In addition, just as for the cross correlation and the tunneling current noise, the only bilinear term of the auto correlations equals W_A W_B P_Andreev^stat, meaning that 𝒫_Andreev can also be measured with the auto correlations, i.e., 𝒫_Andreev = [S_T (T_A, 0) + S_T (0, T_B) - S_T (T_A, T_B)]/(e I_+) = -[S_AB (T_A, 0) + S_AB (0, T_B) - S_AB (T_A, T_B)]/(e I_+) = [S_AA (T_A, 0) + S_AA (0, T_B) - S_AA (T_A, T_B)]/(e I_+) = [S_BB (T_A, 0) + S_BB (0, T_B) - S_BB (T_A, T_B)]/(e I_+), with similar definitions, obtained by removing the single-source contributions. Here S_AA ≡ ∫ dt ⟨δÎ_A (t) δÎ_A (0)⟩ and S_BB ≡ ∫ dt ⟨δÎ_B (t) δÎ_B (0)⟩ refer to the auto correlations. § V. EXPERIMENT In this section, we briefly describe the experimental setup of Ref. <cit.> used to obtain the data, which we analyze in the main text in the context of the theory of the entanglement pointer. The experiment is performed on the Ga(Al)As device shown in Fig. <ref> (see Ref. <cit.> for details). It is cooled to an electronic temperature of 35 mK and set at the center of the ν = 1/3 fractional quantum Hall plateau. The spectral densities of the current auto- and cross-correlations ⟨δÎ_A^2⟩, ⟨δÎ_B^2⟩, and ⟨δÎ_A δÎ_B⟩ are simultaneously measured around a frequency of 0.86 MHz. The dc currents I_A,B,T are obtained by integrating the differential conductances ∂I_A,B,T/∂V_sA,sB directly measured by standard lock-in techniques at frequencies below 100 Hz. Importantly, the present data-theory comparison is performed on a specific data set, which was measured following a protocol optimized to limit as much as possible any changes between the different configurations of the sources. This is essential for the entanglement pointer, which is obtained from the small difference of large signals. Note that the data shown in the main text of Ref.
<cit.> do not fully follow the procedure described below: First, the source QPCs are activated not by changing the gate voltage controlling their transmission parameter T_A,B but instead by setting the dc bias voltage V_sA,sB to V. Indeed, changing the gate voltage controlling one source (e.g., in branch A) would also change the other transmissions (T_B and T_C) by capacitive crosstalk, and thereby introduce unwanted artifacts in P_Andreev. Note that the applied dc bias voltage itself also acts electrostatically on the QPCs. This can play a role, as further discussed in the experiment-theory comparison, yet it is a smaller effect, since the bias voltage changes (V_sA,sB ≲ 0.1 mV) are much smaller than the gate voltage changes needed to open or close a QPC (∼ 1 V). Second, the necessary averaging time is split into several sequences alternating between the following successive configurations: (i) source sA is ON and sB is OFF (V_sA = V, V_sB = 0); (ii) source sA is OFF and sB is ON (V_sA = 0, V_sB = V); (iii) the central QPC is directly voltage biased for tunneling-charge characterization; and (iv) sources sA and sB are both ON (V_sA = V_sB = V). This allows us to effectively cancel out in P_Andreev the small drifts of the QPCs with time, which could otherwise have a noticeable impact. § VI. DETAILS ON THE EXPERIMENT-THEORY COMPARISON In this section, we present details of the experiment-theory analysis that leads to Fig. 3 of the main text. To begin with, Fig. 3a shows the raw data from Ref. <cit.>. Following Fig. 3a, the sum of the two single-source cross-correlation functions has a larger magnitude than that of the double-source cross correlation. This apparently disagrees with our major results, Eqs. (4) and (6), which instead predict a larger magnitude of the correlation function for the double-source scenario. The origin of this discrepancy can be understood from Fig. 3b, where the values of the transmission coefficient in the single-source settings clearly deviate from their corresponding double-source values. This deviation is minor and hardly visible when plotted versus V (V_sA or V_sB); it becomes manifest, however, when plotted versus the total nonequilibrium current I_+, as in Fig. <ref>e. This can be attributed to the non-local electrostatic effects present in the setup. Depending on the bias voltages, the overall electrostatic potential landscape changes, thus shifting the edges in real space. This shift affects the transparency of the barriers between the edges. Therefore, in order to compare the single-source and double-source current correlation functions, a rescaling of the double-source transmission is required (see below for the detailed steps). In addition, following Fig. <ref>a, the transmission probabilities W_A, W_B greatly deviate from the non-interacting chiral Luttinger liquid prediction (i.e., W_A,B ∝ I_+^(2ν-2)/(2ν-1), denoted by the gray dashed line in the inset of Fig. <ref>a), which implies an energy and/or voltage dependence of the bare transmission probabilities T_A and T_B in the Hamiltonian. In particular, the voltage dependence can again be related to the global electrostatics of the setup. In this work, this dependence is included by working with differential noises. Below, we list the steps of the data processing that lead to Fig. 4 of the main text. More specifically, Fig. 4 contains four sets of data: the "theoretical data" [obtained from Eq. (3) of the main text], and the rescaled experimental data. Theoretical data: To obtain the "theoretical data" following Eq.
(3) of the main text, one needs to know the tunneling parameters (T_A, T_B, and T_C) and the ultraviolet cutoff τ_0 from the experimental data. Here, we take the following steps. (i) The effective transmission probabilities W_A (V_sA, V_sB = 0) and W_B (V_sA = 0, V_sB) for the single-source cases, defined as W_A (V_sA, 0) = (∂I_A0/∂V_sA)(h/ν e^2), W_B (0, V_sB) = (∂I_B0/∂V_sB)(h/ν e^2), are experimentally obtained by applying a small AC voltage on top of a DC background voltage (shown in Fig. <ref>b). (ii) Next, we find the values of W_C (V_sA, V_sB = 0) and W_C (V_sA = 0, V_sB) for the single-source scenario. These two quantities, defined as W_C (V_sA, 0) = ν ∂I_T/∂I_A0, W_C (0, V_sB) = -ν ∂I_T/∂I_B0, are also experimentally obtained by applying a weak AC bias on top of a relatively large DC background. (iii) We obtain the value of the ultraviolet cutoff with Eq. (<ref>), or more specifically, τ_0 = [ħ/(ν V_sA)] [I_T (V_sA, 0)/S_T (V_sA, 0)] (2 - 5ν + 2ν^2) / {2[ν + W_A (V_sA, 0) sin(2πν)]} = [ħ/(ν V_sB)] [I_T (0, V_sB)/S_T (0, V_sB)] (2 - 5ν + 2ν^2) / {2[ν + W_B (0, V_sB) sin(2πν)]}. Afterwards, we obtain the values of T_A, T_B, and T_C using W_A (V_sA, 0) = T_A (V_sA, 0) [sin(2πν) Γ(1 - 2ν)/(2π^2)] (τ_0 ν e V_sA/ħ)^{2ν-2}, W_B (0, V_sB) = T_B (0, V_sB) [sin(2πν) Γ(1 - 2ν)/(2π^2)] (τ_0 ν e V_sB/ħ)^{2ν-2}, W_C (V_sA, 0) = T_A (V_sA, 0) T_C [I_+ (V_sA)] (2ν^2/π^2)(1 - cos 2πν)[ν^2 e/(ħ V_sA) + (I_A0 (V_sA)/e) sin 2πν] / (2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)(2 - 5ν + 2ν^2) τ_0, W_C (0, V_sB) = T_B (0, V_sB) T_C [I_+ (V_sB)] (2ν^2/π^2)(1 - cos 2πν)[ν^2 e/(ħ V_sB) + (I_B0 (V_sB)/e) sin 2πν] / (2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)(2 - 5ν + 2ν^2) τ_0. Notice that in Eq. (<ref>) all transmission coefficients are expressed as functions of the corresponding voltages. The coefficients T_A, T_B, and T_C, which would be constants in a conventional Luttinger liquid theory, depend on voltage through the energy and/or voltage dependence of the bare transmission probabilities. This leads to the deviation of the voltage dependencies of the currents and noises from the expressions predicted by Luttinger liquid theory, where the voltage enters in the form of an anomalous scaling. With the above procedure, we have thus included the extra (non-Luttinger) voltage dependence of the coefficients that appear in the Hamiltonian (i.e., ζ_A, ζ_B, and ζ_C). (iv) To obtain the entanglement pointer [given by Eq. (4) of the main text], we also need to know the value of the product T_A T_B T_C, via the equality T_A T_B T_C = √(W_A W_B W_C^2) (π^4/ν^3) (τ_0 ν e V/ħ)^{1-2ν} (2 - 3ν + 2ν^2)(2 - 4ν + 2ν^2)(2 - 5ν + 2ν^2) / {sin(2πν)[1 - cos(2πν)] Γ(1 - 2ν)}, where explicit and implicit dependencies on V_sA = V_sB = V have been omitted for simplicity. The factor W_C^2 under the square root is the product of the two single-source contributions: W_C^2 = W_C (V, 0) W_C (0, V). The dependence of T_A T_B T_C on I_+ is presented in Fig. <ref>c. To avoid the influence of strong fluctuations, we neglect points that deviate greatly from the fit ∼ I_+^{4/3} (red line of Fig. <ref>c). This dependence of T_A, T_B, and T_C, as analyzed at the beginning of this section, leads to the deviation from the I_+ (V) dependence predicted by Luttinger liquid theory (where I_+ ∝ V^{2ν-1}). The product T_A T_B T_C obtained this way from the measured data has the meaning of a "would-be" product of "virtual" Luttinger-liquid transmission probabilities that would yield the same current at the given voltages in the experiment. (v) We are now ready to calculate the differential noise, defined as s_2(I_+) ≡ ∂_{I_+} S_2 (I_+) (shown in Fig. <ref>d). A schematic numerical transcription of steps (i)-(iv) is given below.
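The following short Python sketch transcribes the extraction of τ_0 and of a bare transmission from the relations quoted in steps (i)-(iv). It is ours, not part of the published analysis pipeline, and all the numerical inputs (bias, current, noise, effective transmission) are hypothetical placeholders standing in for the measured data.

```python
# Sketch of the parameter extraction of steps (i)-(iv); input values are
# illustrative placeholders, not the measured data of the experiment.
import numpy as np
from scipy.special import gamma

hbar, e, nu = 1.0545718e-34, 1.602176634e-19, 1/3

# Hypothetical single-source inputs (SI units):
V_sA = 50e-6       # dc bias voltage (V)
I_T  = 10e-12      # tunneling current at the central QPC (A)
S_T  = 3e-27       # tunneling-current noise (A^2 s)
W_A  = 0.05        # effective transmission from step (i)

# Step (iii): ultraviolet cutoff tau_0 from the quoted relation.
tau_0 = (hbar / (nu * V_sA)) * (I_T / S_T) \
        * (2 - 5*nu + 2*nu**2) / (2 * (nu + W_A * np.sin(2*np.pi*nu)))

# Inverting W_A = T_A sin(2 pi nu) Gamma(1 - 2 nu) / (2 pi^2)
#                 * (tau_0 nu e V_sA / hbar)^(2 nu - 2) for the bare T_A:
T_A = W_A * 2 * np.pi**2 / (np.sin(2*np.pi*nu) * gamma(1 - 2*nu)) \
      * (tau_0 * nu * e * V_sA / hbar)**(2 - 2*nu)

print(f"tau_0 ~ {tau_0:.3e} s, T_A ~ {T_A:.3e}")
```

The analogous inversions for T_B and for the product T_A T_B T_C of step (iv) follow the same pattern, with the corresponding W_B, W_C, and voltage arguments substituted.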
The integral of s_2(I_+) yields the blue points in Fig. 3c of the main text. Rescaling of the experimental data: In this part, we use the current as the argument of W_C for clarity of the description of the procedure. We consider the same values of the total current I_+ in the single-source (where I_+ = I_A0 or I_+ = I_B0) and double-source (where I_A0 = I_B0 = I_+/2) scenarios. (i) For a given value of I_+, we read out the data of the single-source effective transmission probabilities W_C (I_+, 0) and W_C (0, I_+), following the definition of Eq. (<ref>). We also obtain the double-source effective transmission probability W_C (I_+/2, I_+/2), defined as W_C (I_+/2, I_+/2) ≡ ν ∂I_T (I_+, I_-)/∂I_- |_{I_- = 0}. Once again, it is experimentally obtained by applying a weak AC voltage on top of a DC background. However, in contrast to the single-source scenarios, the measurement of W_C (I_+/2, I_+/2) requires applying an equal AC bias to both sources. For double-source situations, the total non-equilibrium current is larger than that of the single-source ones. We thus have to take a linear fit (shown in Fig. <ref>e) to rescale W_C (I_+/2, I_+/2) over the entire relevant range of I_+. (ii) For each value of I_+, we obtain the differential cross correlation for the double-source correlations, i.e., s_AB,double (I_+) ≡ ∂_{I_+} S_AB (I_+/2, I_+/2). We then rescale the double-source differential noise following s_AB,modified (I_+) ≡ s_AB,double (I_+) [W_C(I_+, 0) + W_C (0, I_+)] / [2 W_C (I_+/2, I_+/2)]. We further use s_2,modified(I_+) ≡ s_AB,modified (I_+) - s_AB,single (I_+, 0) - s_AB,single (0, I_+), where the latter two are the single-source differential noises. (iii) Finally, we integrate s_2,modified over the relevant current range to obtain the modified double-source noise S_AB,modified (I_+) ≡ ∫_0^{I_+} dI s_2,modified (I). The plot of the function s_2,modified (I_+) is presented in Fig. <ref>f.

References

[1] C. L. Kane and M. P. A. Fisher, Transmission through barriers and resonant tunneling in an interacting one-dimensional electron gas, Phys. Rev. B 46, 15233 (1992).
[2] G. D. Mahan, Many-Particle Physics (Springer, New York, 2000).
[3] H. Bruus and K. Flensberg, Many-Body Quantum Theory in Condensed Matter Physics: An Introduction, 2nd ed. (Oxford University Press, London, 2004).
[4] B. Rosenow, I. P. Levkivskyi, and B. I. Halperin, Current correlations from a mesoscopic anyon collider, Phys. Rev. Lett. 116, 156802 (2016).
[5] J.-Y. M. Lee and H. S. Sim, Non-Abelian anyon collider, Nat. Commun. 13, 6660 (2022).
[6] J.-Y. M. Lee, C. Hong, T. Alkalay, N. Schiller, V. Umansky, M. Heiblum, Y. Oreg, and H. S. Sim, Partitioning of diluted anyons reveals their braiding statistics, Nature 617, 277 (2023).
[7] M. Filippone and P. W. Brouwer, Tunneling into quantum wires: Regularization of the tunneling Hamiltonian and consistency between free and bosonized fermions, Phys. Rev. B 94, 235426 (2016).
[8] T. Morel, J.-Y. M. Lee, H.-S. Sim, and C. Mora, Fractionalization and anyonic statistics in the integer quantum Hall collider, Phys. Rev. B 105, 075433 (2022).
[9] J. Rech, T. Jonckheere, B. Grémaud, and T. Martin, Negative delta-T noise in the fractional quantum Hall effect, Phys. Rev. Lett. 125, 086801 (2020).
[10] G. Zhang, I. V. Gornyi, and C. Spånslätt, Delta-T noise for weak tunneling in one-dimensional systems: Interactions versus quantum statistics, Phys. Rev. B 105, 195423 (2022).
[11] N. Schiller, Y. Oreg, and K. Snizhko, Extracting the scaling dimension of quantum Hall quasiparticles from current correlations, Phys. Rev. B 105, 165150 (2022).
[12] T. Alkalay et al., unpublished; talk by K. Snizhko at "QHEdge-Grenoble", Villard de Lans, France (2023).
[13] G. Campagnano, P. Lucignano, and D. Giuliano, Chirality and current-current correlation in fractional quantum Hall systems, Phys. Rev. B 93, 075441 (2016).
[14] T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, Oxford, 2004).
[15] I. Safi and H. J. Schulz, Transport in an inhomogeneous interacting one-dimensional system, Phys. Rev. B 52, R17040 (1995).
[16] I. Safi, A dynamic scattering approach for a gated interacting wire, Eur. Phys. J. B 12, 451 (1999).
[17] I. V. Protopopov, Y. Gefen, and A. D. Mirlin, Transport in a disordered ν = 2/3 fractional quantum Hall junction, Ann. Phys. 385, 287 (2017).
[18] C. Spånslätt, Y. Gefen, I. V. Gornyi, and D. G. Polyakov, Contacts, equilibration, and interactions in fractional quantum Hall edge transport, Phys. Rev. B 104, 115416 (2021).
[19] Th. Martin and R. Landauer, Wave-packet approach to noise in multichannel mesoscopic systems, Phys. Rev. B 45, 1742 (1992).
[20] Ya. M. Blanter and M. Büttiker, Shot noise in mesoscopic conductors, Phys. Rep. 336, 1 (2000).
[21] G. Campagnano, O. Zilberberg, I. V. Gornyi, D. E. Feldman, A. C. Potter, and Y. Gefen, Hanbury Brown-Twiss interference of anyons, Phys. Rev. Lett. 109, 106802 (2012).
[22] P. Glidic, O. Maillet, C. Piquard, A. Aassime, A. Cavanna, Y. Jin, U. Gennser, A. Anthore, and F. Pierre, Quasiparticle Andreev scattering in the ν = 1/3 fractional quantum Hall regime, Nat. Commun. 14, 514 (2023).
[23] G. Zhang, C. Hong, T. Alkalay, V. Umansky, M. Heiblum, I. V. Gornyi, and Y. Gefen, Measuring statistics-induced entanglement entropy with a Hong-Ou-Mandel interferometer, arXiv:2210.15520 (2022).
Department of Physics, Indian Institute of Technology Madras, Chennai, 600036, [email protected] Department of Physics, Indian Institute of Technology Madras, Chennai, 600036, India Quantum Centre of Excellence for Diamond and Emergent Materials, Indian Institute of Technology Madras, Chennai, 600036, India The subtle interplay between competing degrees of freedom, anisotropy and spin correlations in frustrated Kitaev quantum materials offers an ideal platform to host myriads of non-trivial quantum states with exotic fractional excitations. The signature of spin freezing behavior of these spin-orbit driven frustrated magnets is characterized by a bifurcation of zero-field-cooled and field-cooled magnetic susceptibility at low temperature much below the characteristic interaction energy scale. The magnetic specific heat exhibits a T^2 dependence below the freezing temperature. The field-independent behavior of magnetic specific heat below the freezing temperature implies the presence of exotic low-energy excitations. The aging and memory effect experiments in the Kitaev magnets suggest the non-hierarchical free energy distribution, which differs from the hierarchical organization of conventional spin-freezing. Furthermore, nuclear magnetic resonance spin-lattice relaxation rate follows a power law behavior below the spin-freezing temperature suggesting the persistence of unconventional spin excitation spectra. Herein, we demonstrate that the observed low-temperature spin-freezing phenomena in a few representative Kitaev quantum materials can be effectively explained by the Halperin and Saslow (HS) hydrodynamic modes relevant for non-trivial spin glass materials. The linearly dispersive HS modes are hypothesized to account for instigating non-abelian defect propagation, thereby inducing a spin jam state in the low-temperature regime in frustrated Kitaev magnets. Our investigation reveals that HS modes capture the essence of unconventional spin-freezing ascribed to topological origin in two dimensional (2D)Kitaev magnets decorated on a honeycomb lattice and its 3D analog hyperhoneycomb that offers a viable ground to extend this framework to a large class of frustrated quantum materials.The nature of low-temperature spin-freezing in frustrated Kitaev magnets P. Khuntia January 14, 2024 ========================================================================§ INTRODUCTION In correlated quantum materials, a synergistic interplay between competing degrees of freedom, and anisotropy conspire with frustration induced quantum fluctuations, offer a viable basis to realize a plethora of exotic quantum states with low-energy fractional excitations promising to address some of the enduring themes in quantum condensed matter <cit.>. The non-trivial ground states of frustrated quantum magnets at T→ 0, characterized by smooth energy landscapes, can preclude the localization in condensed matter <cit.>. Quenched disorder such as atomic vacancies or randomness in exchange interactions in frustrated quantum magnets plays a vital role in deforming smooth energy landscapes into rugged energy landscapes, thereby leading to localization and the emergence of glassiness characterized by slow spin dynamics <cit.>. The emergence of glassiness in the low-temperature limit of some disorder-free frustrated Mott insulators is attributed to topological origin <cit.>. Landau theory of symmetry breaking is quite successful in describing the conventional phase transitions associated with local order parameters. 
However, Landau theory is inadequate to capture the central essence of quantum phases that are driven by non-thermal control parameters and electron correlations, such as quantum spin liquids, high-temperature superconductivity, and the fractional quantum Hall effect. The concept of topological order <cit.> was introduced to elucidate the phase transitions of such unconventional quantum states within the mathematical framework of topology. Topologically ordered states are robust against perturbations and are characterized by topological degeneracy, fractional excitations, emergent gauge fields, non-local entanglement, and a finite topological entropy at zero temperature, with far-reaching implications for advancing our understanding of fundamental physics and for next-generation technologies <cit.>. A quantum spin liquid (QSL) state is a highly entangled state of quantum matter without a symmetry-breaking phase transition down to absolute zero temperature and is characterized by fractional excitations and non-local spin correlations <cit.>. The QSL state was originally proposed for S = 1/2 spins decorating a Heisenberg triangular lattice <cit.>; however, the triangular lattice with nearest-neighbor isotropic Heisenberg exchange interaction hosts a magnetically ordered state <cit.>. Later, it was demonstrated that next-nearest-neighbor exchange interactions <cit.>, magnetic anisotropy <cit.>, and higher-order interactions <cit.> can stabilize a QSL in frustrated magnets. In this context, spin-orbit-driven frustrated honeycomb magnets with bond-dependent, highly anisotropic interactions between J_eff = 1/2 degrees of freedom (see Fig. 1a-b) provide a natural habitat to host the celebrated Kitaev QSL state, characterized by a spectrum of deconfined fractional excitations such as Majorana fermions <cit.>. In addition, external perturbations that break time-reversal symmetry stabilize a spin gap, supporting the excitation of non-abelian anyons in the QSL state <cit.>. The exactly solvable Kitaev Hamiltonian consists of spin-orbit-driven, bond-dependent nearest-neighbor Ising interactions between J_eff = 1/2 moments generated by the crystal electric field and spin-orbit coupling (see Fig. 1a-b) and is represented as ℋ = -∑_<ij> K_γ S_i^γ S_j^γ, where γ = x, y, z and K_γ is the bond-dependent coupling constant for nearest-neighbor sites i and j, as shown in Fig. 1a <cit.>. The emblematic Kitaev model hosts a solution where the spin-1/2 is fractionalized into four fermions; alternatively, one can write the Hamiltonian as ℋ = -1/4 ∑_i,j K_γ b_i^γ b_j^γ c_i c_j. Subsequently, a set of four Majorana fermions emerges, with three immobile fermions {b^x, b^y, b^z} and one mobile fermion c <cit.>. The manifestation of Majorana fermions engenders the emergence of a Z_2 gauge field <cit.>. Recently, an emergent glass-like state was observed in the celebrated Kitaev magnet α-RuCl_3 in the intermediate-magnetic-field regime, as evidenced by the non-linear susceptibility <cit.>, wherein the enhancement of the non-linear susceptibility around the spin-glass temperature is related to the fluctuation of magnetic moments in the host lattice <cit.>. Density-matrix renormalization group calculations at zero temperature in the intermediate-field regime suggest the emergence of glassiness owing to the slowing down of Z_2 fluxes in the proximity of the U(1) spin liquid region <cit.>.
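The conservation of the Z_2 fluxes invoked here can be made concrete on a single hexagon. The following self-contained Python sketch (ours, for illustration only; the bond labeling is one admissible choice, not prescribed by the references) builds the Kitaev Hamiltonian on one hexagonal plaquette and verifies numerically that the plaquette flux operator, written in terms of Pauli matrices, commutes with the Hamiltonian and squares to the identity.

```python
# Check that the hexagon flux W = sigma_1^z sigma_2^x sigma_3^y sigma_4^z
# sigma_5^x sigma_6^y (the 2^6 factor converts S = sigma/2 to Paulis) is a
# conserved Z_2 quantity of a single-hexagon Kitaev Hamiltonian.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}

def op(site, label, n=6):
    """Pauli operator `label` on `site` of an n-site cluster, identity elsewhere."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = pauli[label]
    return reduce(np.kron, mats)

# Kitaev bonds around one hexagon (sites 0..5); each bond label is chosen to
# differ from the two flux labels at its end sites, as on the honeycomb lattice.
bonds = [(0, 1, 'y'), (1, 2, 'z'), (2, 3, 'x'),
         (3, 4, 'y'), (4, 5, 'z'), (5, 0, 'x')]
K = 1.0
H = -K * sum(op(i, g) @ op(j, g) for i, j, g in bonds)

# Plaquette flux operator with labels (z, x, y, z, x, y) on sites 0..5:
W = reduce(lambda a, b: a @ b,
           [op(i, g) for i, g in enumerate(['z', 'x', 'y', 'z', 'x', 'y'])])

print(np.allclose(H @ W - W @ H, 0))     # True: the flux commutes with H
print(np.allclose(W @ W, np.eye(64)))    # True: eigenvalues of W are +/-1
```

Each bond term anticommutes with W at both of its end sites, so the two sign flips cancel and the full Hamiltonian commutes with the flux, which is the mechanism behind the extensive set of conserved quantities discussed next.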
In many-body localized systems, the interplay of many-body interactions and disorder gives rise to an emergent integrability that puts a strong constraint on thermalization <cit.>. Recent calculations on the one-dimensional spin-1 Kitaev chain show a fragmentation of the Hilbert space into unequal disconnected subspaces <cit.>. In integrable systems, the sheer abundance of conserved quantities limits the ability of an initial state to fully explore all feasible configurations in the Hilbert space; such systems lack self-thermalization in isolation, leading to the emergence of weak ergodicity breaking in Kitaev materials <cit.>. In the honeycomb Kitaev model, the conserved quantities take the form of flux operators defined around each hexagon, expressed as a product of six spin operators, W_1-6 = 2^6 S_1^z S_2^x S_3^y S_4^z S_5^x S_6^y (see Fig. 1a) <cit.>. The interplay between the integrability <cit.> inherent in the Kitaev model and the presence of quenched disorder <cit.>, resulting in many-body localization, can give rise to intrinsic glassiness in frustrated honeycomb lattices. Understanding the origin of spin-freezing in frustrated magnets is of paramount importance and may provide vital clues for the experimental realization of topological states, including the elusive QSL state and its associated fractional quantum numbers. Frustration leads to massive ground state degeneracy and low-energy excitations with topological characteristics, which are reflected in the non-trivial behavior of the magnetization and specific heat of frustrated spin glasses, in contrast to conventional spin-glass materials <cit.>. The experimental search for a topological origin of spin freezing began with the investigation of non-trivial spin glass behavior in the magnetoplumbite SrCr_9pGa_12-9pO_19 <cit.> (SCGO) and the spinel Ba_2Sn_2ZnCr_7pGa_10-7pO_22 <cit.> (BSZCGO), decorated on a kagome lattice. In contrast to dilute magnetic alloys, these frustrated magnets feature densely populated magnetic ions with the potential to host exotic quantum phenomena. The manifestation of a glassy state is characterized by the splitting of the ZFC-FC magnetic susceptibility (see Fig. 1c) in these systems, which is ascribed to quenched disorder and frustration-induced quantum fluctuations. This unique topological state is commonly referred to as a `spin jam' <cit.>. In such geometrically frustrated systems, the mechanism of "order-by-disorder" driven by quantum fluctuations perturbs the classical ground state degeneracy, leading to a metastable state within the complex and rugged energy landscape <cit.>. The spin jam state can be well explained by the Halperin-Saslow (HS) modes. Halperin and Saslow proposed hydrodynamic modes on a background of frozen spins, with a linear dispersion relation predicted below the freezing temperature <cit.>. In 2D frustrated quantum magnets, the T^2 behavior of the magnetic specific heat (C_m ∝ T^2) can be described by mean-field low-energy excitations termed 'spaghetti modes', characterized by a length scale L_0 extending up to the order of 10^2 spins, which may offer a huge energy barrier for tunneling from one local minimum to another <cit.>. Anderson proposed a scaling behavior in which the free-energy fluctuation, being the sum of the interaction energies at a generalized plane boundary separating large blocks of spins, varies as the square root of the boundary area, i.e., ⟨E⟩ ∝ A^{1/2} <cit.>.
In the spin jam state, finite-length spin folds are present, and the energy fluctuations originate from the interaction between these spin folds, giving rise to metastable states that drive the system into a glassy phase. In this context, the experimental realization of unconventional ground states, the associated low-energy excitations, and their interactions in frustrated Kitaev quantum materials, decorated on the 2D honeycomb lattice and its 3D analog known as the hyperhoneycomb lattice, offers a viable basis for testing theoretical conjectures and sets a stage for topological quantum computing <cit.>. Herein, we provide a comprehensive account demonstrating that the unusual low-temperature spin freezing in a few selected frustrated Kitaev magnets follows the Halperin-Saslow framework relevant for spin glasses. In this framework, the HS modes follow a linear dispersion, and these hydrodynamic modes are coupled to the atomic spins, thereby probing the behavior of spins decorated on a frustrated spin-lattice at low temperature. Our phenomenological interpretation of the experimental results in frustrated Kitaev magnets on the honeycomb lattice captures the essence of the relevant low-energy excitations in the spin-freezing state. Furthermore, we provide a comparative account of the effect of doping on the freezing temperature in conventional spin glass materials and in the non-trivial spin glass state observed in frustrated Kitaev magnets. Our analysis and the subsequent discussion based on HS hydrodynamic modes reveal that the exotic low-energy excitations in frustrated Kitaev magnets are most likely of topological origin, which is entirely different from conventional spin glass materials. In essence, the HS formalism has wide applicability to a broad class of frustrated magnets. We also propose a phase diagram taking into account the free energy landscape and the observed experimental signatures of Kitaev magnets with distinct magnetic phases, highly relevant for the exploration of a promising class of frustrated quantum materials for the faithful realization of topological and quantum states with exotic quasi-particle excitations. § FORMALISM OF HALPERIN AND SASLOW (HS) MODE Halperin and Saslow proposed a nonequilibrium hydrodynamic description for spin glasses and helical spin ordering, in which the O(3) symmetry is completely broken (locally) <cit.>. In order to account for the non-equilibrium states, the magnetization density m_α(r) was introduced, corresponding to three slowly varying rotational angles θ_α(r), α = 1, 2, 3. The free energy cost due to long-wavelength fluctuations of θ_α(r) and low-frequency fluctuations of m_α(r) is expressed by F[m, θ] = (1/2) ∑_{α=1}^{3} ∫ d^3r [χ^{-1} m_α^2 + ρ_s (∇θ_α)^2], where χ and ρ_s are the magnetic susceptibility and spin stiffness, respectively. The spin stiffness characterizes the change in free energy of a frustrated magnet when it undergoes a modulation of its spin texture. The spin texture is identified by short-range spin correlations and an oscillating spin density profile owing to topological defects in frustrated magnets <cit.>.
In the spin glass state, the two variables θ_α and m_α satisfy the commutation relation [θ_α(r), m_β(r')] = iħγ δ_αβ δ(r - r'), where γ = gμ_B/ħ is the gyromagnetic ratio and α, β are the Cartesian components corresponding to the magnetization density along the x, y, and z directions. By applying Heisenberg's equation of motion, associated with the commutation bracket stated in equation (3), in conjunction with equation (2), the two coupled equations of motion that govern the dynamics of m_α and θ_α are represented as <cit.> ∂θ_α(r)/∂t = γ δ(ΔF)/δm_α(r) = γ χ^{-1} m_α(r), ∂m_α(r)/∂t = -γ δ(ΔF)/δθ_α(r) = γ ρ_s ∇^2 θ_α(r). By solving the above pair of coupled equations, one obtains the dispersion relation for the propagation of low-frequency spin waves, ω = ± ck, where the wave propagation velocity turns out to be c = γ(ρ_s/χ)^{1/2}. The spin dynamics in densely populated frustrated spin glass materials may host two interesting scenarios: (i) excitations pertaining to small-amplitude motions around equilibrium or quasi-equilibrium states, known as spin waves, and (ii) barrier-mode excitations that involve large-amplitude motions overcoming the energy barriers of metastable states arising from topological defects <cit.>. Gapless spin wave modes can be realized in materials wherein the magnetic interactions maintain symmetry under rotations in an n-dimensional spin space <cit.>. Phenomenologically, below the freezing temperature T_g, the Halperin-Saslow (HS) modes exhibit a gapless excitation spectrum. The behavior is similar to the observation of sound waves in glasses, wherein the fluctuations of random forces are averaged out in the long-wavelength limit <cit.>. Remarkably, the bosonic excitations exhibit distinctive power-law characteristics in the magnetic specific heat, namely C_m ∼ T^{D/μ}, where D represents the dimension of the spin-lattice and μ signifies the exponent governing the energy dispersion. It should be noted that this behavior of the magnetic specific heat holds true exclusively for the propagation of undamped modes <cit.>. Consequently, for the linear HS mode, the magnetic specific heat should exhibit a T^2 behavior in a two-dimensional spin-lattice. This behavior encapsulates the intricate interplay between competing degrees of freedom, anisotropy, low-energy excitations, and the thermodynamic response in frustrated magnets, including Kitaev materials. § MAGNETIC PROPERTIES Competing interactions, spin-orbit-driven anisotropy, and quantum fluctuations play a vital role in the manifestation of topological quantum states in Kitaev materials. The magnetic susceptibility is the response of the magnetization to an external magnetic field and provides crucial information concerning the ground state properties of frustrated magnets. In a weak magnetic field, a bifurcation of the susceptibility recorded in ZFC and FC modes below the freezing temperature is a characteristic feature of spin glass materials. In the ZFC protocol, we observe a linear response of the susceptibility (χ_lr) due to the fluctuation of the magnetization in a particular metastable state characterized by a long relaxation time (see Fig. 1c). In a given applied magnetic field below the freezing temperature, the spin state exhibits long-lived stability. However, with a gradual increase in temperature, the susceptibility shows an asymptotic enhancement.
In the FC case, by contrast, as the material is cooled from high to low temperature, the system likely settles into one of the lowest free energy states, which is an average over all ensembles, and the susceptibility can be denoted χ_eq, as depicted in Fig. 1c. In terms of the Edwards-Anderson spin-glass order parameter, which is associated with the remanent properties of the material under study, the two susceptibilities can be expressed as χ_lr = (1 - q_EA)/T and χ_eq = (1/T) ∫ dq P(q)(1 - q) <cit.>. The spin-glass order parameter q_EA is defined as the thermal and disorder average of the square of the magnetic moment, which can be mathematically expressed as q_EA = ⟨S_i⟩^2 <cit.>. The magnetic susceptibility measured in ZFC and FC modes can be represented in terms of the free energy as <cit.> χ_eq = 1/(∂^2 F_eq/∂M^2), χ_lr = 1/(∂^2 F/∂M^2). Here χ_eq is the susceptibility corresponding to the free energy at equilibrium (F_eq), which is obtained following the field-cooling protocol and is depicted by the dashed convex envelope in the inset of Fig. 1c. χ_lr, on the other hand, results from the magnetization in the zero-field-cooled (ZFC) mode, which develops within one of the possible metastable states (F) enclosed within the quasi-equilibrium free energy envelope. Since the second derivative is a measure of curvature, it is obvious that χ_eq > χ_lr. This deviation from linear response theory arises from the extensive exploration of all phase points in phase space in the field-cooling mode. In contrast, under zero-field-cooling conditions, the system explores only a subset of the phase space, a phenomenon termed broken ergodicity, characterized by relaxation times long compared to the timescale of the experiment <cit.>. In the simplest approximation, the difference between the susceptibilities measured following the two protocols is given by χ_eq(H) - χ_lr(H) = (d^2F_m/dM^2)^{-1} = dM_r/dH, where M_r is the remanent magnetization. Recently, unconventional spin freezing behavior has been observed in Kitaev materials, driven by perturbations such as external magnetic fields <cit.>, applied pressure <cit.>, doping <cit.>, or quenched disorder <cit.>. The frustrated Kitaev magnet Li_2RhO_3, in which 4d^5 Rh^4+ ions constitute a honeycomb spin-lattice with an effective moment J_eff = 1/2, shows non-trivial spin-freezing with unusual low-energy excitations, as reflected in the magnetic susceptibility, specific heat, and NMR relaxation results (Figs. 2a, 3a) <cit.>. Notably, for d^5 ions in an octahedral environment of 4d and 5d transition metals, the crystal electric field (CEF) splits the energy levels of the t_2g and e_g orbitals, as shown in Fig. 1b <cit.>. Furthermore, spin-orbit coupling splits the unquenched t_2g manifold into a Kramers doublet ground state with J_eff = 1/2 and a quartet state with J_eff = 3/2 (see Fig. 1b) <cit.>. For a system with a half-filled Kramers doublet, the on-site Coulomb interaction induces a gap in the half-filled band, resulting in a weak spin-orbital Mott insulator <cit.>. Experimental observations, in tandem with theoretical calculations, indicate that Li_2RhO_3 exhibits Mott insulating behavior, characterized by an energy gap Δ ∼ 80 meV <cit.>. It shows spin glass behavior around 6 K, with a remanent magnetization M_r = 1.4 × 10^-3 μ_B/Rh^4+ at 2 K in an applied magnetic field μ_0H = 100 Oe <cit.>. The first-order derivative of the magnetic susceptibility of Li_2RhO_3 recorded in ZFC mode, together with that obtained in FC mode, is illustrated in Fig. 2b.
In Fig. 2b, the point where the slope changes sign, i.e., dχ/dT = 0, corresponds to the glass transition temperature in a given magnetic field. Strikingly, this scenario also holds in other spin-orbit driven 4d- and 5d-based frustrated Kitaev magnets such as Cr-doped α-RuCl_3 and Ti- and Ru-doped A_2IrO_3 (A = Na, Li), as presented in this work (Fig. 2), which suggests the existence of a common spin-freezing mechanism in this class of frustrated magnets. Non-magnetic Ti doping at the Ir site in the extensively studied Kitaev magnet Na_2IrO_3 induces spin freezing, and T_g is suppressed upon increasing the Ti concentration, as shown in Fig. 2c, in contrast to the behavior observed in unfrustrated spin-glass materials. Specific heat measurements are an excellent probe for gaining insight into the ground state and the associated low-energy quasi-particle excitations in Kitaev magnets. The magnetic specific heat C_m of Li_2RhO_3, obtained after subtracting the lattice contribution using the specific heat of the non-magnetic analog Li_2SnO_3, shows a T^2 dependence in the low-temperature limit below the spin glass temperature (see Fig. 3a), suggesting the presence of unconventional low-energy excitations <cit.>. Notably, such a T^2 dependence is characteristic of two-dimensional antiferromagnetic Goldstone modes, since C ∼ T^D, where D is the dimensionality of the spin-lattice. Interestingly, this temperature dependence of the magnetic specific heat is robust against external magnetic fields up to 9 T, as shown in the inset of Fig. 3a. The C_m(H,T)/T vs. T data <cit.> show a change of curvature at T_broad = 10 K, which ensures that all field derivatives of C_m(H,T)/T vanish at T_broad. Considering ∂^2 C_m/∂H^2 = ∂^2 χ/∂T^2, one expects the first-order temperature derivative of the field-cooled magnetic susceptibility to exhibit an extremum around T_broad <cit.>. However, it is noteworthy that the minima observed in dχ/dT, depicted in the inset of Fig. 2b (dashed line) for all magnetic fields up to 1.5 T, occur below the T_broad derived from the specific heat data. Since the specific heat is more sensitive to low-energy hydrodynamic modes and short-range spin correlations, this may explain the disparate features observed in the two measurements; alternatively, experimental techniques with different characteristic time scales may probe slightly different spin dynamics. This scenario is in sharp contrast to that of conventional spin glasses, where the system cannot respond to the dynamically evolving magnetic field and temperature fluctuations during specific heat measurements, so that the specific heat is less sensitive to glassiness <cit.>. In Li_2RhO_3, the magnetic entropy recovered at 45 K is much lower (0.35 Rln2) than that expected for J_eff = 1/2 moments, suggesting a huge ground state degeneracy and short-range spin correlations, in agreement with the presence of a broad maximum in the specific heat and in the NMR spin-lattice relaxation rate <cit.>. The specific heat rules out any Schottky contribution, and the observed behavior of the magnetic specific heat in Li_2RhO_3 is at variance with that found in conventional spin glass materials, wherein C_m ∼ T <cit.>. Remarkably, a similar C_m ∼ T^2 behavior has been observed below the spin-freezing temperature, far below the characteristic exchange interaction scale, in other Kitaev materials such as Cr-doped α-RuCl_3 <cit.>, Ti-doped Li_2IrO_3 <cit.>, and Ru-doped Na_2IrO_3 <cit.>, as shown in Fig. 3a.
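The dχ/dT = 0 criterion used above is straightforward to apply numerically; the following is a minimal sketch with synthetic placeholder data, not measured values.

import numpy as np

T = np.linspace(2.0, 20.0, 400)                    # temperature grid (K)
chi_zfc = np.exp(-((T - 6.0) / 3.0) ** 2)          # schematic ZFC curve peaking near 6 K

dchi_dT = np.gradient(chi_zfc, T)                  # first-order temperature derivative
crossing = np.where(np.diff(np.sign(dchi_dT)) != 0)[0]
T_g_est = T[crossing[0]]                           # first zero crossing of dchi/dT

print(f"estimated T_g = {T_g_est:.2f} K")          # ~6 K for this toy curve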
To shed light on these unconventional low-energy excitations reflected in the specific heat of the spin-orbit driven frustrated Kitaev magnets, and to demonstrate a commonality within this class of materials, we have analyzed the specific heat of four representative Kitaev quantum magnets following the Halperin-Saslow formalism. The coherent propagation of the HS mode can be captured with a finite spin-stiffness constant in a spin texture with characteristic length scale L_0. Within this finite spin texture, the low-energy spin excitations of a 2D spin-lattice give rise to the specific heat
C_m/R = [3√(3)ζ(3)/2π](a k_B T/ħ𝒟)^2 − (√(3)π/2)(a/L_0)^2
at temperatures T < T_g ≪ θ_CW, where θ_CW refers to the energy scale of the exchange interaction and 𝒟 is the spin stiffness constant associated with the free energy <cit.>. It is worth noting that we employ the spin stiffness constant 𝒟 for dimensional normalization, distinct from the notation ρ_s used previously for the same quantity, and L_0 is the characteristic length scale over which the Goldstone modes are well defined. The first term corresponds to the quadratic temperature dependence of C_m for 2D antiferromagnetic spin waves, and the second term is a size-dependent negative shift of the specific heat, valid for a 2D gapless linearly dispersive mode with frequency ħω/k_B < T < θ_CW <cit.>. Considering only antiferromagnetic ordering at T ∼ θ_CW, the spin stiffness can be calculated from the relation 𝒟_0^2 = [3√(3)ζ(3)/4π](a k_B θ_CW/ħ)^2/ln(2S+1) <cit.>. Fitting the low-temperature experimental specific heat data of Li_2RhO_3 below T_g with the expression relevant for the low-energy excitations of the HS mode yields 𝒟_0 ∼ 1960 m/s and 𝒟 ∼ 1064 m/s with a spin-texture length L_0 ∼ 16 nm; the corresponding fit is shown in Fig. 3b. This framework has been extended to the Cr-doped α-RuCl_3, Ti-doped Li_2IrO_3, and Ru-doped Na_2IrO_3 Kitaev materials, and the resulting parameters are presented in Table 1, exemplifying a non-trivial spin-freezing mechanism in this class of spin-orbit driven frustrated quantum materials. The reduction of the stiffness constant 𝒟 compared with 𝒟_0 is ascribed to softening owing to magnetic frustration. Remarkably, the T^2 dependence of the magnetic specific heat shown in Fig. 3a for the Kitaev magnets, wherein the magnetic frustration is mediated by spin-orbit driven anisotropic bond-dependent exchange interactions, can be resolved by assuming three non-degenerate hydrodynamic modes in the presence of a magnetic field. Upon application of a magnetic field H, the degeneracy of the hydrodynamic modes is lifted, and the frequencies of the modes are expressed as <cit.>:
ω_± = ±gμ_B H/2 + √((gμ_B H/2)^2 + (ck)^2)
where g is the Landé g-factor, c is the velocity of the hydrodynamic modes, and k is the wave vector. In the limit (gμ_B H/2) ≫ ck, a Taylor expansion yields ω_+ ≈ gμ_B H + c^2 k^2/gμ_B H and ω_- ≈ c^2 k^2/gμ_B H, while ω_0 = ck is a field-independent excitation with a linear dispersion relation. The quadratically dispersing mode ω_- compensates the gapped mode ω_+ to some extent at low energy. This compensation accounts for the observed negligible deviation from the T^2 behavior of C_m at low temperatures below the spin-glass temperature. The proportionality constant A (C_m = AT^2) is given by A = [3ζ(3)k_B^2 R V/π d ħ^2] ∑_i 1/c_i, where V is the volume of the unit cell, d is the spacing between successive layers of the honeycomb lattice, and c_i is the velocity of the spin wave mode along the three spatial directions.
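A minimal numerical sketch of this HS specific-heat form is given below; the fitted Li_2RhO_3 values of 𝒟 and L_0 quoted above are reused, while the lattice constant a and θ_CW are assumed placeholders, not values from the text.

import numpy as np
from scipy.constants import k as k_B, hbar
from scipy.special import zeta

a = 5.0e-10          # lattice constant (m), assumed for illustration
theta_CW = 50.0      # Curie-Weiss scale (K), assumed placeholder
S = 0.5              # J_eff = 1/2
L0 = 16e-9           # spin-texture length (m), from the fit quoted above
D = 1064.0           # fitted spin stiffness (m/s), from the text

def C_m_over_R(T):
    """HS specific heat of a 2D spin texture, C_m/R, as quoted in the text."""
    term1 = (3 * np.sqrt(3) * zeta(3) / (2 * np.pi)) * (a * k_B * T / (hbar * D)) ** 2
    term2 = (np.sqrt(3) * np.pi / 2) * (a / L0) ** 2
    return term1 - term2

# Bare stiffness D_0 from the Curie-Weiss scale, as in the relation above.
D0 = np.sqrt((3 * np.sqrt(3) * zeta(3) / (4 * np.pi))
             * (a * k_B * theta_CW / hbar) ** 2 / np.log(2 * S + 1))
print(f"D0 ~ {D0:.0f} m/s, C_m/R at 2 K ~ {C_m_over_R(2.0):.3e}")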
Apart from the HS mode, the robustness of the specific heat against external magnetic fields of up to 9 T at temperatures below 10 K in Li_2RhO_3, shown in the inset of Fig. 3a, is most likely associated with pseudo-Goldstone modes, which are fostered by the presence of non-collinear antiferromagnetic ordering <cit.>. Notably, a broad peak in C_m/T is observed around T = 10 K, indicating the persistence of short-range spin correlations in Li_2RhO_3 <cit.>. In an isotropic unfrustrated system, a λ-type peak in the specific heat is typically associated with the onset of long-range magnetic ordering, which can be suppressed completely by applying a critical field H_C(0) ∼ k_Bθ_CW/gμ_B. One would therefore expect a complete suppression of the peak in the specific heat at a critical field of approximately H_C(0) ∼ k_B T_peak/gμ_B = 7.4 T if the peak at T = 10 K were associated with long-range magnetic ordering in Li_2RhO_3 <cit.>. However, it is worth noting that, even in an applied field of 9 T, there is no suppression of the observed peak in Li_2RhO_3. A similar scenario is also observed in Cr-doped α-RuCl_3, Ti-doped Li_2IrO_3, and Ru-doped Na_2IrO_3. The broad peak in the specific heat and the T^2 behavior of C_m below the spin-glass temperature (see Fig. 3) are quite remarkable and are associated with the presence of short-range spin correlations and abundant low-energy excitations in these frustrated magnets <cit.>. In the HS framework, the magnetic susceptibility χ_m and C_m/T^2 below T_g can be linked to yield two characteristic energy scales, denoted E_1 and E_2, which are expressed as follows:
E_1 = 2g^2 μ_B^2 S(S+1) N_A/(z χ_m) = k_B τ_1
E_2 = [3n_p ζ(3)/π](k_B^3/g^2 μ_B^2)(χ_M/(C_m/T^2)) = k_B τ_2
Here z is the coordination number, i.e., the number of nearest-neighbor magnetic ions, which is 3 for the honeycomb lattice, and n_p is the number of degenerate hydrodynamic modes. The two energy scales E_1 and E_2 are intricately linked to the exchange interaction energy and the spin stiffness, which are in turn comparable to the two distinct temperature scales θ_CW and T_g, respectively. Interestingly, external perturbations such as doping at the magnetic site of Kitaev magnets can lead to intriguing physical phenomena such as spin freezing, as in Ru_1-xCr_xCl_3 or Na_2Ir_1-xRu_xO_3; in such a scenario, the relevant energy scales can be extracted from suitably modified expressions for E_1 and E_2. The derived energy scales for selected Kitaev quantum materials are tabulated in Table 1 and suggest a common underlying mechanism governing the unconventional spin-freezing phenomena in frustrated Kitaev magnets. More strikingly, in quantum materials with ferromagnetic interactions, the undamped linear dispersion of the spin wave mode leads to a T^3 behavior of the specific heat below the freezing temperature <cit.>. Such behavior is observed in the 3D variants of the Kitaev honeycomb magnet, namely the hyperhoneycomb lattices β-ZnIrO_3 and β-MgIrO_3, where the dominant exchange interaction between the effective spin-1/2 (J_eff = 1/2) Ir^4+ moments is ferromagnetic, with θ_CW ∼ 45.6 K and 56 K, respectively <cit.>. In this hyperhoneycomb spin-lattice, the Z_2 flux operator W is a conserved quantity defined as a loop operator encompassing precisely ten lattice sites <cit.>. Both 3D Kitaev materials, β-ZnIrO_3 and β-MgIrO_3, demonstrate a distinctive spin freezing phenomenon, characterized by the onset of a weak anomaly in thermodynamic experiments at temperatures around 12 K and 22 K, respectively <cit.>.
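As a quick numerical aside, the energy scales E_1 and E_2 defined above can be evaluated as in the following sketch; cgs-style units are assumed, and the susceptibility and C_m/T^2 inputs are placeholders rather than digitized data.

import numpy as np
from scipy.special import zeta

k_B = 1.381e-16      # erg/K
mu_B = 9.274e-21     # erg/G
N_A = 6.022e23
g, S, z, n_p = 2.0, 0.5, 3, 3          # honeycomb: z = 3; three hydrodynamic modes

chi_m = 1.0e-2       # molar susceptibility (emu/mol), assumed placeholder
Cm_over_T2 = 2.0e4   # C_m/T^2 (erg mol^-1 K^-3), assumed placeholder

E1 = 2 * g**2 * mu_B**2 * S * (S + 1) * N_A / (z * chi_m)
E2 = (3 * n_p * zeta(3) / np.pi) * (k_B**3 / (g**2 * mu_B**2)) * chi_m / Cm_over_T2

print(f"tau1 = E1/k_B = {E1 / k_B:.1f} K, tau2 = E2/k_B = {E2 / k_B:.1f} K")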
In these hyperhoneycomb materials, the magnetic contribution to the specific heat exhibits a notable T^3 dependence below the freezing temperature, indicating linear dispersion behavior, which was initially postulated to elucidate the hydrodynamic modes of spin waves in ferromagnetic materials <cit.>. For ferromagnetic long-range ordering, spin wave excitations lead to a quadratic dispersion relation, so the linear dispersion is attributed to planar ferromagnetic excitations. The T^3 behavior remains intact for β-ZnIrO_3 in applied magnetic fields of up to 5 T, which is due to the compensation of ω_+ by ω_-; the propagation of the linear mode (ω_0) accounts for the data-collapsing behavior of the specific heat in an external magnetic field <cit.>. In a seminal paper <cit.>, Anderson et al. introduced the idea of the two-level system (TL), which successfully describes the linear behavior of the specific heat in conventional spin glasses. The scenario differs from the spin jam state in frustrated magnets, where spin dynamics persists within a spin cluster. For a cluster spin glass, the spins become frozen within the cluster. The specific heat contribution due to a cluster spin glass can be written as C_TL = (π^2/6)k_B^2 T n(0), where n(0) is the density of states, accounting for a linear specific heat with energy scale E ∼ k_B T_g. In principle, contributions to the specific heat arise from both the spin jam and the spin glass cluster mechanisms <cit.>. As such, the specific heat can be expressed as the linear combination C = f·C_HS + (1−f)·C_TL, where C_HS represents the specific heat contribution from the spin jam component, C_TL that of the spin glass cluster, and f is a weighting factor indicating the relative proportion of each contribution. The magnetic susceptibility and specific heat provide a consistent picture in accord with the HS framework, manifesting a common spin-freezing mechanism in the selected Kitaev magnets presented here. Remarkably, the power-law behavior of the NMR relaxation rate below T_g in Li_2RhO_3 is in agreement with the presence of unconventional low-energy excitations <cit.>.

§ EFFECT OF DOPING ON SPIN FREEZING

The influence of external perturbations, such as doping and magnetic field, is quite striking and often leads to quantum phase transitions such as Bose-Einstein condensation <cit.>, spin liquids <cit.>, superconductivity <cit.>, and non-trivial spin glasses <cit.> in frustrated magnets. In canonical spin glass systems, the introduction of dopants has a pronounced influence on the underlying physical mechanism at play. Notably, when a minute amount of magnetic dopants such as Mn and Fe is incorporated into non-magnetic host materials, as in Cu_1-xMn_x, Au_1-xFe_x, and Ag_1-xMn_x, the spin-freezing temperature increases substantially with doping concentration. This enhancement of T_g is characterized by a marked scaling response contingent upon the concentration of dopants, as depicted in Fig. 4a <cit.>. In frustrated magnets, the freezing temperature in a spin jam state depends on both the energy of the barrier mode (E_B) necessary for collective spin flipping and the correlation length ξ(T), which can be expressed as T_g ∝ F(E_B, ξ) <cit.>.
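Returning briefly to the two-component decomposition above, a toy sketch of the combined spin-jam/cluster specific heat is given below; the coefficients are illustrative placeholders only.

import numpy as np

A_HS = 2.0e-3      # spin-jam (HS) coefficient, C_HS ~ T^2 (placeholder units)
gamma_TL = 1.0e-3  # cluster (two-level) coefficient, C_TL ~ T (placeholder units)

def C_total(T, f):
    """C = f * C_HS + (1 - f) * C_TL, mixing the two mechanisms above."""
    return f * A_HS * T**2 + (1.0 - f) * gamma_TL * T

T = np.linspace(0.5, 5.0, 10)
print(C_total(T, f=0.8))   # mostly spin-jam-like (quadratic) response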
The spin-freezing temperature T_g remains unchanged upon substitution of non-magnetic impurities in frustrated magnets such as SCGO and BCGO until the separation between the non-magnetic impurities becomes comparable to ξ <cit.>. The effect of doping was investigated in the Kitaev materials Ru_1-xCr_xCl_3, Na_2Ir_1-xRu_xO_3, and Na_2Ir_1-xTi_xO_3 decorated on a honeycomb lattice; by varying the doping parameter x, it was observed that, within a percolation threshold, the effect of doping is less robust than in canonical spin glasses <cit.>. The emergence of a topological quantum state akin to the Kitaev QSL in the presence of an external magnetic field in the celebrated honeycomb lattice α-RuCl_3 is quite remarkable <cit.>. The well-studied Kitaev magnet Na_2IrO_3 shows a zig-zag antiferromagnetically ordered state at T_N = 15 K <cit.>, and a few percent of Ru^4+ doping at the Ir^4+ site gives rise to a topologically protected spin-freezing state <cit.>. The substitution of Ru^4+ at the Ir^4+ site preserves the magnetic exchange path, which suggests that the emergent glassiness can be intrinsic; this could manifest as a spin jam state, given its correlation with intrinsic glassy states <cit.>. As presented in Fig. 4a, the spin-freezing temperature does not change drastically for Ru^4+ doping concentrations 0.1 ≤ x ≤ 0.3 in single crystals of Na_2Ir_1-xRu_xO_3. Notably, upon further increasing the Ru^4+ doping concentration at the Ir^4+ site in Na_2Ir_1-xRu_xO_3 and the non-magnetic Ti^4+ concentration at the Ir^4+ site in Na_2Ir_1-xTi_xO_3, the freezing temperature decreases smoothly. The observed dependence of the spin-freezing temperature on doping concentration is consistent with spin jam theory. Interestingly, upon increasing the doping concentration, we observe an increase in χ(T_g) (see Fig. 2c), while the T_g value is suppressed, which in turn imposes the constraint dχ(T_g)/dT < 0. The observed trend differs from that of conventional spin glasses, wherein an increase in doping concentration typically yields an increase in both χ(T_g) and T_g, adhering to the relationship dχ(T_g)/dT > 0 <cit.>. Notably, the extensively studied Kitaev material α-RuCl_3 hosts a glassy phase upon substitution of S = 3/2 Cr^3+ ions at the J_eff = 1/2 Ru^4+ site, and the glassy phase remains stable for values of x exceeding 0.1 in α-Ru_1-xCr_xCl_3 <cit.>. In contrast, non-magnetic doping of Ir^3+ (5d^6: J_eff = 0) in α-Ru_1-xIr_xCl_3 stabilizes a quantum-disordered QSL-like state for 0.16 ≤ x ≤ 0.4 <cit.>. Notably, data-collapse behavior of the specific heat is observed as a function of the magnetic field and is achieved by plotting H^1-γ/T vs. T/H, where γ (= 0.19) represents the critical exponent typical of random singlets in the quantum-disordered state <cit.>. The observed features suggest that frustrated Kitaev magnets are quite sensitive to external stimuli in undergoing phase transitions, leading to the emergence of a plethora of topological states driven by perturbations in a controlled manner. Taking into account the intriguing phenomena detected in Kitaev magnets owing to the subtle interplay between competing degrees of freedom, their free energy landscape, and the effect of external perturbations on the underlying physical phenomena, we propose a comprehensive phase diagram, shown in Fig. 4b, that reflects the immense promise for the experimental realization of topological and quantum states, such as the Kitaev QSL with elusive Majorana fermions, in this class of frustrated quantum materials.
§ DISCUSSION

The honeycomb magnet stabilizes on a bipartite lattice, and Heisenberg exchange interactions between spins lead to a Néel state at low temperature. However, the spin-orbit driven, anisotropic, bond-dependent frustrated Kitaev exchange interaction between J_eff = 1/2 spins decorated on a honeycomb lattice can host a quantum spin liquid state with deconfined fractional excitations <cit.>. Notably, frustrated Kitaev magnets decorated on a honeycomb lattice are potential candidates to host non-trivial spin freezing of topological origin <cit.>. The order-function formalism associated with the remanent properties of frustrated magnets may be of paramount importance in elucidating a broad class of disordered systems, including spin glasses <cit.>. The low-temperature spin freezing behavior observed in the frustrated Kitaev magnets Li_2RhO_3, Na_2Ir_1-xTi_xO_3, Ru_1-xCr_xCl_3, Li_2Ir_1-xTi_xO_3, and Na_2Ir_1-xRu_xO_3 exhibits characteristic features akin to the hydrodynamic modes with gapless excitations proposed for frustrated spin glass materials <cit.>. The presence of a linearly dispersing mode is evident in the low-temperature specific heat experiments. The T^2 behavior of the magnetic specific heat below the freezing temperature remains independent of the magnetic field, quite different from what is observed in conventional spin-glass materials. The NMR spin-lattice relaxation rate 1/T_1, which tracks low-energy excitations in frustrated magnets, follows a T^2.2 behavior down to 1.8 K in Li_2RhO_3, suggesting the persistence of low-energy excitations below the freezing temperature T_g ∼ 6 K. The broad maximum in 1/T_1 well above T_g reflects the persistence of short-range spin correlations, consistent with the magnetic specific heat <cit.>. The characteristic features observed in the frustrated Kitaev honeycomb magnets reported in this work are in stark contrast to those found in conventional spin glass materials. The higher value of the characteristic energy scale τ_2 with respect to the freezing temperature T_g (Table 1) may be due to the underestimation of next-nearest-neighbor interactions in the HS framework; this can be addressed by increasing the number of nearest neighbors z of the honeycomb spin-lattice from 3 to some effective higher number. In general, conventional spin glass materials, for which the spin stiffness ρ_s = 0, do not support spin wave excitations. Here, spin wave excitations arise from clusters of spins, each of finite extent with a finite spin stiffness constant. These spin clusters undergo independent spin freezing, which is attributed to jamming effects <cit.>. The spin fluctuations observed in frustrated Kitaev magnets by neutron scattering measurements extend down to the freezing temperature and likely originate from the spins located at the boundaries of the spin clusters <cit.>. Taking into account Anderson's proposal regarding the areal scaling of free energy fluctuations at cluster boundaries, one can expect the emergence of a rugged energy landscape beneath an initially flat one <cit.>. It is worth noting that the HS mode couples to both electronic and nuclear magnetic moments through the hyperfine coupling between the nucleus and the surrounding electronic environment <cit.>.
Understanding the nature of the spin correlations and the fluctuations of the hyperfine fields in the ground state of frustrated Kitaev magnets is of paramount importance, as these may underpin the elusive Kitaev QSL and its associated low-energy excitations. In this context, microscopic techniques such as NMR provide an excellent probe for tracking the intrinsic magnetic susceptibility, the nature of the spin correlations, and the associated unconventional low-energy excitations in the ground state of these frustrated magnets <cit.>. The spin-lattice relaxation rate 1/T_1 is related to the spin correlations through the wave-vector-averaged dynamical susceptibility χ_m''(k,ω), and thus probes low-energy spin excitations in frustrated magnets. For instance, the frustrated triangular lattice antiferromagnet NiGa_2S_4 exhibits a power-law T^3 dependence of 1/T_1 <cit.>. The damping of spin wave excitations gives rise to a dynamic susceptibility of the form
χ_m''(k,ω) = (ωχ_M D_s k^2/2)[1/((ω−ck)^2 + (D_s k^2)^2) + 1/((ω+ck)^2 + (D_s k^2)^2)]
where D_s is the spin diffusion constant <cit.>. In the topologically protected state, barrier-mode propagation involves the diffusion of quadruplet, not pair, flips of defect variables by spin operators <cit.>. The barrier mode is therefore thermally suppressed, and in the limit D_s → 0 the dynamic susceptibility becomes χ_m'' = (π/2)ωχ_M[δ(ω−ck) + δ(ω+ck)]. This results in the propagation of undamped spin wave modes which, in conjunction with the linear modes, yield a power-law dependence of the specific heat, reflecting the gapless nature of the low-energy excitations <cit.>. Unlike the hierarchical organization of the free energy observed in conventional spin glasses, a non-hierarchical structure is posited here, supported by the evidence of memory and aging effects in frustrated kagome systems such as SCGO and BCGO and in Kitaev magnets like Li_2RhO_3 and Na_2Ir_0.89Ti_0.11O_3 <cit.>. The aging behavior is less prominent, necessitating observations over large time scales for it to manifest. The presence of a subtle memory effect implies the development of a non-hierarchical landscape featuring a wide, nearly flat, but rough bottom <cit.>. The non-abelian nature of defect propagation in the anisotropic kagome lattice was proposed to explain the memory effect in nearly defect-free kagome materials such as SCGO <cit.>. The defect-free hyperhoneycomb Kitaev magnets β-ZnIrO_3 and β-MgIrO_3 with dominant ferromagnetic exchange interactions exhibit intrinsic glassiness, necessitating further theoretical investigation. The HS framework provides very interesting insights into the spin correlations and the spin-freezing mechanism in 2D and 3D Kitaev magnets. The parameters derived from the analysis of the magnetic susceptibility and specific heat results for selected Kitaev magnets are documented in Table 1. The low-energy excitations of the spin jam state are primarily governed by the HS mode, wherein spin wave excitations emerge from the unconventional spin-freezing dynamics. This stands in contrast to conventional spin glasses, characterized by localized two-level energy systems, where the excitations occur through the barrier mode <cit.>. Our analysis suggests that the HS model is quite apt and generic in elucidating the low-temperature spin freezing mechanism in a large class of frustrated quantum magnets <cit.>.
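The sharpening of this response as D_s → 0 is easy to visualize numerically; the following sketch uses illustrative parameter values only.

import numpy as np

c, k, chi_M = 1.0, 1.0, 1.0                     # mode velocity, wave vector, chi_M (toy units)
omega = np.linspace(0.0, 3.0, 3000)

def chi_im(omega, D_s):
    """chi''(k, omega) for a damped linear mode, as given in the text."""
    lor = lambda w0: 1.0 / ((omega - w0) ** 2 + (D_s * k ** 2) ** 2)
    return omega * chi_M * D_s * k ** 2 / 2 * (lor(c * k) + lor(-c * k))

# As D_s decreases, the peak at omega = ck grows and narrows, approaching the
# delta-function (undamped spin wave) limit quoted above.
for D_s in (0.5, 0.1, 0.02):
    print(D_s, chi_im(omega, D_s).max())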
The possible scenarios that can elucidate the observed unconventional low-temperature behavior of frustrated Kitaev magnets are: (i) In the framework of the Kitaev model, the distinctive spin-orbit driven bond-dependent anisotropic interactions engender a set of conserved quantities intimately associated with the spins, often denoted integrals of motion <cit.>. The existence of this substantial ensemble of conserved quantities, such as the Z_2 flux operator W, can facilitate the emergence of weakly broken ergodicity, thereby leading to the intrinsic manifestation of glassy behavior in the proximity of a clean system <cit.>. (ii) In the absence of a ZFC-FC bifurcation, the analysis of higher-order susceptibilities can provide valuable insights into an intrinsic spin-glass phase, manifesting distinctive anomalies in the vicinity of the glass transition. (iii) Remarkably, intrinsic glassiness represents a plausible magnetic phase that can endure within the domain of quantum fluctuations and topological defects, and that may underpin the elusive Kitaev QSL.

§ CONCLUSION

Frustrated Kitaev quantum materials are potential candidates to host intriguing quantum phenomena such as topological spin glass and non-trivial low-energy excitations, the understanding of which may reveal vital clues to the sought-after Kitaev QSL. Our comprehensive analysis and discussion based on the Halperin-Saslow hydrodynamic linearly dispersing modes elegantly captures the essence of the spin-freezing mechanism. The power-law behavior of the magnetic specific heat at low temperature in a few representative Kitaev magnets decorated on 2D and 3D spin-lattices manifests the persistence of abundant low-energy topological excitations, in sharp contrast to what is found in canonical spin-glass magnets. In addition, our analysis of the magnetic specific heat below the spin-freezing temperature in Kitaev spin glass materials following the HS framework suggests the dominant role of low-energy excitations. Our results reveal that frustrated Kitaev materials are promising candidates for exhibiting intrinsic glassiness, the genesis of which can be attributed to the effect of quantum fluctuations and the imposition of topological constraints. The distinctive character of the free energy landscape and the non-trivial behavior of frustrated Kitaev magnets enable us to establish a rich phase diagram that may provide a new direction for further exploration of the potential of 2D and 3D Kitaev magnets to host quantum and topological states with elusive fractional excitations. The measurement of the non-linear susceptibility, as a direct probe of the Edwards-Anderson order parameter, may provide deep insights into the free energy landscape and the non-trivial low-temperature spin freezing in frustrated Kitaev magnets. Inelastic neutron scattering and low-temperature NMR measurements may shed light on the low-energy excitations below the freezing temperature in this class of frustrated magnets, which is crucial for understanding the nature of the ground state and for designing frustrated Kitaev quantum materials with the desired functionalities. The intriguing question of whether frustrated Kitaev magnets can exhibit spin freezing without quenched disorder prompts further investigation. Additionally, the prevalence of spin-glass behavior in proximate Kitaev quantum spin liquid candidates invites further studies in this direction.
These may deepen our understanding of complex magnetic systems and pave the way for new insights into the behavior of frustrated magnets under the influence of diverse external perturbations such as doping, pressure, and applied magnetic fields.

§ ACKNOWLEDGEMENT

U.J. acknowledges support from the Council of Scientific and Industrial Research, India. P.K. acknowledges funding from the Science and Engineering Research Board and the Department of Science and Technology, India through research grants.
http://arxiv.org/abs/2312.16096v1
{ "authors": [ "U. Jena", "P. Khuntia" ], "categories": [ "cond-mat.str-el", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.str-el", "published": "20231226154423", "title": "The nature of low-temperature spin-freezing in frustrated Kitaev magnets" }
This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant 120E505. O. Oral and F. S. Oktem are with the Department of Electrical Engineering, METU, Cankaya, Ankara 06800, Turkey (e-mail: [email protected], [email protected]). This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Near-field radar imaging systems have recently been used in a wide range of applications, such as medical diagnosis, through-wall imaging, concealed weapon detection, and nondestructive evaluation. In this paper, we consider the problem of reconstructing the three-dimensional (3D) complex-valued reflectivity distribution of a near-field scene from sparse multiple-input multiple-output (MIMO) array measurements. Using the alternating direction method of multipliers (ADMM) framework, we solve this inverse problem by enforcing regularization on the magnitude of the complex-valued reflectivity distribution. For this, we provide a general expression for the proximal mapping associated with such regularization functionals. This equivalently corresponds to the solution of a complex-valued denoising problem which involves regularization on the magnitude. By utilizing this expression, we develop a novel and efficient plug-and-play (PnP) reconstruction method that consists of simple update steps. Due to the success of data-adaptive deep priors in various imaging problems, we also train a 3D deep denoiser to exploit within the developed PnP framework for MIMO imaging. The effectiveness of the developed learning-based PnP approach is illustrated under various compressive and noisy observation scenarios using both simulated data and experimental measurements. The performance is also compared with sparsity priors and the commonly used analytical approaches such as back-projection and Kirchhoff migration. The results demonstrate that the developed technique not only provides state-of-the-art reconstruction performance for 3D real-world targets, but also enables fast computation. Our approach provides a unified general framework to effectively handle arbitrary regularization on the magnitude of a complex-valued unknown and is equally applicable to other radar image formation problems (including SAR).

§ INTRODUCTION

Near-field radar imaging systems have recently been used in a wide range of applications such as medical diagnosis, through-wall imaging, concealed weapon detection, and nondestructive evaluation <cit.>. For various high-resolution imaging applications, there has been growing interest in using multiple-input multiple-output (MIMO) arrays (i.e., multistatic arrays) that contain spatially distributed transmit and receive antennas <cit.>. MIMO arrays offer reduced hardware complexity, cost, and acquisition time compared to conventional monostatic planar arrays (with colocated transmitter and receiver antennas). In near-field radar imaging, the three-dimensional (3D) complex-valued scene reflectivity has to be reconstructed from the radar data, which is generally acquired using sparse arrays. This requires solving an ill-posed inverse problem.
Consequently, the imaging performance greatly depends on the underlying image reconstruction method and the utilization of priors. Traditional direct inversion methods do not utilize any prior information and are solely derived to obtain a direct solution based on the forward (observation) model expression. These methods generally involve back-projecting the measurements to the image domain and then employing a filter-like operation <cit.>. Kirchhoff migration <cit.>, back-projection <cit.>, and range migration <cit.> are commonly used direct inversion methods for near-field radar imaging. Although these methods offer low computational complexity, their reconstruction performance substantially degrades in ill-posed settings with limited and noisy data. Regularization-based methods can yield more successful reconstructions than these traditional methods by incorporating prior information about the unknown 3D image cube into the reconstruction process. One way to utilize prior information is to minimize an appropriately formulated cost function with hand-crafted regularization terms <cit.>. Examples include total-variation (TV) <cit.> and ℓ_1 regularization. These commonly used sparsity priors are motivated by compressed sensing theory <cit.> and have been shown to offer promising imaging performance in various compressive imaging settings, including radar imaging <cit.>. For near-field radar imaging, existing regularized reconstruction methods generally enforce smoothness or sparsity on the complex-valued reflectivity distribution <cit.>. These methods are therefore built on the assumption that the scene reflectivity has locally correlated phase and magnitude. However, for many applications, the phase of the reflectivity at a particular point can be more accurately modeled as random and uncorrelated with the phase at other points <cit.>. This is because phase shifts can occur when imaging rough surfaces, and also at the air/target interface due to the electrical properties of materials <cit.>. As a result, enforcing regularization only on the magnitude can improve imaging performance compared to enforcing it directly on the complex-valued reflectivity, as has been observed in earlier SAR works <cit.>. With the recent advancements in deep learning, learned reconstruction methods have emerged as powerful alternatives to regularization-based methods with hand-crafted analytical priors <cit.>. These methods can utilize deep neural networks (DNNs) to learn data-driven deep priors and then incorporate these priors into a model-based reconstruction as a regularizer. Learned plug-and-play (PnP) regularization <cit.> and unrolling-based methods <cit.> are examples of such approaches. In particular, the key idea in learned PnP methods is to first learn a deep denoiser prior from training data and then substitute this denoiser in place of the proximal operator in the chosen optimization framework. Commonly used frameworks for this purpose include the alternating direction method of multipliers (ADMM) <cit.> and proximal gradient descent <cit.>. Another approach for exploiting deep priors is based on unrolling, which converts an iterative method that utilizes deep priors, such as PnP, into an end-to-end trainable network <cit.>. Although both learned PnP- and unrolling-based approaches yield state-of-the-art reconstruction quality, PnP methods have the advantage of adaptability to different imaging settings and significantly less training time.
Despite the recent success of PnP methods with deep priors, most of these approaches have been developed for 2D or real-valued image reconstruction problems <cit.>. Furthermore, there is no study on such methods for near-field radar imaging, where we encounter a 3D complex-valued image reconstruction problem. In this paper, we develop a novel and efficient PnP method for reconstructing the 3D complex-valued reflectivity distribution of the near-field scene from sparse MIMO measurements. Due to the random phase nature of the scene reflectivities in various applications, we formulate the image formation problem by exploiting regularization on the magnitude of the reflectivity function. We provide a general expression for the proximal mapping associated with such regularization functionals operating on the magnitude. By utilizing this expression, we develop a computationally efficient PnP reconstruction method that consists of simple update steps. To utilize within the developed PnP framework, we also train a 3D deep denoiser that can jointly exploit range and cross-range correlations. Our approach provides a unified PnP framework to effectively handle arbitrary regularization on the magnitude of a complex-valued unknown, which appears to be missing in the previous related radar imaging works <cit.>. The effectiveness of the developed learning-based PnP approach is illustrated in microwave imaging under various compressive and noisy observation scenarios using both simulated data and experimental measurements. We also compare the performance with the commonly used traditional methods (back-projection and Kirchhoff migration), and with the sparsity-based approaches involving ℓ_1 and TV regularization. The main contributions of this paper can be summarized as follows:
* Development of a novel deep learning-based plug-and-play reconstruction method for 3D complex-valued near-field MIMO radar imaging,
* Providing a unified PnP framework for radar imaging to efficiently handle arbitrary regularization on the magnitude of a complex-valued unknown involving random phase,
* Comprehensive experiments on synthetic 3D scenes with quantitative and qualitative analysis by considering different compression and noise levels in the observations,
* Performance evaluation with experimental measurements to reconstruct 3D real-world targets, and comparison with the commonly used direct inversion and regularized reconstruction methods.
Compared to the earlier works in near-field MIMO radar imaging, the developed technique not only provides state-of-the-art reconstruction performance for 3D real-world targets, but also enables fast computation. In particular, compared to the traditional direct inversion methods and sparsity-based approaches, the developed reconstruction technique achieves the best reconstruction quality in compressive settings with both simulated and experimental data. Some preliminary results of this research were submitted for presentation in <cit.>. Different from the related learning-based works in near-field MIMO radar imaging <cit.>, our approach is a deep prior-based PnP approach developed for imaging 3D extended targets. In particular, the works in <cit.> present deep learning-based non-iterative reconstruction methods for 2D imaging by refining an initial analytical reconstruction using DNNs. Other learning-based work in <cit.> develops an unrolling-based method; but unlike our approach, this method is not DNN-based (i.e., not deep prior-based) and only learns the hyperparameters (such as the soft threshold and regularization parameters) of the unrolled ℓ_1 regularization-based reconstruction algorithm.
PnP methods with data-driven deep priors have mostly been developed for 2D or real-valued image reconstruction problems <cit.>. To the best of our knowledge, our approach is the first deep prior-based PnP approach developed for near-field radar imaging, where we encounter a 3D complex-valued image reconstruction problem. A related PnP work in SAR imaging <cit.> utilizes 2D analytical (but not deep) denoising priors to reconstruct 3D extended targets. This approach also considers regularization on the complex-valued reflectivity. Differently, our approach exploits regularization on the magnitude of the reflectivity due to its random phase nature in various applications. There is also a related learned PnP approach with magnitude regularization which has been developed for 2D (far-field) SAR imaging <cit.>. However, this method requires an inefficient iterative computation to update the phase. In contrast, our approach does not have a phase update step, and all the other update steps are simple and efficient to compute thanks to the closed-form expression used for the proximal mapping. The presented closed-form expression for the proximal mapping associated with arbitrary regularization on the magnitude also provides a generalization of the proximal mappings associated with TV and ℓ_1 regularization on the magnitude <cit.>. Hence our PnP framework provides a generalizable and powerful means for effectively enforcing arbitrary regularization on the magnitude, and is equally applicable to other radar image formation problems (including SAR). The paper is organized as follows. In Section <ref> we describe the working principle of a near-field MIMO radar imaging system and introduce the observation model. In Section <ref> we formulate the inverse problem by enforcing regularization on the magnitude and then develop our plug-and-play approach. The architecture of the deep denoiser utilized for learned PnP reconstruction is also presented here. Section <ref> presents the imaging results for various compressive and noisy observation scenarios. The details of the simulated and experimental settings considered, and the training procedure, are also presented here. We conclude the paper with final remarks in Section <ref>.

§ OBSERVATION MODEL

In this section, we present the image formation model that relates the near-field MIMO array measurements to the reflectivity distribution of the scene. Consider the general MIMO imaging setting illustrated in Fig. <ref> with spatially distributed transmit and receive antennas on the antenna array located at z = 0. To infer the 3D reflectivity distribution of the scene, each transmit antenna, located at r_T = [x_T, y_T, 0]^T, illuminates the scene with a pulse signal, and the scattered field from the scene is measured by a receive antenna located at r_R = [x_R, y_R, 0]^T. Under the Born approximation, the time-domain response of a single point-scatterer with reflectivity s(r), located at r = [x, y, z]^T, can be expressed as follows <cit.>:
ỹ(r_T, r_R, t) = [p(t − d(r_T, r)/c − d(r_R, r)/c)/(4π d(r_T, r) d(r_R, r))] s(r).
Here ỹ(r_T, r_R, t) denotes the time-domain measurement acquired using the transmit and receive antenna pair located respectively at r_T and r_R, due to a single scatterer. The transmitted pulse is denoted by p(t), and c denotes the speed of light.
The distances of the scatterer to the corresponding transmitter and receiver are given by d(r_T, r) = ||r_T − r||_2 and d(r_R, r) = ||r_R − r||_2, respectively. By taking the 1D Fourier transform over t, the received signal due to a single scatterer can be expressed in the temporal frequency domain as follows:
ỹ(r_T, r_R, k) = h(r_T, r_R, k, r) s(r),
where
h(r_T, r_R, k, r) = p(k) e^-jk(d(r_T,r)+d(r_R,r))/(4π d(r_T, r) d(r_R, r)),
and k = 2πf/c denotes the frequency-wavenumber, with f the temporal frequency. Using (<ref>), the measurement y(r_T, r_R, k) due to an extended target can be expressed as the superposition of these responses from point-scatterers:
y(r_T, r_R, k) = ∫∫∫ h(r_T, r_R, k, r) s(r) dr.
Since the measurements are discrete, and the image reconstruction algorithm will be run on a computer, a discrete forward model is needed. For this, the coordinate variables are discretized based on the expected range and cross-range resolutions of the MIMO imaging system used <cit.>. The discretized scene reflectivity values can then be related to the discrete measurements obtained using different transmitter-receiver pairs and frequency steps as
y(r_T_m, r_R_m, k_m) = ∑_n h(r_T_m, r_R_m, k_m, r_n) s(r_n).
Here the subscript m indicates the locations of the transmitting and receiving antennas as well as the frequency used in the mth measurement. Moreover, the subscript n indicates the voxel number in the discretized 3D scene. Using lexicographic ordering, the measurements and the reflectivity values of the image voxels are put into the following vectors:
y = [y(r_T_1, r_R_1, k_1), …, y(r_T_M, r_R_M, k_M)]^T ∈ ℂ^M,
s = [s(r_1), …, s(r_N)]^T ∈ ℂ^N,
where M and N respectively represent the number of measurements and voxels. Then, using (<ref>), we can express the noisy measurements in matrix-vector form as follows:
y = As + w.
The matrix A ∈ ℂ^M×N is the observation matrix whose (m, n)th element is given by
A_m,n = h(r_T_m, r_R_m, k_m, r_n),
which represents the contribution of the nth voxel at location r_n to the mth measurement, taken using the transmitter at r_T_m, the receiver at r_R_m, and frequency ck_m/2π. Also, w ∈ ℂ^M represents the additive noise vector. We assume white Gaussian noise since this assumption commonly holds in the practical applications of interest. Hence each noise component is uncorrelated over different measurements and has variance σ_w^2.

§ PLUG-AND-PLAY RECONSTRUCTION APPROACH

In this section, we first formulate the inverse problem by enforcing regularization on the magnitude and then develop our plug-and-play approach using the ADMM framework. The architecture of the 3D deep denoiser utilized for learned PnP reconstruction is also presented here.

§.§ Inverse Problem

In the inverse problem, the goal is to estimate the 3D complex-valued reflectivity field, s, from the acquired radar measurements, y. This corresponds to solving an under-determined problem with sparse measurements (M ≪ N). As a result, the reconstruction quality greatly depends on the utilization of priors. A systematic approach to regularization is to incorporate prior knowledge about the unknown solution in a deterministic or stochastic setting; this leads to a minimization problem with a regularization functional penalizing solutions that do not comply with the assumed prior information <cit.>. Due to the random phase nature of the unknown scene reflectivities, we formulate the inverse problem using a regularization functional, ℛ(|·|), that operates only on the magnitude:
min_s ℛ(|s|) subject to ||y − As||_2 ≤ ϵ
where ϵ is a parameter that should be chosen based on the noise variance (i.e., √(M·σ_w^2)), and |s| denotes the magnitude of the reflectivity vector s.
§.§ Variable Splitting and ADMM

To solve this regularized inverse problem, we first convert the constrained problem in (<ref>) to an unconstrained one using the penalty function ι_{||y−v_1||_2≤ϵ}(·), and then apply variable splitting as follows:
min_{s, v_1, v_2} ι_{||y−v_1||_2≤ϵ}(v_1) + ℛ(|v_2|) subject to As − v_1 = 0, s − v_2 = 0
Here the indicator function ι_{||y−v_1||_2≤ϵ}(v_1) takes the value 0 if the constraint in (<ref>) is satisfied and +∞ otherwise, whereas v_1, v_2 are the auxiliary variables. We solve the optimization problem in (<ref>) with the C-SALSA approach <cit.>. In the corresponding ADMM framework <cit.>, we first obtain the associated augmented Lagrangian:
ℒ_{ρ_1,ρ_2}(s, v_1, v_2, d_1, d_2) = ι_{||y−v_1||_2≤ϵ}(v_1) + (ρ_1/2)||As − v_1 − d_1||_2^2 − (ρ_1/2)||d_1||_2^2 + ℛ(|v_2|) + (ρ_2/2)||s − v_2 − d_2||_2^2 − (ρ_2/2)||d_2||_2^2
Here d_1, d_2 denote the dual variables for v_1 and v_2, and ρ_1, ρ_2 ∈ ℝ^+ are the penalty parameters for the auxiliary variables v_1 and v_2. We then alternately minimize this augmented Lagrangian over s, v_1, and v_2 to obtain the update steps for these variables. First, the minimization over s corresponds to solving a least-squares problem with the following normal equation:
(A^H A + κI) s^{l+1} = A^H(v_1^l + d_1^l) + κ(v_2^l + d_2^l)
where the superscript l is the iteration count and κ ≜ ρ_2/ρ_1 is a hyper-parameter that needs to be adjusted. Since solving this normal equation by direct matrix inversion is impractical due to the large problem size, we instead use a few conjugate-gradient (CG) iterations to update the scene reflectivity s. Second, the minimization over v_1 corresponds to the proximal operator of the penalty function ι_{||y−·||_2≤ϵ}(·), which is computed as the projection of As^{l+1} − d_1^l onto the ϵ-radius hypersphere centered at y:
v_1^{l+1} = y + ϵ(As^{l+1} − d_1^l − y)/||As^{l+1} − d_1^l − y||_2, if ||As^{l+1} − d_1^l − y||_2 > ϵ
v_1^{l+1} = As^{l+1} − d_1^l, if ||As^{l+1} − d_1^l − y||_2 ≤ ϵ
Lastly, the minimization over v_2 corresponds to the proximal operator of the regularization function ℛ(|·|) that operates on the magnitude of the complex-valued vector v_2:
v_2^{l+1} = Ψ_{αℛ(|·|)}(s^{l+1} − d_2^l)
where Ψ_{αℛ(|·|)} is the respective proximal operator, given by
Ψ_{αℛ(|·|)}(v) ≜ argmin_u αℛ(|u|) + (1/2)||u − v||_2^2
for a complex-valued vector v, with α ≜ 1/ρ_2 determining the amount of regularization. This update step corresponds to solving a denoising problem for a complex-valued unknown u, with regularization enforced on its magnitude and the noisy observation given as v. To develop a computationally efficient PnP reconstruction method that consists of simple update steps, we provide a general expression for the solution of this denoising problem (equivalently, for the proximal operator in (<ref>)). This will enable us to effectively handle arbitrary regularization on the magnitude, which appears to be missing in previous radar imaging works.

§.§ Denoising with Regularization on Magnitude

In this section, we provide a general expression for the solution of the complex-valued denoising problem in (<ref>), which involves regularization on the magnitude. For this, we first express each complex-valued vector as the product of a diagonal phase matrix and a magnitude vector:
u = P_u|u|, v = P_v|v|,
where P_u = diag(e^{j∠u}) and P_v = diag(e^{j∠v}) are complex-valued unitary matrices that contain the phases of the vectors u and v on their diagonals, while |u| and |v| are real-valued, non-negative vectors containing the respective magnitudes.
By using these expressions, the optimization problem in (<ref>) can be viewed as a joint minimization over the magnitude and phase of u:
min_{|u|, ∠u} αℛ(|u|) + (1/2)||P_u|u| − P_v|v|||_2^2
This joint minimization problem is equivalent to
min_{|u|} ( αℛ(|u|) + min_{∠u} (1/2)||P_u|u| − P_v|v|||_2^2 )
Hence, to solve this complex-valued denoising problem, our strategy is to first solve the minimization over the phase, ∠u, in closed form, and then, by substituting the optimal phase solution ∠û into the above cost function, to solve the remaining minimization over the magnitude, |u|. For the minimization over the phase, we have
∠û = argmin_{∠u} (1/2)||P_u|u| − P_v|v|||_2^2 = argmin_{∠u} (1/2)||P_v^H P_u|u| − |v|||_2^2
where the last expression follows from the unitary property of the phase matrices, i.e., P_v^H = P_v^{-1}. After expanding the ℓ_2 norm, simplifying using the unitary property of the phase matrices, and omitting the terms that do not depend on the phase ∠u, we obtain
∠û = argmax_{∠u} (1/2)(|u|^T P_u P_v^* |v| + |u|^T P_u^* P_v |v|)
Here we also use the fact that |u| and |v| are real-valued, hence their Hermitian transposes equal their transposes, and, since the phase matrices are diagonal, their Hermitian transposes equal their conjugates. Using the diagonality of the phase matrices, this further simplifies to
∠û = argmax_{∠u} |u|^T ℜ𝔢{P_v^* P_u} |v|.
Hence, to find the optimal phase, we need to maximize ∑_{n=1}^N |u_n||v_n| cos(∠u_n − ∠v_n) over all elements ∠u_n of the vector ∠u. Since each term in this summation contains only one element of ∠u, the maximization decouples over the elements, yielding ∠v_n as the optimal value of ∠u_n. This shows that the optimal phase, ∠û, for the denoising problem in (<ref>) is equal to the phase of the given noisy observation v:
∠û = ∠v.
That is, the proximal mapping of a function that operates on the magnitude of a complex-valued vector must directly pass the phase values of the proximal point. Having solved the minimization over the phase in closed form, we now substitute the optimal phase solution ∠û into the cost function in (<ref>) and consider the remaining minimization over the magnitude, |u|:
|û| = argmin_{|u|} αℛ(|u|) + (1/2)|||u| − |v|||_2^2
where we use the unitary property of the phase matrix P_v as before. Note that this expression is equivalent to the Moreau proximal mapping, Ψ_{αℛ(·)}, associated with the regularization function ℛ(·), applied to the magnitude |v|. Hence the optimal magnitude |û| for the denoising problem in (<ref>) corresponds to denoising the magnitude of the noisy observation with noise variance α:
|û| = Ψ_{αℛ(·)}(|v|).
For the scalar-valued case, a similar derivation is encountered in <cit.>. Therefore, the solution of the complex-valued denoising problem in (<ref>) with magnitude regularization can be computed as
Ψ_{αℛ(|·|)}(v) = e^{j∠v} ⊙ Ψ_{αℛ(·)}(|v|),
where ⊙ denotes element-wise multiplication. This corresponds to denoising the magnitude of v using the proximal (denoising) operator Ψ_{αℛ(·)} and merging the denoised magnitude with the unprocessed phase of v. Since (<ref>) decouples the magnitude and phase solutions, it enables us to use real-valued denoisers (proximal operators) Ψ_{αℛ(·)} for the solution of the complex-valued denoising problem in (<ref>).

§.§ Developed PnP Reconstruction Method

Based on the developments in the previous sections, the steps of the proposed PnP method are listed in Algorithm <ref>. Each iteration of the reconstruction algorithm mainly consists of four computationally efficient update steps.
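Before walking through these steps, the magnitude-proximal mapping just derived can be sketched in a few lines of Python; the soft-thresholding stand-in below is only an example, not the trained denoiser.

import numpy as np

def prox_magnitude(v, alpha, denoise_mag):
    """Evaluate Psi_{alpha R(|.|)}(v) = exp(j*angle(v)) * Psi_{alpha R}(|v|):
    denoise the magnitude and reattach the unprocessed phase."""
    return np.exp(1j * np.angle(v)) * denoise_mag(np.abs(v), alpha)

# Example magnitude denoiser: soft-thresholding (l1 regularization).
soft = lambda m, alpha: np.maximum(m - alpha, 0.0)
v = np.random.randn(8) + 1j * np.random.randn(8)
print(prox_magnitude(v, 0.5, soft))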
The first step is the update of the image s, given in line <ref> and carried out using a few CG iterations. The second step is the update of the auxiliary variable v_1 by computing the projection given in line <ref>, which is efficiently computed using scaling operations. The third step is the complex-valued denoising step given in line <ref> to update the auxiliary variable v_2. As shown, this denoising is equivalent to directly passing the phase while denoising the magnitude of s^{l+1} − d_2^l using the proximal operator Ψ_{αℛ(·)}. To exploit data-driven deep priors, we use a trained denoiser as the proximal operator, as explained in the next section. The last steps are the dual updates given in lines <ref> and <ref>. Note that our development is implicit about the choice of the regularizer ℛ(|·|) and the related proximal operator Ψ_{αℛ(·)}. Therefore, we can efficiently adopt the plug-and-play framework, which enables the utilization of powerful priors, such as deep denoisers, in place of the proximal operator, without explicitly specifying the regularizer. Moreover, our PnP approach provides a generalizable and powerful means for efficiently handling arbitrary regularization on the magnitude of a complex-valued unknown. Our approach is applicable with any forward model matrix A, and hence can be used for other complex-valued image formation problems, including SAR reconstruction.

§.§ 3D Deep Denoiser for Learned PnP Reconstruction

Following the success of convolutional neural networks (CNNs) in denoising <cit.>, we train and deploy a deep CNN-based denoiser for the third step of our PnP approach. Our denoiser is a 3D U-net developed based on the 2D U-net architecture in <cit.> and is shown in Fig. <ref>. To be able to effectively handle a wide range of noise levels, our denoiser is designed for non-blind Gaussian denoising, similar to <cit.>, and hence also takes the noise level as input. This non-blind denoiser replaces the proximal operator Ψ_{αℛ(·)} in line <ref> of Algorithm <ref>, which is used to denoise the input magnitudes. The proposed denoiser is a 3-level encoder-decoder architecture with repeated 3D convolutional blocks (C), each followed by batch normalization (B) and ReLU (R). Due to the 3D processing, the denoiser can jointly exploit range and cross-range correlations. At each level, max pooling (Max. Pool.) is used to reduce the spatial size of the input tensor by a factor of 2 in each dimension, and transposed convolution blocks (T.Conv.) are used to increase it by 2. At each decoding level, the output of the transposed convolution block is concatenated with the corresponding encoder outputs. The concatenated outputs are then fed to the respective decoding blocks. A single-channel 3D convolution block follows the last decoding block. The number of output channels of all convolutional blocks is indicated in parentheses in Fig. <ref>. The input of our U-net is the 3D reflectivity magnitude to be denoised together with the 3D noise level map. The noise level map enables adjusting the amount of denoising in our non-blind denoiser network, and its values are set to the constant √α in (<ref>). The output of the U-net is the 3D denoised reflectivity magnitude.

§ EXPERIMENTS AND RESULTS

We now demonstrate the effectiveness of the developed learning-based PnP approach under various compressive and noisy observation scenarios in microwave imaging. For this, we first train the implemented denoiser using a synthetically generated large dataset consisting of 3D extended targets.
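Before turning to the experiments, the overall iteration can be summarized in a schematic Python sketch; the forward operator A (with adjoint AH), the CG solver, and the magnitude denoiser are abstracted as assumed callables, so this illustrates Algorithm <ref> rather than reproducing the exact implementation.

import numpy as np

def pnp_admm(y, A, AH, cg_solve, denoiser, eps, kappa, alpha, n_iter=30):
    # Initialization s^0 = A^H y / max(|A^H y|)
    s = AH(y); s = s / np.abs(s).max()
    v1 = A(s); v2 = s.copy()
    d1 = np.zeros_like(v1); d2 = np.zeros_like(v2)
    for _ in range(n_iter):
        # s-update: a few CG steps on (A^H A + kappa I) s = A^H(v1+d1) + kappa(v2+d2);
        # cg_solve is an assumed callable solving this system from warm start x0.
        s = cg_solve(AH(v1 + d1) + kappa * (v2 + d2), kappa, x0=s)
        # v1-update: projection of A s - d1 onto the eps-ball centered at y
        p = A(s) - d1
        r = p - y; nrm = np.linalg.norm(r)
        v1 = y + eps * r / nrm if nrm > eps else p
        # v2-update: complex-valued denoising, phase passed through unchanged
        u = s - d2
        v2 = np.exp(1j * np.angle(u)) * denoiser(np.abs(u), np.sqrt(alpha))
        # dual updates
        d1 += v1 - A(s)
        d2 += v2 - s
    return s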
We then perform comprehensive experiments on synthetic 3D scenes and comparatively evaluate the performance against the widely used back-projection (BP) and Kirchhoff migration (KM) algorithms, as well as against sparsity-based regularization in the form of isotropic total-variation (TV) and ℓ_1. Lastly, we illustrate the performance with experimental measurements to demonstrate the successful reconstruction of 3D real-world targets.

§.§ Training of the 3D Deep Denoiser

Because a large experimental dataset is not available for microwave imaging, we use a synthetic dataset <cit.> to train our denoiser network. The utilized synthetic dataset consists of randomly generated complex-valued image cubes of size 25 × 25 × 49. For training, validation, and testing, we respectively use 800, 100, and 100 of these image cubes. Each synthetic image cube is obtained by randomly generating 15 points within the cube and then applying a 3D Gaussian filter to convert these points into a volumetric object. The magnitudes are normalized to 1 (via a sigmoid function), while a random phase drawn from a uniform distribution between 0 and 2π is added to each image voxel. The denoiser network replaces the proximal operator Ψ_{αℛ(·)} in line <ref> of Algorithm <ref>, with the goal of denoising the reflectivity magnitudes. We accordingly train our deep denoiser by minimizing the mean squared error between the 3D ground truth magnitudes and the Gaussian-noise-corrupted magnitudes over the 800 training scenes. At each iteration of training, a new Gaussian noise realization is added to each ground truth magnitude, with the noise standard deviation σ_ν chosen randomly and uniformly from the interval [0, 0.2]. In addition, the constant noise level map is formed using this value of the noise standard deviation, i.e., √α = σ_ν, and is concatenated to the 3D noisy magnitude. As a result, the network learns to denoise the reflectivity magnitudes in a non-blind manner. For training, we use a batch size of 16 with the maximum number of epochs set to 2000. We utilize the Adam optimizer <cit.> with an initial learning rate of 10^-3, and drop the learning rate by a factor of 10 if the validation loss does not improve for 25 epochs. We stop the training when the validation loss does not improve for 50 epochs. At the end of training, we use the network weights that provide the minimum validation loss. Training takes approximately 15 minutes on an NVIDIA GeForce RTX 3080 Ti GPU using PyTorch 1.12.0 with CUDA Toolkit 11.6.0 in Python 3.10.6. We use our learning-based PnP approach with this trained denoiser in the subsequent sections to analyze its performance on simulated and experimental data.

§.§ Performance Analysis with Simulated Data

We first analyze the performance of the developed imaging technique at various noise and compression levels using the synthetic scenes in the test dataset. For this, we consider a microwave imaging setting similar to Fig. <ref>. The scene of interest has physical dimensions of 30 cm × 30 cm × 30 cm, and its center is located 50 cm away from the antenna array. As the MIMO array topology, the commonly used Mill's Cross array <cit.> is utilized. The planar array has a width of 0.3 m and contains 12 transmit and 13 receive antennas, uniformly spaced on the diagonals in a cross configuration as shown in Fig. <ref>. The frequency, f, is swept between 4 GHz and 16 GHz with uniform steps.
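For concreteness, a toy Python sketch of assembling the observation matrix of Section <ref> for such a stepped-frequency MIMO setup is given below; an idealized flat pulse spectrum p(k) = 1 and toy antenna/voxel grids are assumed, not the actual Mill's Cross geometry.

import numpy as np

c0 = 3e8
tx = np.array([[0.0, -0.1, 0.0], [0.0, 0.1, 0.0]])       # toy transmitter positions (m)
rx = np.array([[-0.1, 0.0, 0.0], [0.1, 0.0, 0.0]])       # toy receiver positions (m)
freqs = np.linspace(4e9, 16e9, 20)                        # stepped-frequency sweep (Hz)
vox = np.stack(np.meshgrid(np.linspace(-0.15, 0.15, 5),
                           np.linspace(-0.15, 0.15, 5),
                           np.linspace(0.35, 0.65, 9),
                           indexing="ij"), -1).reshape(-1, 3)

rows = []
for t in tx:
    for r in rx:
        for f in freqs:
            k = 2 * np.pi * f / c0
            dT = np.linalg.norm(vox - t, axis=1)          # transmitter-voxel distances
            dR = np.linalg.norm(vox - r, axis=1)          # receiver-voxel distances
            rows.append(np.exp(-1j * k * (dT + dR)) / (4 * np.pi * dT * dR))
A = np.array(rows)                                        # observation matrix, shape (M, N)
print(A.shape)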
For the non-sparse measurement case with these aspects, the expected theoretical resolution <cit.> is 2.5 cm in the cross-range directions, x and y, and 1.25 cm in the down-range direction, z. With the goal of achieving these resolutions in the sparse case, we choose the image voxel size as 1.25 cm along the x, y directions, and 0.625 cm along the z direction (i.e. half of these resolutions). For the scene of interest, this results in an image cube of 25 × 25 × 49 voxels, which is the same as the size of the generated synthetic scenes. Using these synthetic image cubes with the forward model in (<ref>), we simulate measurements at various signal-to-noise ratios (SNR = 10log_10(||Ax||_2^2/(M·σ_w^2))) and compression levels (CL = 1 - M/N) for our analysis. Before discussing the results, we provide the implementation details of the developed learning-based PnP approach, as well as the approaches used for comparison. For all regularization-based approaches, we enforce regularization on the reflectivity magnitudes and utilize the developed PnP approach in Algorithm <ref> with different denoising (proximal update) steps. In particular, to evaluate the proximal operator, Ψ_αℛ(·), we utilize soft-thresholding in the case of ℓ_1 regularization and 5 iterations of the Chambolle algorithm <cit.> in the case of TV regularization. The regularization parameter α in (<ref>) is chosen by searching for the optimal value between 10^-5 and 10^-1 in a coarse-to-fine fashion. We initialize each iterative algorithm with x^0 = A^H y/max(|A^H y|), and in each x-update step, the conjugate gradient algorithm is run for 5 iterations. TV and ℓ_1-based approaches converge to a solution for a sufficiently large κ in (<ref>). Accordingly, we choose κ=5·10^4 and run the iterations until the stopping criterion is satisfied, which is when the relative change ||x^l+1-x^l||_2/||x^l||_2 drops below 5·10^-4 (a short code sketch of the measurement simulation, the ℓ_1 proximal update, and this stopping criterion is given below). Because the convergence of learned PnP is an ongoing area of research and is not always guaranteed <cit.>, we limit the maximum number of iterations in the developed learning-based approach to 30. For the choice of κ, we search for the optimal value using the validation dataset and set it as κ=5·10^2. To comparatively evaluate the performance of the developed approach, we first consider the case with a medium SNR of 30 dB and a high compression level of 90%. This corresponds to using 20 frequency steps between 4 and 16 GHz and is equivalent to reconstructing the reflectivity cube with only 10% of the data. For a sample test image, the reconstructions obtained with the different approaches are illustrated in Fig. <ref> using the same colormap. We also calculate the 3D peak signal-to-noise ratio (PSNR) between the ground truth magnitudes and the normalized reconstructed magnitudes to quantitatively evaluate the performance, and provide these underneath the reconstructions. Although all algorithms reconstruct a complex-valued reflectivity distribution, the reconstructed phase is not used in this evaluation since it is random and does not contain any useful information. As seen in Fig. <ref>, the developed learning-based approach provides the best image quality with a reconstruction closely resembling the ground truth and achieving a PSNR of 30.12 dB. On the other hand, the TV reconstruction suffers from over-smoothing, whereas the ℓ_1-based reconstruction contains speckle-like artifacts and an artifact cluster at the top. The visual quality of the KM and BP reconstructions is even worse with many more reconstruction artifacts due to the noisy and compressed data, where KM performs slightly better than BP.
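The following is a small Python sketch of these quantities under our reading of the definitions above; the complex circular noise model and the use of the noiseless measurement energy in the SNR numerator are our assumptions.

```python
import numpy as np

def simulate_measurements(A, x, snr_db, rng=np.random.default_rng()):
    # y = A x + w, with sigma_w set from SNR = 10 log10(||A x||^2 / (M sigma_w^2)).
    y0 = A @ x
    M = y0.size
    sigma = np.sqrt(np.sum(np.abs(y0) ** 2) / (M * 10 ** (snr_db / 10)))
    w = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    return y0 + w

def compression_level(M, N):
    return 1.0 - M / N                           # CL = 1 - M/N

def soft_threshold(mag, t):
    # Proximal operator of the l1 penalty on the (nonnegative) magnitudes.
    return np.maximum(mag - t, 0.0)

def stop(x_new, x_old, tol=5e-4):
    # Relative-change criterion ||x^{l+1} - x^l||_2 / ||x^l||_2 < tol.
    return np.linalg.norm(x_new - x_old) / np.linalg.norm(x_old) < tol

def psnr_3d(gt_mag, rec_mag):
    # PSNR between ground truth (assumed normalized to peak 1) and the
    # max-normalized reconstructed magnitudes.
    rec = rec_mag / rec_mag.max()
    return 10 * np.log10(1.0 / np.mean((gt_mag - rec) ** 2))
```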
To compare the reconstruction speed, the average run-time of each method is computed over the 100 test scenes, as given in Table <ref>. As seen, the developed approach is capable of providing the best reconstruction quality with an average run-time of a few seconds and is the fastest method after the direct inversion-based approaches (which largely fail). Moreover, the TV and ℓ_1 regularized solutions take much longer to compute. §.§.§ Compression Level Analysis We now analyze the effect of the compression level on the performance for the 30 dB SNR case. We consider compression levels of 95%, 92.5%, 90%, 85% and 80%, which respectively correspond to using 10, 15, 20, 30, and 40 frequency steps between 4 and 16 GHz, and are equivalent to reconstructing the reflectivity cube with 5%, 7.5%, 10%, 15% and 20% of the available data. For each case, the average PSNR is computed over the 100 reconstructed test scenes and is given in Table <ref>. As seen from the table, the developed learning-based approach significantly outperforms the other approaches at all compression levels. In particular, the average PSNR exceeds 30 dB when we perform a reconstruction with 10% or more data. It is also interesting to observe that the performance of the developed method at the highest compression level (i.e. 5% data), with 27.27 dB PSNR, is even better than the performance of all compared methods at the lowest compression level (i.e. 20% data). As expected, all regularization-based approaches outperform the direct inversion methods (BP and KM), especially at highly compressive settings. Moreover, data-adaptive deep priors enable superior performance compared to the hand-crafted analytical priors, TV and ℓ_1. Of these analytical priors, ℓ_1 starts to yield better performance than TV at compression levels higher than 90% (i.e. with less than 10% data availability). Of the direct inversion-based methods, KM consistently performs better than BP and approaches the performance of ℓ_1 regularization at the increased data availability rates. Because of this, from this point forward, we omit BP from the visual comparisons and only present the results of KM. In general, the performance of each method starts to increase slowly with data availability rates beyond 15%. This suggests that the sparse MIMO array topology becomes the bottleneck on the measurement diversity when the number of frequency steps exceeds 30. For visual comparison, sample reconstructions obtained with 5%, 10%, and 20% data are also given in Fig. <ref>. As seen, for all approaches, the reconstruction quality improves with the increasing amount of data. Moreover, we can observe that KM is the method most severely affected by the amount of available data, and at high compression levels its reconstruction suffers from large grating lobes. At the highest compression level (corresponding to 5% data), the TV reconstruction also contains large artifacts in addition to the over-smoothing effect. On the other hand, although the ℓ_1-based method suffers from speckle-like artifacts, its performance does not change much with the amount of available data. Nevertheless, the proposed learning-based PnP method yields the best performance with artifact-free reconstruction at all compression levels. §.§.§ Noise Level Analysis We now fix the available data to 10% and analyze the effect of SNR on the quality of reconstructions. For this, we gradually drop the SNR from 30 dB to 0 dB in steps of 10 dB. The average PSNR of each method at the different SNRs is given in Table <ref>.
As seen, the developed learning-based approach also outperforms the other methods at all noise levels. In particular, the performance of the developed method even in the lowest SNR case (i.e. 0 dB), with 28.31 dB PSNR, is better than the performance of all compared methods in the highest SNR case (i.e. 30 dB). Similar to the results of the compression level analysis, all regularization-based approaches outperform the direct inversion methods, and in the most ill-posed case with 0 dB SNR, the ℓ_1 prior yields better reconstruction than TV. In Fig. <ref>, sample reconstructions for the 0 dB SNR case are given. Compared to the reconstructions given in Fig. <ref> for the 30 dB SNR case, the KM result is severely degraded at this low SNR due to high noise amplification. On the other hand, the ℓ_1 and TV-based reconstructions still show some fidelity to the original image, but with more artifacts. More importantly, even for this highly noisy and compressive observation setting, the proposed learning-based PnP method is capable of providing a clean reconstruction that maintains high fidelity to the ground truth. §.§ Performance Analysis with Experimental Data We now demonstrate the performance of the developed approach on real-world scenes using experimental measurements available online <cit.>. These experimental measurements were acquired for a scene that contains a toy revolver approximately 50 cm away from a sparse MIMO array <cit.>. The used MIMO array has 16 transmit and 9 receive Vivaldi antennas that are distributed in a spiral configuration on the antenna plane as shown in Fig. <ref>. The experimental measurements were recorded at 251 uniformly sampled frequencies from 1 to 26 GHz. We aim to infer the reflectivity distribution within a 30 cm × 30 cm × 30 cm image cube that contains the revolver. Similar to <cit.>, we choose the sampling interval as 0.5 cm along all three dimensions. This results in an unknown image cube of 61 × 61 × 61 voxels. Since our focus is on compressive imaging, we consider sparse frequency measurements from the band of 4–16 GHz (similar to the simulated setting). In particular, from the available data, we use 7 and 11 uniformly sampled frequencies between 4 and 16 GHz, which respectively correspond to compression ratios of 99.56% and 99.31%. These are equivalent to reconstructing the reflectivity cube with only 0.44% and 0.69% of the data, yielding extremely compressive settings. To reconstruct this real scene using the developed approach with the deep prior, as well as with the TV and ℓ_1 priors, we use the same κ parameters determined in the previous simulated setting. For the choice of the regularization parameter α, we again perform a search for the optimal value to obtain the best reconstruction quality. Moreover, the parameter ϵ in (<ref>) is empirically set to ||y||_2/√(10), which approximately corresponds to a measurement at 10 dB SNR. Additionally, since the maximum value of the reflectivity magnitudes in the real scene can be different from that of the synthetic scenes used in training, the reflectivity magnitude at each iteration is scaled by its maximum value prior to entering the denoiser (in order to fall into the range [0,1]), and the denoised magnitude at the output of the denoiser is scaled back (a minimal sketch of this wrapper is given below). A photograph of the imaged toy revolver and the reconstructions obtained for the two different compressive settings with 0.44% and 0.69% data are shown in Fig. <ref>.
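A minimal sketch of the magnitude rescaling around the denoiser and the ϵ setting just described (the function names are ours):

```python
import numpy as np

def scaled_denoise(denoise, mag, noise_level):
    # Scale the magnitudes into [0, 1] (the range seen during training),
    # denoise, then scale back so the PnP iterations keep their dynamic range.
    peak = mag.max() + 1e-12
    return peak * denoise(mag / peak, noise_level)

def fidelity_radius(y):
    # eps = ||y||_2 / sqrt(10), approximately a measurement at 10 dB SNR.
    return np.linalg.norm(y) / np.sqrt(10)
```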
Note that the photograph provides a visual reference for comparisons, but it does not represent the ground truth reflectivity magnitudes. As an additional reference for comparisons, we also obtain the KM reconstruction of the scene using the full frequency data available (i.e. 251 frequency steps in the band 1–26 GHz), which corresponds to a highly over-determined setting with M/N=361.44% data availability. This full-data KM reconstruction is given in Fig. <ref> to reveal the general shape of the scene reflectivity. But despite using all of the available data, it still contains widespread artifacts, especially over the cross-range dimensions. This is the expected behavior of direct inversion methods with sparse arrays due to the resulting aliasing <cit.>. When we compare the reconstructions in Fig. <ref> for the highly compressive settings considered, it is seen that the developed approach with the deep prior provides the best results with the least amount of artifacts. In particular, the KM reconstructions suffer from significant grating lobes and aliasing along the range direction (which appears in the form of replication) resulting from the sparsely sampled frequencies. Although not as prominent, similar replication artifacts along the range direction are also present in the results of the hand-crafted regularization approaches. Most notably, the TV reconstructions fail to resolve the aliasing and contain replicated silhouettes of the revolver. While the TV reconstructions perform visually better than KM, they perform poorly compared to ℓ_1 regularization at these highly compressive settings (similar to the observations in the earlier analysis). In the ℓ_1 regularized reconstructions, there are fewer artifacts along the cross-range directions compared to TV, but the revolver appears eroded, and there are distributed speckle artifacts, which are more common along the range direction (aligned with the locations of the aliasing artifacts in the KM- and TV-based solutions). On the other hand, the proposed PnP approach with the deep prior is capable of providing a near-perfect reconstruction with only 0.69% data. A few aliasing artifacts occur along the range direction in the more compressed setting with 0.44% data. Nevertheless, in both cases, the edges of the object are sharply reconstructed, and the frame, cylinder, trigger guard, and muzzle of the revolver are all clearly visible. Hence the proposed approach is much less prone to sparse sampling and aliasing, thanks to the power of learned deep priors. The proposed method not only provides the highest reconstruction quality but also takes only 6 seconds (for the case with 0.44% data). Hence it is again the second fastest method after KM, which performs poorly. On the other hand, the TV and ℓ_1 regularized solutions suffer from significantly longer computation times of approximately 150 seconds. Overall, these real-scene experiments demonstrate that the utilization of deep priors in a plug-and-play algorithm enables state-of-the-art reconstruction quality even in highly compressive experimental settings, while also yielding significantly reduced run-time compared to hand-crafted analytical priors. Note that the learned prior is also capable of representing unseen real-world objects, although the training was performed with synthetic and randomly generated, much simpler extended targets. Moreover, even though this experimental observation setting (including antenna array type, number of measurements taken, etc.)
differs from the previously analyzed simulated setting, our learning-based method can be directly used without re-training since it is based on the PnP framework (and not unrolling). Hence the proposed learning-based PnP method is highly adaptable to experimental data and different observation settings. § CONCLUSION We have developed a novel and efficient plug-and-play approach that enables the reconstruction of 3D complex-valued images involving random phase by exploiting both analytic and deep priors. Our approach provides a unified general framework to effectively handle arbitrary regularization on the magnitude of a complex-valued unknown and is applicable to various complex-valued image formation problems including SAR and MIMO radar imaging with far- or near-field settings. Our development is based on a general closed-form expression provided for the solution of a complex-valued denoising problem with regularization on the magnitude. By utilizing this expression in an ADMM framework, a computationally efficient PnP reconstruction method that consists of simple update steps is obtained. In this paper, we applied the developed PnP method to near-field compressive MIMO imaging for the reconstruction of 3D complex-valued scene reflectivities with random phase nature. Within our PnP framework, we utilized a 3D deep denoiser to take advantage of data-adaptive deep priors. To the best of our knowledge, our approach is the first deep prior-based PnP approach demonstrated for near-field radar imaging. The effectiveness of our approach is illustrated under various compressive and noisy observation scenarios in microwave imaging using both simulated data and experimental measurements. The results show that the developed PnP approach with the learned deep prior achieves state-of-the-art reconstruction quality with a generalizability capability for unseen real-world objects and high adaptability to experimental data. Compared to approaches with hand-crafted analytical priors, it is also more robust to sparse sampling, and enables significantly reduced run-time. The approach also has the advantage of direct applicability to different observation settings without re-training, due to its PnP nature. Lastly, we note that although the developed PnP method is quite fast with a runtime on the order of seconds, it can be further accelerated by more efficiently computing the forward and adjoint operators, using methods like the fast multipole method (FMM) <cit.>. Moreover, exploring the performance of the developed method with different 3D network architectures for the denoiser may improve the reconstruction quality and is a topic for future study. Likewise, utilizing a training dataset synthesized for a specific imaging task, such as a dataset consisting of 3D models of concealed weapons, can allow the deep architecture to better learn the task-oriented prior information and can improve the performance. § ACKNOWLEDGMENTS The authors would like to thank professors Sencer Koc and Lale Alatan at METU for many fruitful discussions about near-field microwave imaging. This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant 120E505. Okyanus Oral received the B.Sc. degree from the Department of Electrical and Electronics Engineering, Middle East Technical University (METU), Ankara, Turkey, in 2021. He has been pursuing his M.Sc. degree in the same department since 2021 and has been a Teaching/Research Assistant since 2022.
His research interests include inverse problems, computational imaging, optimization, and deep learning. Figen S. Oktem (M'08) received the B.S. and M.S. degrees in electrical and electronics engineering from Bilkent University, Ankara, Turkey, in 2007 and 2009, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign (UIUC), Champaign, IL, USA, in 2014. Since 2015, she has been an Assistant Professor with the Electrical and Electronics Engineering Department, METU, Ankara, Turkey. Before joining METU, she was a Postdoctoral Research Associate with the NASA Goddard Space Flight Center, where she worked on the development of high-resolution spectral imaging techniques. Her research spans the areas of statistical signal processing, optical information processing, computational imaging, and inverse problems. Her main research interest is the development of novel computational imaging and sensing technologies, and their applications in physical and life sciences. Dr. Oktem was a recipient of the Professor Kung Chie Yeh Endowed Fellowship from the Department of Electrical and Computer Engineering, UIUC, in 2012, and the NASA Earth and Space Science Fellowship from 2012 to 2014. She is a member of the IEEE, Optica, and EURASIP.
http://arxiv.org/abs/2312.16024v1
{ "authors": [ "Okyanus Oral", "Figen S. Oktem" ], "categories": [ "eess.IV", "cs.LG", "eess.SP" ], "primary_category": "eess.IV", "published": "20231226122509", "title": "Plug-and-Play Regularization on Magnitude with Deep Priors for 3D Near-Field MIMO Imaging" }
[1]Cambridge Graphene Centre, University of Cambridge, Cambridge, CB3 0FA, UK [2]Department of Materials Science & Metallurgy, University of Cambridge, Cambridge, CB3 0FS, UK [3]School of Micro-Nano Electronics, ZJU-Hangzhou Global Scientific and Technological Innovation Center, State Key Laboratory of Silicon and Advanced Semiconductor Materials, ZJU-UIUC Joint Institute, Zhejiang University, Hangzhou 310027, China Inkjet-Printed High-Yield, Reconfigurable, and Recyclable Memristors on Paper Jinrui Chen†,1, Mingfei Xiao†,1, Zesheng Chen1, Sibghah Khan1, Saptarsi Ghosh2, Nasiruddin Macadam1, Zhuo Chen1, Binghan Zhou1, Guolin Yun1, Kasia Wilk2, Feng Tian1,3, Simon Fairclough2, Yang Xu3, Rachel Oliver2, Tawfique Hasan*, ============================================================================================================================================================================================================================================ Reconfigurable memristors featuring neural and synaptic functions hold great potential for neuromorphic circuits by simplifying system architecture, cutting power consumption, and boosting computational efficiency. Their additive manufacturing on sustainable substrates offers unique advantages for future electronics, including low environmental impact. Here, exploiting the structure-property relationship of a MoS_2 nanoflake-based resistive layer, we present paper-based, inkjet-printed, reconfigurable memristors. With 90% yield from a 16×65 device array, our memristors demonstrate robust resistive switching, with a 10^5 ON-OFF ratio and 0.5 V operation in the non-volatile state. Through modulation of the compliance current, the devices transition into a volatile state, with only 50 pW switching power consumption, rivalling state-of-the-art metal oxide-based counterparts. We show device recyclability and stable, reconfigurable operation following disassembly, material collection and re-fabrication. We further demonstrate synaptic plasticity and neuronal leaky integrate-and-fire functionality, with disposable applications in smart packaging and simulated medical image diagnostics. Our work shows a sustainable pathway towards printable, high-yield, reconfigurable neuromorphic devices, with minimal environmental footprint. § MAIN Neuromorphic computing systems, exemplified by memristors, have immense potential in simplifying computational and storage structures with improved computational efficiency.<cit.> Advancements in neuromorphic engineering not only offer a new technological pathway for future cognitive systems on silicon,<cit.> but also align with the demands of organic electronics for low power consumption and high integration potential.<cit.> However, challenges in fabricating these versatile neuromorphic systems often arise from the distinct materials and manufacturing techniques required for various components, such as electronic synapses and neurons. These variations can introduce technical complexity, increase resource consumption, and lead to material incompatibility challenges.<cit.> In addition, traditional semiconductor- and plastic-based commercialised neuromorphic devices may also contribute to the rapidly growing e-waste problem.<cit.> From this perspective, additive manufacturing of reconfigurable memristors on sustainable substrates offers an efficient and eco-friendly approach for neuromorphic systems in transient or disposable applications.
Reconfigurable memristors, with their ability to emulate both synaptic and neuronal functions, greatly simplify the circuitry architecture and enhance integration density.<cit.> Although there have been some efforts in additive manufacturing of simple, traditional electronic circuits on paper,<cit.> multifunctional reconfigurable memristors on this eco-friendly substrate have remained unexplored. Here we demonstrate inkjet-printed reconfigurable memristors on paper. Achieving a 90% production yield from a 16 × 65 array, the memristor devices exhibit an ON-OFF ratio of up to 10^5 and an operation voltage of < 0.5 V. Due to the unique stacking configuration created in the inkjet-printed layer and the abundant sulfur vacancies generated in MoS_2 during exfoliation, the Ag/MoS_2/Au-structured devices exhibit both non-volatile resistive switching (RS) and volatile switching properties, allowing synaptic and neuronal functionalities in a single device. We demonstrate that the majority of device components can be recycled and reused, with re-fabricated devices also exhibiting robust performance, with an ON-OFF ratio exceeding 10^5 and an operational voltage of < 0.65 V. To illustrate the versatile application scope of our memristors, we first explore the reconfigurability of individual devices and demonstrate tri-mode electronic time-temperature indicators (TTI) on a paper substrate for smart packaging applications. Subsequently, we extend our focus to the system level, where we develop a lightweight artificial neural network (ANN)-based algorithm for efficient extraction of key features from color images. We then adapt this algorithm for diabetic retinopathy (DR) screening. Accounting for device variation extracted from our 16×65 printed memristor array, simulation results indicate that this system identifies all four types of lesions, achieving specificity and accuracy indices of 90%. Our paper-based reconfigurable memristors show clear advantages of multi-functionality and low-cost, high-yield manufacturability at a large scale, and hold promise towards future sustainable electronics. § NANOFLAKE INK FORMULATION AND INKJET PRINTING Figure 1a illustrates the process, from ink formulation to device fabrication and recycling pathways. The high-pressure homogenization (HPH) process exfoliates bulk MoS_2 powder (Sigma-Aldrich) into few-layer nanoflakes in isopropyl alcohol (IPA).<cit.> This procedure leverages the combined effects of cavitation, shearing, and impact forces within the shearing chamber. By controlling the number of processing cycles, we can tune the lateral dimensions of the exfoliated 2D materials while maintaining a relatively consistent thickness. Atomic force microscopy (AFM) statistics in Figure S1 reveal that the lateral size of the exfoliated MoS_2 flakes after 250 cycles is 42±15 nm with a thickness of 2.6±0.8 nm. The uniformly sized nanoflakes ensure a stable inkjet printing process, crucial for maintaining consistency in large-area device fabrication. This uniformity also promotes a homogeneous surface topology, pivotal in ensuring high device yield and reducing device-to-device variation.<cit.> UV-Vis absorption spectroscopy of the MoS_2-IPA dispersion at varying diluted concentrations exhibits two characteristic excitonic peaks (Fig.
1b), located near 670 nm (A) and 615 nm (B). These peaks correspond to the direct bandgap excitonic transitions at the K point in the Brillouin zone of the 2H phase of MoS_2, indicating that the inherent phase of MoS_2 has been preserved during the exfoliation process.<cit.> As shown in Fig. 1c, X-ray photoelectron spectroscopy (XPS) on the MoS_2 nanoflakes shows characteristic peaks at 229 eV (Mo 3d_5/2), 232.2 eV (Mo 3d_3/2), 163 eV (S 2p_1/2), and 161.8 eV (S 2p_3/2), respectively. These detected binding energies align with those expected for pristine MoS_2 without oxidation.<cit.> The XPS results also reveal a Mo/S stoichiometry of 1:1.83, indicating the generation of sulfur vacancies during the HPH process. We hypothesise that the vacancies foster the interlayer diffusion needed to form conductive filaments (CF), engendering efficient pathways to modulate RS, similar to the observation for grain boundaries in hexagonal boron nitride.<cit.> Therefore, the uniform nanoflakes with sulfur vacancies exfoliated via the HPH technique serve as a robust basis for the fabrication of filamentary memristors. The exfoliated flakes are stably dispersed in a binary solvent of IPA/2-butanol (10 volume%) to formulate the ink.<cit.> The resultant ink allows smooth inkjet printing without nozzle clogging. The use of binary solvents also helps suppress non-uniform deposition.<cit.> Figure 1d illustrates an example of a pattern inkjet-printed using the formulated ink on a paper substrate. The insets showcase the corresponding optical micrographs of selected areas within Fig. 1d, illustrating the uniform deposition and clear edges of the printed patterns. We define printing accuracy in terms of pattern fidelity, which quantitatively assesses the alignment between the designed and the actual printed patterns. It is assessed by overlaying the printed and imported images, and quantifying the deviation as a percentage of the total printed area (Fig. S2). At the microscopic scale (scale bar 200 µm), the average printing accuracy reaches 98%. Stable jetting and such high printing accuracy with our ink formulation enable uniform material deposition for device fabrication. § CHARACTERIZATION OF INKJET-PRINTED 2D MEMRISTORS The printed memristors, featuring an Ag/MoS_2/Au structure, are fabricated on PEL-60 paper substrate (see Methods). The high solvent absorption capacity of the substrate facilitates rapid ink drying after deposition. In addition to the use of binary solvents, this further aids the suppression of non-uniform deposition and avoids the need for high-temperature annealing to remove the carrier solvents trapped in the printed layers. To investigate the impact of active layer thickness on the RS characteristics of the printed devices, we fabricate an array of 5,200 (80×65) memristors; Fig. 2a. The array is divided into five sections, with increasing numbers (from 10 to 50 print repetitions) of MoS_2 layers. Figure 2b shows the top-right corner of the array, while Fig. 2c displays a 3×4 set of devices within this region. For each 16×65 section, 100 devices are randomly selected for measurement (Fig. S3). Statistical analysis reveals that 30-layer devices exhibit the highest yield of 93% (Fig. 2d, S4). Moreover, 92.5% of the working devices exceed an ON-OFF ratio of 5 decades (Fig. 2f), with a mean ON-OFF ratio of 5.5±0.3 decades (Fig. S4), indicating well-managed device-to-device variation (Fig. S5). The 30-layer device also strikes an optimal balance between the ON-OFF ratio and operation voltage (Fig. S6).
Therefore, the 30-layer device is used in all subsequent evaluations unless stated otherwise. Cross-sectional transmission electron microscopy (TEM) (Fig. 2e) and the corresponding energy-dispersive X-ray spectroscopy (EDS) elemental mapping (Fig. S7) reveal that the 30-layer device has an active layer thickness exceeding 500 nm with minimal oxidation. This may result in a high initial resistance, beneficial for achieving a large ON-OFF ratio. It also reveals that, upon deposition via inkjet printing, the exfoliated nanoflakes create a stacked architecture. This introduces a plethora of potential diffusion pathways. These pathways are essential for the formation of CFs and lay the foundation for low-voltage operation of the device.<cit.> Under different compliance current (CC) conditions, we can control the formation and rupture of CFs to manipulate the transition between volatile and non-volatile states. When the CC is 1 mA, the application of a SET voltage (positive voltage) oxidizes the silver atoms into silver ions, which migrate through the MoS_2 layer and connect to the grounded bottom electrode, forming robust CFs.<cit.> These filaments remain stable for more than 5000 s (Fig. S8), demonstrating excellent data retention capabilities. Thus, the device necessitates a RESET process to break these filaments using a negative voltage (Fig. 2g), which might be a consequence of the synergistic effects of both the dissolution of Ag through reverse ionic diffusion and thermally induced filament disruption. Under this CC, the ON-OFF ratio of the memristor reaches 10^5 with a sharp turn-on slope of 3.07 mV decade^-1, accompanied by a mean SET voltage of 0.29±0.10 V and a RESET voltage of -0.44±0.20 V (Fig. S9). Notably, the critical RS performance parameters outlined above, such as an operational voltage below 0.5 V and an ON-OFF ratio exceeding 10^5, compare favorably with those of other high-performance memristors fabricated by chemical vapor deposition (CVD) or complementary metal-oxide-semiconductor (CMOS) techniques; see Table S1. When the CC is decreased to 0.01 mA, the devices enter the volatile state (Fig. 2h). In this regime, the thinner CFs formed during the SET process tend to spontaneously dissociate upon removal of the electric field due to thermally assisted diffusion and the minimum energy effect.<cit.> This exemplifies typical volatile switching behavior. With an ON-OFF ratio of 10^5, the average SET voltage of the memristor is 0.40±0.19 V (Fig. S10), with a corresponding power consumption of only 50 pW. The high ON-OFF ratio and low power consumption of the volatile switching make it ideal for use in selectors to strongly reduce sneak currents within crossbar arrays of memory devices.<cit.> Subsequently, to elucidate the operating mechanisms of our devices, we deploy a range of complementary techniques, including conduction mechanism analysis, electrode substitution, in-situ microscopic examination, and defect engineering. Conduction mechanism analysis indicates that the formation, rupture, and self-dissolution of CFs significantly affect the operational mechanism of the device (see Fig. S11 for detailed notes). We then carry out electrode substitution experiments to confirm that the Ag electrodes dominate these CF dynamics. We use inkjet-printed graphene (Gr) electrodes to substitute either the top Ag or the bottom Au electrodes. The Gr/MoS_2/Au device, without the top Ag electrode, shows no RS performance (Fig. S12a).
Conversely, the Ag/MoS_2/Gr device, with the bottom electrode switched from Au to Gr, exhibits volatile RS with a reduced operation voltage of 0.30 ± 0.10 V (Fig. S12b) compared to the original devices (Fig. 2h). The volatile RS performance can be attributed to the chemically inert characteristic of graphene against Ag atoms, which destabilizes the formed Ag CFs and thereby facilitates volatile switching.<cit.> This implies possible future opportunities to further reduce energy consumption and fine-tune the device operational voltage by simply replacing electrode materials. Next, we use conductive AFM (CAFM) in contact mode to investigate the dynamics of the Ag CFs at the nanoscale (Fig. S13a). We replace the bottom Au electrode with the Pt-Ir CAFM probe. With the Pt-Ir probe grounded and a positive voltage sweep applied on the inkjet-printed silver, the Ag/MoS_2/Pt-Ir structure behaves like a nanoscale memristor. In the first sweep, we observe volatile RS switching with a 10^4 ratio. With repeated voltage scanning, the SET voltage and ON-OFF ratio gradually decrease, and the device eventually stabilizes in the LRS (Fig. 2j). This conductance increase implies synaptic plasticity, providing in-situ evidence of our device's transition from the volatile state to the non-volatile state.<cit.> Upon applying a higher voltage to the sample, the location where thick and persistent filaments form can be directly observed in the current map (Fig. S13b,c). Lastly, we study the influence of sulfur vacancies on the RS performance of our devices. We fabricate MoS_2 memristor devices with 10 to 40 printed layers (Fig. S14) to serve as a control group in comparison to the original devices. Following this, we employ the dithiolated conjugated molecule 1,4-benzenedithiol (BDT) to heal the sulfur vacancies and to facilitate the covalent bridging of neighbouring MoS_2 flakes (see Methods). Due to the healing of vacancies, all the BDT-treated devices (Fig. S14) show a larger breakdown voltage of the MoS_2 layer compared to the untreated counterparts (Fig. S6). For example, the 30-layer device requires a forming voltage of ∼9 V (Fig. 2k), while its untreated counterpart operates at 0.5 V (Fig. 2g). Upon conducting subsequent I-V sweep operations, all the BDT-treated devices display a noticeable decrease in the ON-OFF ratio, eventually leading to the complete disappearance of the RS phenomenon within three scans or fewer (see Fig. 2k, S14 for the 10-40-layer BDT-treated devices). We attribute these phenomena to the inhibition of silver ion permeation pathways within the MoS_2 layers caused by the healing of vacancies through the BDT treatment. This inhibition may impede the formation of CFs in MoS_2. Only a high forming voltage can break down the MoS_2 layer and form the CFs to set the device to the LRS. However, the filaments formed under such an excessively high voltage may cause irreversible damage to the devices, resulting in a minimal ON-OFF ratio. The retention measurements indicate that even when the CC is below 0.01 mA, the BDT-treated devices maintain the stored information, unlike the original devices that transition into a volatile state under the same CC (Fig. S14d). The sulfur vacancies may promote the dissociation of Ag CFs by enhancing their instability under low CCs. Thus the healing of these vacancies may extend the retention of the CFs. Crucially, our results highlight that the sulfur vacancies introduced by the HPH process and the permeation pathways within the MoS_2 layer may promote the RS and reconfigurable functionality of the device.
Recyclable electronics are highly desirable for reducing the ever-increasing environmental footprint of electronic waste and promoting sustainable development. This is particularly important for transient or disposable applications.<cit.> In light of this, we explore the recyclability of our memristors. Our device exhibits an endurance of over 2500 cycles (Fig. S15). Upon reaching the end of its lifespan, all components of the printed memristor can be recycled and reused. In particular, we can detach the inkjet-printed MoS_2 and the silver electrodes on MoS_2 via simple ultrasonic cleaning (Fig. S16a,b). The detached MoS_2 nanoflakes and Ag nanoparticles are recollected and separated (Fig. S16c) through vacuum filtration and high-speed centrifugation (see Methods). This allows re-fabrication of the MoS_2 layer and silver electrodes onto the same paper substrate with the gold electrode (Fig. S16d). Consistent with the performance of the original device, the reprinted memristor transitions between the non-volatile (Fig. S16e) and volatile states (Fig. S16g) under different CCs. Despite variations in the operational voltage (Fig. S16f,h), the reprinted device still exhibits an ON-OFF ratio exceeding 10^5 and mean operational voltages below 0.65 V (Fig. S16e,g), affirming its consistent reliability throughout the recycling process. § RECONFIGURABLE MEMRISTIVE PERFORMANCE IN INKJET PRINTED DEVICES Due to the ability to dynamically transition between non-volatile and volatile states, our reconfigurable memristors can effectively emulate both synaptic and neuronal functionalities. This emulation is achieved by varying voltage amplitudes and durations: lower voltage pulses enable transient changes akin to neuronal firing, while higher voltages induce synaptic-like behavior characterized by gradual resistance changes after the CFs form, mimicking synaptic plasticity. Neurons process information by integrating incoming signals (Fig. 3a). When subjected to external stimuli, the neuron, initially at a resting potential, begins to integrate voltage changes from these input signals into its membrane potential.<cit.> Once the membrane potential reaches a specific threshold, the neuron `fires' an action potential. The membrane potential then swiftly resets to the initial resting state, preparing to respond to subsequent stimuli.<cit.> This leaky integrate-and-fire (LIF) functionality can be replicated by memristors operating in the volatile threshold switching state (Fig. 3b). In response to a continuous voltage pulse train (100 µs, 1 V; 100 µs, 0.1 V), no current response is detected until a sudden surge after receiving the 5^th pulse, mimicking the integrate-and-fire behaviour. The small current spikes occurring at the edges of the voltage pulses are possibly induced by parasitic capacitance. After firing, the current falls to the limit of detection again during the subsequent voltage pulses, indicating that the memristor has recovered to the low conductance state and is prepared for the next-cycle integration process. This self-recovery stems from the fragile nature of the Ag CFs. We suggest that the formed CFs are thin and prone to spontaneous dissociation, which emulates the repolarization behavior of neurons.<cit.> Figure 3c illustrates the relationship between the pulse duration and the number of pulses required for `firing'. For this, we denote a firing event when the current exceeds 1 µA under continuous voltage pulses. The firing ratio is defined as the percentage of such firing events for a train of 100 pulses.
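The LIF behaviour described above can be summarized with a simple phenomenological model; the sketch below is illustrative only, with an abstract internal state standing in for filament growth and all parameter values chosen by us rather than fitted to the devices.

```python
import numpy as np

def lif_firing_ratio(n_pulses=100, v_pulse=1.0, theta=3.5,
                     pulse_w=100e-6, period=200e-6, leak_tau=1e-3):
    # Each pulse adds to an internal state (a proxy for filament growth);
    # the state leaks between pulses and resets after a firing event,
    # mimicking the spontaneous dissolution of the thin Ag filaments.
    state, fires = 0.0, 0
    for _ in range(n_pulses):
        state += v_pulse                                   # integrate the pulse
        state *= np.exp(-(period - pulse_w) / leak_tau)    # leak between pulses
        if state >= theta:
            fires += 1
            state = 0.0                                    # filament dissolves
    return fires / n_pulses

# With these illustrative constants, the state first crosses the threshold on
# the 5th pulse, qualitatively matching the pulse-train response of Fig. 3b.
```

The reset-on-fire step mirrors the self-recovery of the device after each firing event.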
We see that as the voltage pulse width widens with a fixed pulse period, the memristor more readily reaches the threshold for firing. The ability of the memristor to adjust its firing threshold in this way underscores its potential for efficient and adaptive neuromorphic computing. While neurons follow a threshold-based firing model, synapses facilitate information transfer by fine-tuning their synaptic plasticity in a highly parallel manner. The diffusion dynamics of Ag resembles the behaviour of synaptic Ca^2+ in biological systems (Fig. 3d).<cit.> Applying large voltage pulses can induce the growth of CFs, likely because the high-rate injection of Ag^+ into the filament volume predominates over filament self-dissolution.<cit.> This process is akin to the synaptic strength enhancement caused by a Ca^2+ surge in biological systems. Therefore, the steady-state evolution of the filament can effectively emulate synaptic plasticity.<cit.> When exposed to a large voltage pulse train (15 ms, 2.5 V; 5 ms, 0.1 V), a progressive increase in conductance is observed (Fig. 3e). It mirrors the continuous growth in synaptic weight between neurons, a key process in biological memory formation and strengthening. The synaptic weight can also be adjusted through the pulse frequency and amplitude (Fig. S17). This feature resembles the neurological system's mechanism, where the strength of memory consolidation is influenced by the frequency and intensity of neural stimuli.<cit.> At this stage, the current under the read voltage shows no persistent increase; the device therefore remains in a volatile state, exhibiting short-term plasticity (STP). When the voltage pulse is increased to 3.5 V, the current surges sharply, transitioning into a non-volatile state (Fig. 3f). This is likely due to the sudden formation of robust CFs under the substantial stimulus. Within the same pulse cycle, although the synaptic current reaches the CC of 5 mA during the pulse train, a gradual increase in conductance is observed under the read voltage. The increase in current exhibits a near-linear trend (inset of Fig. 3f), which is favorable for constructing artificial neural networks (ANNs).<cit.> After the completion of the pulse train, the device continues to maintain its current under the read voltage, confirming the transition from STP to long-term plasticity (LTP) and signifying the establishment of a long-term connection between the synapses. Collectively, by modulating the voltage pulse parameters, the device can therefore simulate the functionalities of both synapses and neurons.
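The competition between Ag^+ injection and filament self-dissolution described above can be captured by a toy rate model; the sketch below is purely illustrative (the growth threshold, rate constants, stability point, and conductance cap are our assumptions, not extracted from the measurements).

```python
import numpy as np

def filament_weight(pulse_amps, k=0.3, v_th=1.0, tau=50e-3,
                    period=20e-3, g_stable=1.0, g_max=3.0):
    # Each pulse grows the conductance in proportion to its over-threshold
    # amplitude (Ag+ injection), capped at a compliance-limited maximum;
    # between pulses the filament partially self-dissolves, unless it has
    # grown past a stability point where it persists (STP-to-LTP transition).
    g, trace = 0.0, []
    for v in pulse_amps:
        g = min(g + k * max(v - v_th, 0.0), g_max)   # pulse-driven growth
        if g < g_stable:
            g *= np.exp(-period / tau)               # self-dissolution (volatile)
        trace.append(g)
    return np.array(trace)

# With these constants, 2.5 V trains saturate below g_stable and would relax
# once pulsing stops (STP-like), whereas 3.5 V trains drive g past g_stable
# with a near-linear rise toward the cap, after which the weight persists
# (LTP-like).
```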
However, manufacturing complexity and cost limit the wide adoption of electronic TTIs.<cit.> The operation principle and conceptual design of the electronic TTI are illustrated in Figs 4b and 4c, respectively. Specifically, reconfigurable memristors simplify the neuronal circuit design (Fig. S18) and concurrently function as both the processor and the data storage unit (Fig. 4b). The LIF functionality tracks the thermal history of the package during its journey through the supply chain, while the reconfigurable non-volatile state provides an over-threshold warning. To test our TTI implementation, we use different voltage pulses to represent the temperature sensor output corresponding to different temperatures. The memristor efficiently processes these signals and converts them into a time-temperature integral. Its `leaky' feature only accounts for information reaching the firing threshold, effectively filtering out insignificant temperature fluctuations. In contrast, significant temperature variations are recognized by the device transitioning into a non-volatile state, thereby retaining the information. Figure 4d illustrates the implementation of our TTI system. Using a simple indicator like a microLED, the output from the TTI can be displayed (Fig. S19, Supporting movies). When the temperature remains within the ideal range, the time-temperature integral is insufficient to trigger any firing event (Fig. 4e). The LED remains off, indicating that the product quality is assured. Upon exposure to moderately elevated temperatures, the system commences the integration of the temperature signals. As the thermal conditions persist, the time-temperature integral surpasses the threshold, triggering firing events. Consequently, the LED flashes, serving as a warning to cold chain management about transient deviations (Fig. 4f). Once the temperature returns to the normal range, the display system reverts to the off state due to the memristor's `leaky' characteristic. This suggests that the memristor-based TTI system is capable of effectively conducting real-time monitoring of minor and short-lived temperature anomalies. Furthermore, significant temperature exceedance over an extended period, or extreme fluctuations within a brief period, would lead to the formation of robust CFs (as demonstrated in Fig. 3f), pushing the time-temperature integral far beyond the threshold. Under such circumstances, robust CFs form, shifting the memristor into a non-volatile state (Fig. 4g). The resulting continuous illumination of the LED signals potential spoilage of the packaged contents. By replacing the gold (Au) bottom electrode with an inkjet-printed graphene electrode, the threshold for initiating firing events in the device can be further lowered to 0.4 V (Fig. S20). This underscores not only the potential for further reduction in energy consumption but also possibilities for broadening and refining the range of operating temperatures, and extending application scopes by replacing materials in the inkjet-printed memristors. Additionally, the memristors can be coated with parylene to obtain waterproof TTI operation, highly desirable for real-world applications (Fig. S21). TTIs based on printed reconfigurable memristors thus have potential in disposable and smart packaging.
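The tri-mode behaviour of the TTI can be summarized with the following conceptual sketch; the threshold values, leak constant, and latching rule are illustrative stand-ins for the device physics, not calibrated parameters.

```python
import numpy as np

def tti_states(temps, t_safe=8.0, t_extreme=30.0, tau=10.0,
               theta_fire=5.0, theta_latch=25.0):
    # Leakily integrate the over-threshold temperature; crossing theta_fire
    # corresponds to volatile firing (LED flashes), while a large integral or
    # an extreme excursion latches the non-volatile state (LED stays on).
    integral, latched, states = 0.0, False, []
    for T in temps:                                  # one sample per time step
        integral = integral * np.exp(-1.0 / tau) + max(T - t_safe, 0.0)
        if integral > theta_latch or T > t_extreme:
            latched = True                           # robust filament forms
        if latched:
            states.append("SPOILED")                 # LED continuously on
        elif integral > theta_fire:
            states.append("WARNING")                 # transient firing: flashes
        else:
            states.append("OK")                      # LED off
    return states
```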
§ MEMRISTOR ARRAY FOR EDGE COMPUTING IN MEDICAL IMAGE ANALYSIS The multilevel conductance states that can be implemented with our paper-based memristor system may provide artificial neural networks (ANNs) with the capability for refined weight adjustments, potentially enhancing ANN-based image recognition performance. Moreover, lightweight neural networks for edge computing facilitate local processing, which not only minimizes cloud data transfer but also improves data security and transmission speeds. This approach offers distinctive value for medical image analysis at the point of care. The democratization of early detection for `stealth diseases' necessitates platforms that are user-friendly and economically efficient. Our system not only offers portability and cost-effectiveness but may also allow disposal of the physical implementations after use with minimal environmental footprint. Here we propose a memristor-based DR screening system. This incorporates a designed algorithm for feature extraction<cit.> and utilizes a memristor-array-based 81×16×2 ANN for subsequent feature analysis. Each memristor functions as a synapse connecting neurons across consecutive layers, with its conductance states representing the synaptic weight. The neurons are defined by a mathematical model that incorporates a weighted sum followed by an activation function. DR is a severe complication affecting approximately one-third of diabetic patients and is the primary cause of blindness within this demographic.<cit.> The stealthy progression of DR often results in significant retinal damage before diagnosis, owing to the lack of symptoms in its early stages.<cit.> Our DR screening system conducts lesion analysis offline by edge computing to identify abnormal changes in retinal blood vessels. In contrast to existing DR testing software that requires uploading raw images to the cloud for analysis, our system only reports the post-analysis grading results, without the need for data transfer (Fig. 5a). In our approach, retinal images with retinopathy are imported from the DiaretDB1 dataset, followed by feature extraction.<cit.> In the raw image (Fig. 5b), the backgrounds containing the optic disc and vasculature are first removed to enhance the visibility of candidate lesions (Fig. 5c). The algorithm then successively performs binarization and morphological operations on the images to remove noise and identify the candidate lesions. Based on their distinct appearance and underlying pathogenesis, lesions are categorized into bright lesions (Fig. 5d) and red lesions (Fig. 5e). Red lesions primarily comprise retinal hemorrhages and microaneurysms, while bright lesions mainly encompass hard exudates and soft exudates.<cit.> For each type, the algorithm separately extracts the structural, color, and derivative features and converts these features into 81-dimensional vectors (Fig. S22). The vectors are subsequently input into the designed memristor-array-based 81×16×2 ANN for feature analysis. We use the synaptic plasticity of the memristors to conduct the simulation. Considering the variance of our printed devices, we randomly select 9 devices in the 30-layer memristor array in Fig. 2a and input their conductance potentiation and depression cycles (Fig. S23). During a forward pass, the processed data is fed into the ANN, yielding a grading result. The ANN undergoes training through the backpropagation algorithm, differentiating regions with lesions from those without and adjusting synaptic weights according to the output errors.
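A minimal PyTorch sketch of such an 81×16×2 network with quantized, sign-based weight updates is given below; the sigmoid activation, step size, and loss choice are our assumptions, while the sign-function update follows the simulation procedure described in the Methods.

```python
import torch
import torch.nn as nn

# 81-16-2 multilayer perceptron; each weight maps onto a memristor conductance.
mlp = nn.Sequential(nn.Linear(81, 16), nn.Sigmoid(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

def train_step(features, labels, step=1e-2):
    # features: (B, 81) lesion feature vectors; labels: (B,) lesion / no lesion.
    loss = loss_fn(mlp(features), labels)
    mlp.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in mlp.parameters():
            # Potentiate or depress each weight by a fixed conductance step,
            # according to the sign of its back-propagated gradient.
            p -= step * torch.sign(p.grad)
    return loss.item()
```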
The training encompasses 1000 epochs. The detection result for a representative image is shown in Fig. 5f, with the lesion areas marked in blue. Fig. 5g summarizes the overall accuracy and specificity for all four lesion types, each exceeding 90%. These simulated results highlight the promising capabilities of our printed memristors for potential applications in transient or disposable medical image processing. § DISCUSSION Our paper-based, inkjet-printed, reconfigurable memristors exhibit outstanding resistive switching performance. The recyclability of the memristors marks an important step towards sustainable electronics. By demonstrating our memristors in smart packaging and in a designed framework for medical image processing, we underscore the potential of our paper-based devices. The technological barrier to implementing our approach on biodegradable polymers is likely to be minimal. Our research therefore provides valuable insights into the development of sustainable and efficient neuromorphic systems and opens up new avenues for future paper-based and, potentially, other biodegradable substrate-based electronics. § METHODS §.§ HPH process for MoS_2 ink preparation The PSI-40 high-pressure homogenizer, together with the D202D diamond interaction chamber (a dual-slot deagglomeration chamber with a microchannel dimension of 87 µm), is used to make the printable ink of MoS_2 nanoflakes. MoS_2 powder (∼6 µm, from Sigma-Aldrich) is first mixed with anhydrous IPA (from Sigma-Aldrich) at a concentration of 5 mg mL^-1 to formulate the raw dispersion. During the high-pressure homogenization (HPH) process, a combination of cavitation, shear, and impact forces reduces the size of the particles and exfoliates the MoS_2 crystals into nanoflakes. This nanoflake dispersion is then collected and centrifuged for 1 hour at 4000 rpm (corresponding to a relative centrifugal force of 1520g) with a Hettich Universal 320 Benchtop Centrifuge. This centrifugation step separates the supernatant containing exfoliated MoS_2 nanoflakes from the unexfoliated materials (sediments within the container after centrifugation). The supernatant is collected as the raw IPA ink. The concentration of MoS_2 is then adjusted for inkjet printing, with 2-butanol (from Sigma-Aldrich) added into the ink at a volume ratio of 10% to reduce the coffee-ring effect during printing by inducing Marangoni flow. The final concentration of the MoS_2 ink is 4 mg mL^-1. The graphene ink is also formulated using the HPH technique, with the addition of 3% polyvinylpyrrolidone (PVP). The concentration of the graphene ink is 2 mg mL^-1.
§.§ UV-Vis measurements A Cary 7000 UV-VIS-NIR Spectrometer is used to measure the absorbance of the MoS_2 ink at a given concentration. The Lambert-Beer law is used to calculate the concentration of the ink based on the measured absorbance, the optical path length (10 mm) of the PMMA cuvettes, and the extinction coefficient of MoS_2 nanoflakes at 672 nm (3400 L g^-1 m^-1). §.§ AFM measurements Bare silicon wafers are used as substrates for the deposition of MoS_2 nanoflakes for AFM characterization. Substrates are cleaned by sonication in acetone and IPA (5 mins for each step) and are rinsed with DI water multiple times before drying under nitrogen blow. Then 1 mL of (3-aminopropyl)triethoxysilane (APTES) is added to 50 mL of DI water to make the APTES solution. The silicon substrates are left in the APTES solution for 15 mins to form a self-assembled monolayer (SAM) of APTES. The substrates are then put into clean water to stop the self-assembly process and are thoroughly rinsed. After this, the substrates are dried again under nitrogen blow. Diluted MoS_2 solutions (0.005 g L^-1) are drop-cast onto these APTES-modified substrates and left for 40 s to let the MoS_2 nanoflakes bind with the APTES monolayer. The samples are then rinsed thoroughly with DI water to remove the unbound flakes. The samples are finally dried and measured with a Bruker Icon AFM to obtain statistics of the lateral size and thickness of the flakes. The CAFM measurements are conducted on a Bruker Dimension Icon Pro instrument, using a Bruker Extended TUNA module. For the measurements, we use conductive Pt-Ir coated soft SCM-PIC-V2 tips (Bruker make, spring constant = 0.1 N/m, nominal tip radius 25 nm). §.§ Device Fabrication PEL60 printing paper is used as the substrate. A 5 nm layer of chromium (as an adhesion layer) and a 20 nm layer of gold are deposited sequentially on the substrate using an Edwards E306A Thermal Evaporator to form the bottom electrode. The evaporation process is maintained at low pressure (2×10^-6 mbar) and a low rate (0.1 Å/s). A Fujifilm Dimatix DMP-2800 inkjet printer is used for inkjet printing. The 2D MoS_2 and graphene inks are printed with a drop spacing of 25 µm. The Ag ink (from Sigma-Aldrich) is printed perpendicular to the bottom electrodes with a drop spacing of 25 µm and is then annealed at 100 ^∘C for 1 hour in the glovebox to form the top electrodes. §.§ Device level electrical characterization Devices are characterized using a Suss MicroTec Probe Station connected to a Keithley 4200-SCS semiconductor analyser and a B2902A source measure unit (SMU). §.§ XPS measurements Inkjet-printed MoS_2 samples are sent to the Harwell XPS Center for characterization. §.§ BDT Treatment Procedure MoS_2 layers are deposited on a paper substrate with evaporated gold electrodes via inkjet printing. The samples are then transferred into a nitrogen-filled glove box. Inside the glove box, a 50 mM BDT solution in anhydrous hexane is prepared, and the samples are gently immersed in this solution. The container is sealed and left to sit for 24 hours. Afterwards, the samples are soaked and gently rinsed in anhydrous hexane. Subsequently, the samples are annealed at 75 °C for an hour within the nitrogen glove box.
Finally, silver electrodes are printed. §.§ Device Recycling Process The memristors printed on PEL60 printing paper are subjected to bath sonication in 5 mL of triethylene glycol monomethyl ether for Ag electrode removal. Following this, the paper is removed, gently rinsed with IPA, and allowed to dry. The triethylene glycol monomethyl ether-Ag nanoparticle dispersion produced from this process then undergoes vacuum filtration, with the filtrate, containing the silver nanoparticles, being collected and subjected to a 15-minute bath sonication. Bath sonication in 5 mL of IPA is performed on the dried PEL60 printing paper to detach the inkjet-printed MoS_2. The paper is then removed and rinsed with IPA in preparation for re-printing. The filter paper from the first vacuum filtration step is also immersed in IPA and subjected to ultrasonic treatment to recover any MoS_2 flakes that may have detached along with the Ag electrode and been retained by the filter paper. The resulting dispersion is subjected to solvent exchange via high-speed centrifugation for the dispersion of MoS_2 in NMP. Following this, a second vacuum filtration step is performed. The precipitate collected on the filter paper, which consists of MoS_2 flakes, is then immersed in IPA and subjected to a 15-minute bath sonication. §.§ Simulation of DR screening To leverage the difference between the conductance values of the synaptic devices, errors between the output vector and the ground truth are back-propagated to each layer during the weight updating process. Here, weights are either potentiated or depressed according to the output of a sign function. A three-layer multi-layer perceptron (81×16×2) is constructed and trained on the DiaretDB1 dataset, following a three-stage system flow with parametric changes and a confidence level of 75%. A multi-layer perceptron (MLP) is a type of neural network that consists of multiple layers of perceptrons, each layer connected to the next. It is capable of learning complex functions by having multiple layers of neurons, each applying nonlinear transformations to the input data. Initially, removal of the backgrounds comprising the optic disc and vasculature from the fundus images is conducted. Subsequently, a feature extraction phase is implemented, wherein the structural, color, and derivative attributes of candidate lesions are calculated and transformed into 81-dimensional vectors. Lastly, the entire system is trained on these vectors over a course of 1000 epochs.
[1] Yang, J. J., Strukov, D. B. & Stewart, D. R. Memristive devices for computing. Nature Nanotech 8, 13–24 (2013).
[2] Boybat, I. et al. Neuromorphic computing with multi-memristive synapses. Nat Commun 9, 2514 (2018).
[3] Strukov, D. B. et al. The missing memristor found. Nature 453, 80–83 (2008).
[4] Rao, M. et al. Thousands of conductance levels in memristors integrated on CMOS. Nature 615, 823–829 (2023).
[5] Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).
[6] Wong, H.-S. P. & Salahuddin, S. Memory leads the way to better computing. Nature Nanotech 10, 191–194 (2015).
[7] Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
[8] Mehonic, A. & Kenyon, A. J. Brain-inspired computing needs a master plan. Nature 604, 255–260 (2022).
[9] Jiang, C. et al. Mammalian-brain-inspired neuromorphic motion-cognition nerve achieves cross-modal perceptual enhancement.
Nat Commun 14, 1344 (2023). ref10 Fu, T. et al. Self-sustained green neuromorphic interfaces. Nat Commun 12, 3351 (2021). ref11 van de Burgt, Y. et al. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing. Nature Mater 16, 414–418 (2017). ref12 Cho, H. et al. Real-time finger motion recognition using skin-conformable electronics. Nat Electron 6, 619–629 (2023). ref13 Kumar, S., Wang, X., Strachan, J. P., Yang, Y. & Lu, W. D. Dynamical memristors for higher-complexity neuromorphic computing. Nat Rev Mater 7, 575–591 (2022). ref14 Zhang, X. et al. Hybrid memristor-CMOS neurons for in-situ learning in fully hardware memristive spiking neural networks. Science Bulletin 66, 1624–1633 (2021). ref15 Portilla, L. et al. Wirelessly powered large-area electronics for the Internet of Things. Nat Electron 6, 10–17 (2023). ref16 Wang, T. et al. Reconfigurable neuromorphic memristor network for ultralow-power smart textile electronics. Nat Commun 13, 7432 (2022). ref17 Fu, T. et al. Bioinspired bio-voltage memristors. Nat Commun 11, 1861 (2020). ref18 Conti, S. et al. Low-voltage 2D materials-based printed field-effect transistors for integrated digital and analog electronics on paper. Nat Commun 11, 3566 (2020). ref19 Karagiannidis, P. G. et al. Microfluidization of Graphite and Formulation of Graphene-Based Conductive Inks. ACS Nano 11, 2742–2755 (2017). ref20 Tang, B. et al. Wafer-scale solution-processed 2D material analog resistive memory array for memory-based computing. Nat Commun 13, 1–9 (2022). ref21 Wilcoxon, J. P., Newcomer, P. P. & Samara, G. A. Synthesis and optical properties of MoS2 and isomorphous nanoclusters in the quantum confinement regime. Journal of Applied Physics 81, 7934–7944 (1997). ref22 Eda, G. et al. Photoluminescence from Chemically Exfoliated MoS2. Nano Lett. 11, 5111–5116 (2011). ref23 Pan, C. et al. Coexistence of Grain-Boundaries-Assisted Bipolar and Threshold Resistive Switching in Multilayer Hexagonal Boron Nitride. Advanced Functional Materials 27, (2017). ref24 Hu, G. et al. A general ink formulation of 2D crystals for wafer-scale inkjet printing. Science Advances 6, (2020). ref25 Hu, G. et al. Black phosphorus ink formulation for inkjet printing of optoelectronics and photonics. Nat Commun 8, 278 (2017). ref26 Feng, X. et al. A Fully Printed Flexible MoS2 Memristive Artificial Synapse with Femtojoule Switching Energy. Advanced Electronic Materials 5, 1900740 (2019). ref27 Wang, W. et al. Surface diffusion-limited lifetime of silver and copper nanofilaments in resistive switching devices. Nat Commun 10, 81 (2019). ref29 Sun, J. et al. Liquid-like pseudoelasticity of sub-10-nm crystalline silver particles. Nature Mater 13, 1007–1012 (2014). ref30 Zhang, Y. et al. Highly Compact Artificial Memristive Neuron with Low Energy Consumption. Small 14, 1802188 (2018). ref31 Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nature Mater 16, 101–108 (2017). ref32 Sun, L. et al. Ultralow switching voltage slope based on two-dimensional materials for integrated memory and neuromorphic applications. Nano Energy 69, 104472 (2020). ref33 Li, H. et al. Single-Transistor Neuron with Excitatory–Inhibitory Spatiotemporal Dynamics Applied for Neuronal Oscillations. Advanced Materials 2207371, 1–9 (2022). ref34 Shi, Y. et al. Electronic synapses made of layered two-dimensional materials. Nat Electron 1, 458–465 (2018). ref35 Kuzum, D., Yu, S. & Wong, H. P.
Synaptic electronics: materials, devices and applications. Nanotechnology 24, 382001 (2013). ref36 Ippolito, S. et al. Covalently interconnected transition metal dichalcogenide networks via defect engineering for high-performance electronic devices. Nat Nanotechnol. 16, 592–598 (2021). ref37 Williams, N. X., Bullard, G., Brooke, N., Therien, M. J. & Franklin, A. D. Printable and recyclable carbon electronics using crystalline nanocellulose dielectrics. Nat Electron 4, 261–268 (2021). ref38 Dingledine, R., Borges, K., Bowie, D. & Traynelis, S. F. The Glutamate Receptor Ion Channels. Pharmacological Reviews 51, 1 (1999). ref39 Monaghan, D. T. & Bridges, R. J. The excitatory amino acid receptors. Annu. Rev. Pharmacol. Toxicol. 29, 365–402 (1989). ref40 Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology 117, 500–544 (1952). ref41 Kim, M. & Lee, J. Short-Term Plasticity and Long-Term Potentiation in Artificial Biosynapses with Diffusive Dynamics. ACS Nano 12, 1680–1687 (2018). ref42 Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nature Mater 16, 101–108 (2017). ref43 Bliss, T. V. P. & Lømo, T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. The Journal of Physiology 232, 331–356 (1973). ref44 Ambrogio, S. et al. Unsupervised Learning by Spike Timing Dependent Plasticity in Phase Change Memory (PCM) Synapses. Frontiers in Neuroscience 10, (2016). ref45 Ghaani, M., Cozzolino, C. A., Castelli, G. & Farris, S. An overview of the intelligent packaging technologies in the food sector. Trends in Food Science & Technology 51, 1–11 (2016). ref46 Göransson, M., Nilsson, F. & Jevinger, Å. Temperature performance and food shelf-life accuracy in cold food supply chains – Insights from multiple field studies. Food Control 86, 332–341 (2018). ref47 Skawińska, E. & Zalewski, R. I. Economic Impact of Temperature Control during Food Transportation—A COVID-19 Perspective. Foods 11, 467 (2022). ref48 Albrecht, A. et al. Implementation of Time Temperature Indicators to Improve Temperature Monitoring and Support Dynamic Shelf Life in Meat Supply Chains. J Package Technol Res 4, 23–32 (2020). ref49 Anbukarasu, P., Sauvageau, D. & Elias, A. L. Time-Temperature Indicator Based on Enzymatic Degradation of Dye-Loaded Polyhydroxybutyrate. Biotechnology Journal 12, 1700050 (2017). ref50 Wang, S. et al. Review of Time Temperature Indicators as Quality Monitors in Food Packaging. Packaging Technology and Science 28, 839–867 (2015). ref51 Roychowdhury, S., Koozekanani, D. D. & Parhi, K. K. DREAM: Diabetic Retinopathy Analysis Using Machine Learning. IEEE Journal of Biomedical and Health Informatics 18, 1717–1728 (2014). ref52 Global Prevalence and Major Risk Factors of Diabetic Retinopathy. Diabetes Care, American Diabetes Association. https://diabetesjournals.org/care/article/35/3/556/28568/Global-Prevalence-and-Major-Risk-Factors-of. ref53 Schoenfeld, E. R., Greene, J. M., Wu, S. Y. & Leske, M. C. Patterns of adherence to diabetes vision care guidelines: baseline findings from the Diabetic Retinopathy Awareness Program. Ophthalmology 108, 563–571 (2001). ref54 Zhang, X. et al. Prevalence of Diabetic Retinopathy in the United States, 2005–2008. JAMA: The Journal of the American Medical Association 304, 649–656 (2010). ref55 Papers with Code – DIARETDB1 Dataset.
https://paperswithcode.com/dataset/diaretdb1. ref56 Coleman, J. N. et al. Two-Dimensional Nanosheets Produced by Liquid Exfoliation of Layered Materials. Science 331, 568–571 (2011). § ACKNOWLEDGEMENTS M.X., N.M. and T.H. acknowledge support from the Engineering and Physical Sciences Research Council (EP/T014601/1, EP/L016087/1). G.Y. acknowledges support from the Royal Society. B.Z., F.T. and Z.C. acknowledge support from the China Scholarship Council. § AUTHOR INFORMATION Jinrui Chen and Mingfei Xiao: These authors contributed equally to this work. §.§ Contributions J.C., M.X., and K.W. fabricated the devices. J.C. and K.W. performed device characterization. M.X. and N.M. conducted ink formulation and characterization. Z.C., S.K., and J.C. designed the DR algorithm. Z.C., B.Z., G.Y. and Y.X. contributed to the discussion of results. B.Z. conducted the BDT-treatment experiment. S.F. conducted the cross-sectional TEM imaging. R.O., S.G. and J.C. designed and conducted the CAFM measurement experiments. T.H. directed and coordinated the research. J.C., M.X., Z.C. and T.H. wrote the manuscript. All authors participated in the scientific discussion and contributed to the writing of the manuscript. §.§ Corresponding author Correspondence to Tawfique Hasan ([email protected])
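As flagged in the DR-screening simulation subsection, the sign-based weight update can be captured in a few lines. The sketch below is a schematic reconstruction under stated assumptions: the 81×16×2 network size and sign-function rule are taken from the text, while the activations, step size, and random stand-in data are placeholders rather than the authors' pipeline or the DiaretDB1 features:

```python
import numpy as np

# Schematic 81x16x2 MLP with sign-based weight updates, mimicking
# potentiation/depression of memristive synapses by one conductance
# step per epoch. Data are random placeholders.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(0, 0.1, (81, 16)), rng.normal(0, 0.1, (16, 2))
X = rng.normal(size=(256, 81))                 # stand-in feature vectors
T = np.eye(2)[rng.integers(0, 2, 256)]         # one-hot stand-in labels
step = 0.01                                    # one conductance step

for epoch in range(1000):
    H = np.tanh(X @ W1)                        # hidden activations
    L = H @ W2
    L -= L.max(1, keepdims=True)               # numerically stable softmax
    P = np.exp(L); P /= P.sum(1, keepdims=True)
    dO = P - T                                 # output-layer error
    dH = (dO @ W2.T) * (1 - H**2)              # back-propagated error
    # potentiate or depress each weight by one step, per the sign rule
    W2 -= step * np.sign(H.T @ dO)
    W1 -= step * np.sign(X.T @ dH)
```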
http://arxiv.org/abs/2312.16501v1
{ "authors": [ "Jinrui Chen", "Mingfei Xiao", "Zesheng Chen", "Sibghah Khan", "Saptarsi Ghosh", "Nasiruddin Macadam", "Zhuo Chen", "Binghan Zhou", "Guolin Yun", "Kasia Wilk", "Feng Tian", "Simon Fairclough", "Yang Xu", "Rachel Oliver", "Tawfique Hasan" ], "categories": [ "cs.ET" ], "primary_category": "cs.ET", "published": "20231227101357", "title": "Inkjet-Printed High-Yield, Reconfigurable, and Recyclable Memristors on Paper" }
A Model-Independent Precision Test of General Relativity using Bright Standard Sirens from ongoing and upcoming detectors ==================================================================================== Gravitational waves (GWs) provide a new avenue to test Einstein's General Relativity (GR) using the ongoing and upcoming GW detectors by measuring the redshift evolution of the effective Planck mass proposed by several modified theories of gravity. We propose a model-independent, data-driven approach to measure any deviation from GR in the GW propagation effect by combining multi-messenger observations of GW sources accompanied by EM counterparts, commonly known as bright sirens (Binary Neutron Star (BNS) and Neutron Star Black Hole (NSBH) systems). We show that by combining the GW luminosity distance measurements from bright sirens with the Baryon Acoustic Oscillation (BAO) measurements derived from galaxy clustering, and the sound horizon measurements from the Cosmic Microwave Background (CMB), we can make a data-driven reconstruction of any deviation in the variation of the effective Planck mass (jointly with the Hubble constant) as a function of cosmic redshift. Using this technique, we achieve a precise test of GR as a function of redshift (z), with a precision of approximately 7.9% for BNSs at redshift z=0.075 and 10% for NSBHs at redshift z=0.225 with 5 years of observation from the LVK network of detectors. Employing CE&ET for just 1 year yields the best precision of about 1.62% for BNSs and 2% for NSBHs at redshift z=0.5 on the evolution of the frictional term, and a similar precision up to z=1. This measurement can uncover a potential deviation from GR predicted by any model that impacts GW propagation with ongoing and upcoming observations. Gravitational waves, gravitation, cosmology: observations § INTRODUCTION The General Theory of Relativity (GR) has stood as a cornerstone of our understanding of gravity for over a century. It elegantly explains how massive objects warp the fabric of spacetime, causing other objects to move along curved trajectories. However, in extreme astrophysical environments such as black holes and the early universe, GR faces challenges in its applicability <cit.>. Recently, a groundbreaking development occurred with the direct detection of gravitational waves (GWs) by the LIGO-Virgo collaboration <cit.>. This achievement has opened a new avenue for testing GR, particularly in the context of compact binary systems existing in relativistic regimes. These systems include Binary Neutron Stars (BNS), Neutron Star-Black Hole pairs (NSBH), and Binary Black Holes (BBH). These compact binaries offer unprecedented opportunities to delve into the fundamental nature of gravity across a vast range of mass scales and cosmological distances. Ground-based detectors like LIGO <cit.>, Virgo <cit.> and KAGRA <cit.> have played pivotal roles in this endeavor. Furthermore, the prospect of future space-based detectors such as LISA <cit.> and ground-based detectors such as LIGO-Aundha (LIGO-India) <cit.>[<https://dcc.ligo.org/LIGO-M1100296/public>], Cosmic Explorer (CE) <cit.>, and the Einstein Telescope (ET) <cit.> holds even greater promise for advancing our understanding of gravity and rigorously testing GR's predictions. Through these combined efforts, we aim to refine and expand our comprehension of the intricate interplay between gravity and the cosmos.
While various models proposing modifications to the theory of gravity have been put forth to address different scenarios, this study is dedicated to testing GR in the propagation of GWs through a model-independent, data-driven approach. The propagation speed and luminosity distance of GW and EM signals offer insight into modified gravity theories. Factors like the graviton mass, the frictional term (due to the running of the effective Planck mass), and the anisotropic source term contribute to deviations <cit.>. Precise measurements enable rigorous testing of these alternative theories. Employing bright sirens that emit both GW and EM radiation provides a robust avenue for conducting meticulous assessments of the GW propagation speed, the graviton mass, and the frictional term (γ(z); see Equation <ref>). The prompt measurement of the electromagnetic counterpart, occurring approximately 1.7 seconds after the BNS event GW170817, has enabled exceedingly precise constraints on the speed of gravitational wave propagation and the mass of the graviton <cit.>. Various model-dependent parametrizations of γ(z) exist in the literature; one such example is the (Ξ_0, n) parametrization <cit.>, whose detectability has been studied for LVK <cit.> and LISA <cit.>. This parametrization depends on two positive parameters, denoted as Ξ_0 and n. When Ξ_0 is set to 1, it corresponds to GR. There exist several models of modified gravity, each providing predictions for the frictional term in various scenarios. One prominent category is scalar-tensor theories. These theories encompass a scalar degree of freedom, the dynamics of which play a pivotal role in the evolution of the universe. Among the most well-known scalar-tensor theories of gravity are the Brans–Dicke theory <cit.>, f(R) gravity <cit.> and covariant Galileon models <cit.>. These belong to the broader class known as Horndeski theories. Beyond Horndeski theories, we have Degenerate Higher Order Scalar-Tensor (DHOST) theories <cit.>. These have been constructed and represent, thus far, the most general scalar-tensor theories that propagate a single scalar degree of freedom alongside the helicity-2 mode of a massless graviton. Several other well-known modified gravity theories exist in the literature, such as f(Q) gravity, f(T) gravity <cit.>, bigravity <cit.>, gravity in some extra dimensions <cit.>, and so on. In addition, there exist other parametrizations <cit.>, most notably the polynomial-exponential and exponential parametrizations. These models deviate from one another <cit.> and from the power-law form in ways that cannot be captured by Ξ_0 and n. It is worth mentioning that for this particular parametrization, the fitting formula for γ(z) becomes less accurate at both low and high redshifts in comparison to some of the models (see for example <cit.>). Additionally, beyond this limited range of models, the integrated effect of γ(z) may differ from a simple power-law form in reality. As a result, it is important to be able to test GR in a model-independent way and measure the integrated effect of γ(z) directly from the data as a function of redshift. To address these issues, we introduce a novel model-independent function, denoted as ℱ(z), defined in Equation <ref>. This function ℱ(z) is specifically designed to account for any deviation from GR. It quantifies the disparity between the distances d_l^GW(z) and d_l^EM(z); in standard GR, ℱ(z)=1.
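To illustrate why a free function is useful here, the short sketch below compares the distance ratio generated by an oscillatory toy γ(z) with the best-matching (Ξ_0, n) power-law form, using the propagation relation d_l^GW(z) = exp(−∫_0^z dz′ γ(z′)/(1+z′)) d_l^EM(z) introduced in the next section. Both γ histories and all numbers are purely illustrative:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

# F(z) = exp(-int_0^z gamma(z')/(1+z') dz') for an oscillatory toy gamma(z),
# compared against the (Xi_0, n) power-law template.
z = np.linspace(0.0, 3.0, 600)
gamma = 0.1 * np.sin(2.0 * np.pi * z)           # toy modified-gravity history
F = np.exp(-cumulative_trapezoid(gamma / (1 + z), z, initial=0.0))

def xi_form(z, xi0, n):                         # Xi_0 + (1 - Xi_0)/(1+z)^n
    return xi0 + (1.0 - xi0) / (1.0 + z)**n

popt, _ = curve_fit(xi_form, z, F, p0=(0.9, 2.0))
resid = np.max(np.abs(F - xi_form(z, *popt)))
print(f"best-fit (Xi_0, n) = ({popt[0]:.3f}, {popt[1]:.2f}); max residual = {resid:.3f}")
```

The non-zero residual of the monotonic power-law fit against the oscillatory ℱ(z) is precisely the kind of feature a model-independent reconstruction can retain.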
Figure <ref> highlights the enhanced precision in measuring the redshift-dependent variation of the effective Planck mass (ℱ(z)) for NSBH and BNS systems. Notably, the precision achieved is approximately 7.9% for BNS systems at redshift 0.075 and 10% for NSBH systems at redshift 0.225 over a 5-year observation period from the LVK network of detectors. Furthermore, employing CE&ET for just 1 year yields remarkable precision, reaching about 1.62% for BNS systems and 2% for NSBH systems at redshift 0.5, assuming a fixed Hubble constant. However, when considering a varying Hubble constant, the error bar increases to approximately 2 to 2.2 times that of a fixed Hubble constant. Figure <ref> also emphasizes that the measurement of ℱ(z) is particularly improved in the redshift range of 0.2 to 1.0. This improvement is attributed to two main factors: the signal-to-noise ratio (SNR) and the number of events. For a more detailed analysis, refer to Section <ref>. In the (Ξ_0, n) parametrization of the frictional term, Ξ_0 is set to 1 as the fiducial value. Notably, the parameter n, which controls the redshift evolution, is undefined at this point, since the ratio becomes independent of n. Measuring n thus becomes challenging as Ξ_0 nears 1 <cit.>. A model-independent treatment, in contrast to parametric forms, allows the redshift dependence of ℱ(z) to be reconstructed even around the GR value. The paper is structured as follows: In Section <ref>, we delve into the propagation of GWs beyond GR. Subsequently, we present our proposed model-independent, data-driven reconstruction of the variation of the effective Planck mass (the frictional term). In Section <ref>, we shift our attention to the astrophysical population of GW sources and explore their detection using both current and future detectors. Section <ref> is dedicated to a comprehensive discussion of the model-independent measurement of the BAO scale from the galaxy power spectrum. In Section <ref>, we provide a detailed exposition of the formalism underpinning our work. Section <ref> presents the main findings of our methodology, while Section <ref> concludes with a summary of our key results and a discussion of future prospects. § NON-PARAMETRIC RECONSTRUCTION OF DEVIATION FROM GENERAL RELATIVITY: FORMALISM The propagation of GWs in the fabric of spacetime, as described by GR, can be mathematically expressed as follows: h_(+,×)^'' + 2ℋh_(+,×)^' + c^2k^2h_(+,×) = 0, where h_(+,×) is the GW strain of the plus (+) and cross (×) polarizations, the prime denotes a derivative with respect to conformal time (η), and ℋ represents the Hubble parameter in comoving coordinates. We utilize this equation as a starting point to examine the propagation of GWs and evaluate the validity of GR. Modified theories of gravity generalize this equation as follows: h_(+,×)^'' + 2(1-γ(z))ℋh_(+,×)^' + (c_GW^2k^2+m_GW^2a^2)h_(+,×) = a^2Π_(+,×). In this formulation, several additional parameters come into play: the frictional term γ(z), which modifies the damping of the wave amplitude; the GW propagation speed c_GW; the scale factor a; the graviton mass m_GW; and the anisotropic stress term Π_(+,×). It is noteworthy that, within the framework of GR, all these additional parameters assume a fiducial value of zero, except for c_GW, which corresponds to the speed of light (c).
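As a quick numerical illustration of the friction term's effect, the sketch below integrates the modified equation (with m_GW = 0 and Π_(+,×) = 0) on an assumed matter-dominated toy background a ∝ η², where ℋ = 2/η. The wavenumber, γ value, and time span are arbitrary choices for the demonstration, not quantities from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of h'' + 2(1 - gamma) * Hc * h' + k^2 h = 0
# with Hc = 2/eta (matter-dominated toy background, a ~ eta^2).
k = 50.0

def rhs(eta, y, gamma):
    h, hp = y
    Hc = 2.0 / eta                        # conformal Hubble rate
    return [hp, -2.0 * (1.0 - gamma) * Hc * hp - k**2 * h]

sols = {g: solve_ivp(rhs, (1.0, 20.0), [1.0, 0.0], args=(g,), max_step=1e-3)
        for g in (0.0, 0.2)}

# WKB envelope: |h| ~ a^{-(1-gamma)} = eta^{-2(1-gamma)}, so the ratio of
# the two envelopes should grow roughly like eta^{2*gamma} (~20^0.4 ~ 3.3).
amp = {g: np.max(np.abs(s.y[0][-2000:])) for g, s in sols.items()}
print("numerical amplitude ratio:", amp[0.2] / amp[0.0])
print("rough WKB expectation    :", 20.0 ** 0.4)
```

A positive γ weakens the friction, so the wave amplitude falls more slowly than the GR 1/a scaling — which is exactly why d_l^GW inferred from the strain differs from d_l^EM.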
In the context of testing GR through propagation, it is essential to scrutinize the additional parameters and ascertain whether they deviate from the values predicted by GR. Recent measurements, particularly from the GW170817 event, have provided stringent constraints, indicating that the graviton mass (m_GW) is consistent with zero and that the speed of GW propagation (c_GW) equals the speed of light (c) to high precision <cit.>. A modification of the anisotropic stress term (Π_+,×) affects the phase of the binary waveforms; the recent BH–BH observations, in particular of GW150914 and GW151226, have set some limits on such modifications, although for the moment not very stringent ones <cit.>. The presence of the frictional term γ(z) changes the amplitude of the GW signal received from a source at cosmological distance. This is particularly interesting because it implies that the luminosity distance measured with standard sirens is in principle different from that measured with standard candles or other electromagnetic probes such as the CMB or BAO, and this could provide a "smoking gun" signature of modified gravity. As a consequence, the GW luminosity distance d_l^GW is related to the electromagnetic-wave-based luminosity distance d_l^EM by an exponential factor, where the exponent involves integrating γ(z')/(1+z') with respect to z' <cit.>. Consequently, the expression becomes d_l^GW(z) = exp(-∫ dz' γ(z')/(1+z')) d_l^EM(z). In our investigation, we focus on tensor perturbations, where the impact is encoded in the non-trivial function d_l^GW(z)/d_l^EM(z). In this study, we propose a non-parametric reconstruction of the deviation from GR as d_l^GW(z)=ℱ(z)d_l^EM(z). The term ℱ(z) can capture any deviation from GR as a function of redshift. The reconstruction of this quantity from observations of d_l^GW(z) and d_l^EM(z) can capture any modified gravity model which predicts a modification in the GW propagation <cit.>. Alternatively, a parametric form using Ξ_0 and n, d_l^GW(z)/d_l^EM(z)=Ξ_0+(1-Ξ_0)/(1+z)^n, is also used to capture all those models which predict a power-law modification with redshift of the GW propagation effect <cit.>. In the realm of binary systems, GWs offer a unique method for determining the luminosity distance, denoted as d_l^GW. This distance measurement is derived from GW events. While the luminosity distance can also be determined using the standard ΛCDM cosmology, such an approach is inherently model-dependent. To obtain a model-independent, data-driven inference of luminosity distance, we utilize EM luminosity distance calculations derived from various length scales. Data-driven inference of luminosity distance: A data-driven approach to infer the luminosity distance can be made by using the BAO angular peak position (θ_ BAO) inferred from large-scale structure observations <cit.>. The position of the BAO peak in the galaxy two-point correlation function can be inferred directly from observation. The angular scales are related to the comoving sound horizon until the redshift of the drag epoch (z_d ∼ 1020) as r_s=∫_z_d^∞ dz c_s(z)/[H_0√(Ω_m(1+z)^3+Ω_Λ + Ω_r(1+z)^4)]. Combining these two and using the distance duality relation for EM probes, one can write the above relation as <cit.> ℱ(z)=d_l^GW(z)θ_BAO(z)/[(1+z)r_s]. The comoving sound horizon r_s at the drag epoch can be related to the comoving sound horizon r_* at the redshift of recombination (z_*= 1090) by r_s ≈ 1.02r_*.
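Putting the pieces together numerically, the sketch below evaluates the sound-horizon integral and then ℱ(z) from mock measurements of the three length scales. The cosmological parameters and the simple baryon-loaded sound-speed model are assumptions of this illustration, and the mock event numbers are chosen to be GR-consistent; none of these are the paper's fitted values:

```python
import numpy as np
from scipy.integrate import quad

# r_s = int_{z_d}^{inf} c_s(z) dz / H(z), with
# H(z) = H0 * sqrt(Om (1+z)^3 + OL + Or (1+z)^4).
c = 299792.458                        # km/s
H0, Om, Or = 67.4, 0.315, 9.0e-5
OL = 1.0 - Om - Or
Rb0 = 3.04e4 * 0.0224                 # baryon loading 3rho_b/(4rho_gamma) at z=0

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + OL + Or * (1 + z)**4)

def cs(z):                            # simplified sound speed of the plasma
    return c / np.sqrt(3.0 * (1.0 + Rb0 / (1.0 + z)))

r_s, _ = quad(lambda z: cs(z) / H(z), 1020.0, np.inf)
print(f"r_s ~ {r_s:.0f} Mpc")         # roughly the ~147 Mpc scale quoted later

# Mock bright-siren event at z = 0.3:
z, dl_gw, theta_bao = 0.3, 1604.0, np.deg2rad(6.83)   # Mpc, radians
F = dl_gw * theta_bao / ((1.0 + z) * r_s)
print(f"F(z={z}) ~ {F:.2f}")          # ~1 if GR and the fiducial cosmology hold
```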
The quantity r_* is inferred from CMB observations, using the position of the first peak in the CMB temperature power spectrum, θ_*= r_*/[(1+z_*)D^EM_A(z_*)] <cit.>. In the above equation, d_l^GW(z) is measured from GW sources, θ_ BAO is measured from the galaxy two-point correlation function, and r_s is inferred from the CMB temperature fluctuations. The redshift of the GW source can be inferred from the EM counterpart for a bright standard siren. As a result, the product of all these observable quantities will lead to an identity value at all redshifts if GR and the fiducial ΛCDM model of cosmology are the correct description. However, if there is any departure from these models, then we can measure a deviation of ℱ(z) from unity with redshift. It is important to note that the value of r_s depends only on the physics above z_d ≈ 1020. Though it depends on the model of cosmology, it enters the low-redshift (z<2) analysis as a single redshift-independent constant. So, even if there is an inaccuracy in the inference of the true r_s, the value of ℱ(z) will only change by an overall normalization, and the reconstructed redshift evolution of ℱ(z) will remain the same. In one of the later sections, we will show results for both cases, (i) only ℱ(z) and (ii) ℱ(z) and H_0 together. The second case makes it possible to marginalise over any inaccurate inference of the sound horizon r_s due to an incorrect inference of the Hubble constant. Several alternatives to GR predict an acceleration history of the Universe that differs in nature from the ΛCDM prediction. As a result, this data-driven approach makes it possible to explore both deviations from GR and departures from the w=-1 equation of state of dark energy. Different observational probes were previously combined <cit.> for a parametric form of the deviation from GR (using Ξ_0 and n), valid for a class of models. However, the non-parametric form using ℱ(z) proposed in this analysis can make a redshift-dependent reconstruction of any deviation from GR using multi-messenger observations. The redshift can be deduced through the EM counterparts associated with these bright sirens. Ongoing and upcoming missions dedicated to identifying EM counterparts include DESI <cit.>, Vera Rubin <cit.>, Fermi <cit.>, Swift <cit.>, HST <cit.>, the Roman space telescope <cit.>, Chandra <cit.>, VLA <cit.>, ZTF <cit.>, and Astrosat <cit.>. A comprehensive article on the required multi-messenger observations can be found in <cit.>. For the measurement of the BAO scale, notable ongoing and upcoming spectroscopic surveys include eBOSS <cit.>, DESI <cit.>, Vera Rubin <cit.>, Euclid <cit.>, and SPHEREx <cit.>. CMB measurements from ongoing and upcoming missions include ACTPol <cit.>, Simons Observatory <cit.>, SPT-3G <cit.> and CMB-S4 <cit.>. § GW MOCK SAMPLES FOR LVK, COSMIC EXPLORER, AND EINSTEIN TELESCOPE §.§ Modelling of the astrophysical population of GW sources To accurately assess deviations from the GR model, it is crucial to rely on realistic GW events. The total number of GW events is contingent upon the merger rates and the mass population of the GW sources. We explain below the astrophysical models used in the analysis. In this study we use a delay time model of the binary mergers <cit.>. In the delay time model, the merger rate is described in terms of the delay time distribution, denoted as t_d. The delay time refers to the elapsed time between the formation of the progenitor stars and the actual merging of the resulting compact objects.
It is important to note that the time delay is not uniform across all binaries but instead follows a specific distribution. This distribution function accounts for the variations in the delay time and is defined as follows: p_t(t_d|t_d^min,t_d^max,d) ∝ (t_d)^-d for t_d^min<t_d<t_d^max, and 0 otherwise. The delay time is given by t_d=t_f-t_m, where t_f and t_m are the lookback times of formation and merger, respectively <cit.>. The merger rate at redshift z can then be defined as R_TD(z)=R_0 [∫_z^∞ p_t(t_d|t_d^min,t_d^max,d) R_SFR(z_f) (dt/dz_f) dz_f] / [∫_0^∞ p_t(t_d|t_d^min,t_d^max,d) R_SFR(z_f) (dt/dz_f) dz_f]. The parameter R_0 represents the local merger rate, indicating the frequency of mergers at redshift z=0. According to the study in <cit.>, the estimated values of R_0 for BNS systems vary between 10 Gpc^-3 yr^-1 and 1700 Gpc^-3 yr^-1. For NSBH systems, the R_0 values range from 7.8 Gpc^-3 yr^-1 to 140 Gpc^-3 yr^-1. In our study, we assume a standard local merger rate of R_0 = 20 Gpc^-3 yr^-1 for NSBH systems. For BNS systems, we consider four different scenarios with R_0 values of 100, 200, 300, and 500 Gpc^-3 yr^-1, respectively. This methodology allows us to examine the effects of different local merger rates on our results. The numerator of the expression involves the integration over redshift z_f from z to infinity, where p_t(t_d|t_d^min,t_d^max,d) is the delay time distribution, R_SFR(z_f) is the star formation rate, and dt/dz_f is the Jacobian of the transformation. The star formation rate R_SFR(z) is taken from the <cit.> star formation rate. The total number of compact binary coalescing events per unit redshift is estimated as dN_GW/dz = [R_ TD(z)/(1+z)] (dV_c/dz) T_obs, where T_obs indicates the total observation time, dV_c/dz corresponds to the comoving volume element, and R_TD(z) denotes the merger rate <cit.>. We consider the delay time merger rate with a minimum delay time t_d^min = 500 Myr and a power-law exponent of d=1. However, not all of these events can be detected, owing to the limited sensitivity of current detectors. To determine which events are detectable, the calculation of the matched-filtering signal-to-noise ratio (SNR) plays a crucial role. The SNR serves as a measure of the strength of the gravitational wave signal relative to the background noise. Only those events with a matched-filtering SNR greater than or equal to a predetermined threshold SNR (ρ_ TH) can be reliably detected <cit.>. For a GW emitted by an optimally oriented binary system, the optimized SNR, denoted as ρ, is defined as follows <cit.>: ρ^2 ≡ 4∫_f_min^f_max df |h(f)|^2/S_n(f). Here, S_n(f) represents the power spectral density of the detector. The function h(f) corresponds to the gravitational wave strain in the restricted post-Newtonian approximation and is defined for the plus (+) and cross (×) polarizations as <cit.>: h(f)_{+, ×}=√(5η/24) [(GM_c)^5/6/(d_L π^2/3 c^3/2)] f^-7/6 e^ιΨ(f) ℐ_{+, ×}. In this expression, the symbol η represents the symmetric mass ratio. The term M_c signifies the chirp mass of the system. The variable d_L denotes the luminosity distance. The constant c represents the speed of light in a vacuum. ℐ_+= (1+cos^2i)/2 and ℐ_×= cos i depend on the inclination angle i. Finally, Ψ(f) stands for the phase of the waveform.
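A compact numerical sketch of the delay-time convolution above is given here. The Madau–Dickinson-style star-formation history, the Planck18 background cosmology, and the delay-time bounds are illustrative assumptions, and the normalization integral of the merger-rate equation is omitted, so the output is an unnormalized shape:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18 as cosmo

# Unnormalized delay-time merger rate R_TD(z): convolve p(t_d) ~ t_d^-1
# with an assumed Madau-Dickinson-like star-formation history R_SFR(z_f).
def R_SFR(z):
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def p_td(td, td_min=0.5, td_max=13.0, d=1.0):        # delay times in Gyr
    mask = (td > td_min) & (td < td_max)
    return np.where(mask, np.clip(td, td_min, None)**(-d), 0.0)

zf = np.linspace(0.0, 10.0, 2001)                    # formation-redshift grid
tL = cosmo.lookback_time(zf).to_value(u.Gyr)         # lookback times t_L(z_f)
dt_dz = 1.0 / ((1 + zf) * cosmo.H(zf).to_value(u.Gyr**-1))   # |dt/dz_f|

def R_TD(z):
    td = tL - np.interp(z, zf, tL)                   # t_d = t_L(z_f) - t_L(z)
    w = p_td(td) * R_SFR(zf) * dt_dz * (zf >= z)
    return np.trapz(w, zf)

print([round(R_TD(z) / R_TD(0.0), 2) for z in (0.0, 0.5, 1.0, 2.0)])
```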
However, the signal detected by a GW detector, h_ det, is a complex interplay of several variables, including the detector antenna functions (F_+, F_×), and can be expressed as h_ det=F_+h_++F_×h_×, where F_+ and F_× are the antenna functions defined as follows <cit.>: F_+=(1/2)(1+cos^2θ)cos2ϕ cos2ψ-cosθ sin2ϕ sin2ψ, F_×=(1/2)(1+cos^2θ)cos2ϕ sin2ψ+cosθ sin2ϕ cos2ψ, where θ and ϕ define the location of the source in the sky, and ψ is related to the orientation of the binary system with respect to the detector. The two polarizations are h_+=h(f)(1+cos^2i)/2 and h_×=h(f)cos i, where i denotes the inclination angle. Consequently, the matched-filtering SNR (ρ) takes the form <cit.> ρ = (Θ/4)[4∫_f_min^f_max h(f)^2/S_n(f) df]^1/2, where Θ^2 ≡ 4 (F_+^2(1+cos^2i)^2 + 4F_×^2cos^2i). Averaging over the inclination angles and sky positions of many binaries, <cit.> showed that Θ follows the distribution P_Θ(Θ) = 5Θ(4-Θ)^3/256 if 0<Θ<4, and 0 otherwise (see the short sampling sketch below). The mass populations used for BNSs and NSBHs are motivated by the recent results from the third catalog of GW sources by the LVK collaboration <cit.>. To model the primary mass distribution for black holes, we adopt a Power Law + Gaussian Peak model with smoothing, which combines a power-law distribution with a Gaussian component, encompassing masses ranging from 10M_⊙ to 20M_⊙. Conversely, we assume a uniform distribution of masses for the secondary component, the neutron star, falling within the range of 1M_⊙ to 2M_⊙. In this study, we employ two sets of GW detectors: one comprising LIGO <cit.>, Virgo <cit.> and KAGRA <cit.> (LVK), and the other consisting of Cosmic Explorer <cit.> and the Einstein Telescope <cit.> (CE&ET). For the LVK system, a threshold SNR of ρ_ Th =12 is chosen. For CE&ET, the threshold SNR is adjusted to enable the detection of events up to a redshift of 3 (corresponding to an SNR threshold of 25 and 55 for BNS and NSBH, respectively). A redshift bin size of Δ z=0.025 is adopted, and within each bin, the total number of events is computed using Equation <ref>. Events are generated based on distances inferred from redshifts, the parameter Θ, and object masses from their respective distributions. For NSBH systems, the LVK's 5-year operation is projected to yield approximately 30 detectable events (not all of which need have a detectable EM counterpart). However, with a one-year observation period, CE&ET are expected to identify around 600 events. In the case of BNS systems, the LVK's 5-year operation is anticipated to result in approximately 25, 39, and 60 detectable events for local merger rates (R_0) of 200, 300, and 500 Gpc^-3 yr^-1, respectively. On the other hand, CE&ET are expected to detect approximately 4800 events in a one-year observation period with R_0=100. In Figure <ref> and Figure <ref>, we present the projected total and detectable coalescing events involving BNS and NSBH systems within specific redshift intervals (Δ z). To obtain these estimates, we first determine the number of merging events for each redshift bin using Equation <ref>. For each event, we employ the inverse transform method to derive the primary mass, secondary mass, and Θ from their respective probability distributions. The primary mass is generated within the range of 10M_⊙ to 20M_⊙, the secondary mass within 1M_⊙ to 2M_⊙, and Θ within the interval 0 to 4. Incorporating redshift information for each event, we calculate the corresponding luminosity distance and SNR using Equation <ref> for a single detector.
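For concreteness, the inverse-transform step for the projection factor Θ can be sketched as follows; the grid size, sample count, and seed are arbitrary choices:

```python
import numpy as np

# Inverse-transform sampling of Theta from P(Theta) = 5*Theta*(4-Theta)^3/256
# on [0, 4], as used for the projection factor in the SNR.
theta = np.linspace(0.0, 4.0, 4001)
pdf = 5.0 * theta * (4.0 - theta)**3 / 256.0
cdf = np.cumsum(pdf) * (theta[1] - theta[0])
cdf /= cdf[-1]                       # guard against discretization error

rng = np.random.default_rng(42)
samples = np.interp(rng.uniform(size=100_000), cdf, theta)
print("sample mean ~", samples.mean())   # analytic mean is 4/3 ~ 1.333
```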
The total SNR, denoted as ρ_ total, is then determined by combining the individual detector SNRs using the formula ρ_ total=√(∑_i ρ_i^2). This detection capability is evaluated using both the CE&ET and LVK networks of GW detectors over a defined observation period, T_obs. The depicted curve in the graph illustrates the distribution of these events up to a redshift of z = 4. This representation offers valuable insights into the temporal occurrence of these mergers throughout cosmic history and the likelihood of their successful detection using current and planned future instrumentation. §.§ GW Source Parameter Estimation using Bilby We initiate the parameter estimation process for these detectable sources by employing the <cit.> package, which yields realistic posterior distributions on the GW luminosity distance marginalized over the other source parameters. The masses of the GW sources and the number of sources with redshift are drawn as described in the last section. For the remaining source parameters, such as the inclination angle (i), polarization angle (ψ), GW phase (ϕ), right ascension (RA), and declination (Dec), we sample uniformly, considering a non-spinning system. With these injected parameters, we proceed to generate a GW signal using the <cit.> waveform model. This model incorporates higher-order modes that can alleviate the degeneracy between the luminosity distance (d_L) and the inclination angle (i) for unequal-mass sources. By constraining the priors of all other parameters to delta functions [We have assumed here that the RA and Dec are inferred from the EM counterpart.], we generate posterior distributions for m_1, m_2, d_L and i. Among these, the posterior of d_L is particularly crucial for our study, as it will be used for the inference of ℱ(z). Additionally, the detection probability of the electromagnetic counterpart is contingent on the value of the inclination i <cit.>. § BARYON ACOUSTIC OSCILLATION SCALE FROM GALAXY POWER SPECTRUM The BAO arises from the intricate interplay between radiation pressure and gravity during the early stages of the Universe. Previous works on this and the corresponding measurements can be found in <cit.>. As photons and baryons decouple, a specific length known as the sound horizon at the drag epoch (z_d), represented by r_s, leaves a distinct signature on the distribution of galaxies and the power spectrum of CMB anisotropies <cit.>. By examining the two-point angular correlation function (2PACF) of galaxy catalogs, denoted by w(θ), one can identify the BAO signature, which manifests as a prominent bump-like feature. The BAO scale can be derived by fitting the angular correlation function w(θ) with a model defined as a combination of a power-law and a Gaussian component as <cit.> w(θ, z)=a_1+a_2θ^k+a_3exp(-(θ-θ_FIT(z))^2/σ_θ^2), where the coefficients a_i and the power-law index k are the parameters of the model, σ_θ denotes the width of the BAO feature, and θ_FIT corresponds to the best-fit value of the angular BAO scale (a short fitting sketch follows below).
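A minimal illustration of this template fit is sketched below with mock data and made-up parameter values; the true bump position is placed near the θ ∼ 16° scale discussed in the following paragraph:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the 2PACF with the power-law + Gaussian template of the equation above;
# the mock data are purely illustrative.
def w_model(theta, a1, a2, a3, k, theta_fit, sigma):
    return a1 + a2 * theta**k + a3 * np.exp(-(theta - theta_fit)**2 / sigma**2)

theta = np.linspace(4.0, 30.0, 60)                 # degrees
truth = (0.002, 0.5, 0.01, -1.2, 16.0, 2.0)
rng = np.random.default_rng(1)
w_obs = w_model(theta, *truth) + rng.normal(0.0, 5e-4, theta.size)

p0 = (0.0, 0.3, 0.02, -1.0, 15.0, 3.0)             # rough starting guess
popt, pcov = curve_fit(w_model, theta, w_obs, p0=p0)
print(f"theta_FIT = {popt[4]:.2f} +/- {np.sqrt(pcov[4, 4]):.2f} deg")
```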
This signature serves as a robust standard ruler, enabling independent measurements of the angular diameter distance, denoted as d_A(z) <cit.>. Theoretically, one can model the 2PACF w(θ) as w(θ)=∑_l≥ 0[(2l+1)/(4π)]P_l(cos(θ))C_l, where P_l is the Legendre polynomial of the l^th order and C_l the angular power spectrum, which can be expressed in terms of the three-dimensional matter power spectrum 𝒫(k) as follows <cit.>: C_l=(1/2π^2)∫4π k^2𝒫(k)ψ_l^2(k)e^-k^2/k_eff^2, where the quantity ψ_l(k) is defined as ψ_l(k)=∫ dzϕ(z)j_l(kr(z)), with ϕ(z) the galaxy selection function, j_l the spherical Bessel function of the l^th order, and r(z) the comoving distance (a direct numerical evaluation of this Legendre sum is sketched below). To improve the convergence of the integral for the angular power spectrum, a damping factor e^(-k^2/k_eff^2) is introduced <cit.>. Throughout the calculations, we adopt a fixed value of 1/k_eff = 3 Mpc/h, which has no significant impact on the angular power spectrum for the scales of interest. In this study, we calculate the matter power spectrum from the <cit.> module, and the selection function in redshift is formulated as a normalized Gaussian function. The standard deviation of this Gaussian function is set to be 5% of the mean value. The anticipated photo-z error for the Vera Rubin Observatory is described by the expression 0.03(1+z), as referenced in <cit.>. Figure <ref> visually demonstrates how the product θ^2w(θ) changes with angular separation at a specific redshift, z=0.125. The prominent peak at θ∼ 16.0 highlights the characteristic scale associated with BAO, which is crucial for our study. The covariance of w(θ), denoted as Cov_θθ', is modelled as Cov_θθ'=(2/f_sky)∑_l≥ 0[(2l+1)/(4π)^2]P_l(cos(θ)) P_l(cos(θ'))(C_l+1/n)^2, where 1/n is the shot noise, which is related to the number density of galaxies n or, more precisely, the number of objects per steradian, and f_sky represents the fraction of the sky that is covered by the survey <cit.>. Figure <ref> illustrates how the covariance matrix, derived from the linear matter power spectrum, varies with the angular separations at a specific redshift. The increase in the covariance matrix as both θ and θ' rise suggests a correlation between these angular scales. Figure <ref> provides a comprehensive view of how the BAO scale and its error (calculated using Equation <ref>) evolve with redshift, emphasizing the impact of photo-z inaccuracies. The consistent relative error observed across measurements serves as a key finding, shedding light on the reliability of BAO scale determinations in the given redshift range. In future work on the application of this technique to data, one can estimate the covariance matrix from simulations. Several surveys are set to measure the BAO scale with impressive accuracy over a wide redshift range. Among these, DESI <cit.> stands out as a groundbreaking 5-year ground-based experiment, specifically designed to investigate BAO and the changes in cosmic structures as seen through redshift-space distortions. Covering a vast area of 14,000 square degrees in the sky (f_sky = 0.3), DESI aims to achieve extremely precise measurements of the BAO feature. The goal is to surpass a precision of 0.5%, focusing on the redshift intervals 0.0 < z < 1.1, 1.1 < z < 1.9, and 1.9 < z < 3.7 <cit.>.
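Returning briefly to the 2PACF machinery above: the Legendre sum for w(θ) is straightforward to evaluate once a set of C_l is available. The damped power-law spectrum in the sketch below is a mock input for illustration only:

```python
import numpy as np
from numpy.polynomial.legendre import legval

# w(theta) = sum_l (2l+1)/(4pi) * P_l(cos theta) * C_l,
# evaluated via a Legendre series with a mock damped power-law C_l.
lmax = 500
ell = np.arange(lmax + 1)
Cl = 1e-3 * (ell + 1.0)**(-1.5) * np.exp(-(ell / 300.0)**2)

theta = np.deg2rad(np.linspace(1.0, 30.0, 100))
coeffs = (2 * ell + 1) / (4 * np.pi) * Cl      # per-l weights of the sum
w = legval(np.cos(theta), coeffs)
print(w[:3])
```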
Additionally, Euclid <cit.> and the Vera Rubin Observatory <cit.>, covering a vast sky area of 18,000 square degrees, will be able to make precise measurements of BAO up to high redshift. § FORECAST FOR RECONSTRUCTING ℱ WITH REDSHIFT §.§ Case with the fixed Hubble constant H_0 The intricate connection between the EM-wave luminosity distance at a specific redshift (z) and key cosmological parameters, including the BAO scale (θ_BAO) and the sound horizon (r_s), is succinctly captured by the equation ℱ(z)=d_l^ GW(z)θ_ BAO(z)/[(1+z)r_s]. This equation serves as the focal point for the frictional component under consideration. In the context of ℱ(z), it establishes a connection among three distinct length scales. First, the luminosity distance of detectable GW events, which is determined using . Second, the BAO scale, derived from the 2PACF using Equation <ref>; this method is not bound to any particular survey, affording us the flexibility to apply our methodology across a range of surveys without being restricted to a specific one. Third, the sound horizon, measured by analyzing the acoustic oscillations in the CMB radiation using missions such as WMAP <cit.>, Planck <cit.>, ACTPol <cit.>, SPT-3G <cit.>, and CMB-S4 <cit.>. In particular, the first peak observed in the angular power spectrum of the CMB corresponds to the scale of the sound horizon during the process of recombination. The measured sound horizon value at the drag epoch is found to be 147.09±0.26 Mpc, indicating a high level of precision from Planck <cit.>. We utilize a hierarchical Bayesian framework to calculate the posterior distribution of ℱ(z) as a function of redshift for a total of n_GW GW sources. The equation representing the posterior distribution on ℱ(z) for bright sirens is given as <cit.> P(ℱ(z)|{d^GW},{z}) ∝Π(ℱ(z))∏_i=1^n_GW∭ dr_sP(r_s) dd_l^GW^i ×dθ_BAO^i P(θ_BAO^i) ℒ(d_l^GW^i|ℱ(z^i),θ_BAO^i,r_s,z^i). In this equation, P(ℱ(z)|{d^GW},{z}) denotes the posterior probability density function of ℱ(z) given the GW source data {d^GW} and the spectroscopic redshifts {z} from the EM counterparts of these sources, for i = 1, …, n_GW events. The BAO angular scale θ_BAO is measured from the galaxy survey at the redshift z of each GW source with an EM counterpart, and is marginalized over. The term ℒ(d_l^GW^i|ℱ(z^i), θ_BAO^i, r_s, z^i) corresponds to the likelihood function (a schematic Monte-Carlo version of this marginalization is sketched below). P(r_s) and P(θ_BAO^i) denote the prior probability distributions for the sound horizon (r_s) and the BAO scale, respectively. The term Π(ℱ(z)) represents the prior distribution for ℱ(z), which encapsulates our knowledge or assumptions about the frictional term before incorporating the measured data. We adopt a flat prior on the frictional term ℱ(z) from 0.1 to 2. For this investigation, we make specific assumptions regarding the probability distribution of the redshift (z) and the prior on the frictional term ℱ(z). In this analysis, we consider bright sirens only, so we assume that the host galaxy redshift of the GW sources is measured spectroscopically; the redshift posterior of each source is thus sharply peaked at a particular value. In Figure <ref>, we present the measurement of ℱ(z) with a fixed value of H_0 for both the CE&ET and the LVK system, focusing on a single event. This plot illustrates a notable trend where the measurement error increases with higher redshifts.
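Schematically, the marginalization over r_s, θ_BAO, and the GW distance in the posterior above can be emulated with Gaussian mock measurements. The r_s value and its uncertainty below are the Planck numbers quoted earlier; the distance and BAO errors are made-up examples, not the paper's forecasts:

```python
import numpy as np

# Monte-Carlo sketch of a single-event posterior on F(z): draw from
# Gaussian-approximated measurement posteriors and histogram the implied F.
rng = np.random.default_rng(7)
z = 0.3
n = 200_000
dl  = rng.normal(1604.0, 160.0, n)                         # d_l^GW [Mpc], ~10% error
thb = rng.normal(np.deg2rad(6.8), np.deg2rad(0.07), n)     # theta_BAO [rad]
rs  = rng.normal(147.09, 0.26, n)                          # r_s [Mpc], Planck value

F = dl * thb / ((1.0 + z) * rs)                            # implied F(z) samples
lo, med, hi = np.percentile(F, [5, 50, 95])
print(f"F(z={z}) = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) [90% interval]")
```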
The plot also shows that, at the same redshift, the error on ℱ(z) is larger for BNS systems than for NSBH systems, owing to the lower SNR of BNS events. §.§ Case with varying the Hubble constant H_0 In the preceding sections, our analysis primarily focused on evaluating the frictional term ℱ(z) in Equation <ref>, assuming a fixed Hubble constant (H_0). In this section, we jointly infer both ℱ(z) and H_0. This extension enables the simultaneous estimation of the frictional term ℱ(z) and H_0, which enters through the sound horizon measurement, remains constant across redshift, and acts as an overall normalization of the ℱ(z) measurements. Employing a hierarchical Bayesian analysis, our unified framework estimates these two parameters by incorporating diverse observational datasets, including GW measurements, CMB measurements, and large-scale structure surveys, as discussed in previous sections. The hierarchical organization of these datasets ensures robust integration, effectively handling their individual uncertainties and systematics. The resulting correlated constraints contribute to our understanding of cosmological phenomena, addressing potential discrepancies between datasets and revealing connections among fundamental cosmological parameters. The equation representing the joint posterior distribution on ℱ(z) and H_0 for these sources is given as <cit.> P(ℱ(z),H_0|{d^GW},{z}) ∝Π(ℱ(z))Π(H_0)∏_i=1^n_GW∭ dr_sP(r_s) dd_l^GW^idθ_BAO^iP(θ_BAO^i) ℒ(d_l^GW^i|ℱ(z^i),H_0,θ_BAO^i,r_s,z^i). In this equation, P(ℱ(z), H_0|{d^GW}, {z}) denotes the joint posterior probability density function of ℱ(z) and H_0. The term ℒ(d_l^GW^i|ℱ(z^i), H_0, θ_BAO^i, r_s, z^i) corresponds to the likelihood function. The term Π(H_0) denotes the prior distribution for H_0, encapsulating our knowledge or assumptions regarding the Hubble constant before incorporating measured data. All other terms represent the same entities as defined in Section <ref>. In our hierarchical Bayesian framework, we adopt a flat prior assumption, indicating a lack of prior knowledge about the parameters, for both the frictional term (ℱ(z)) and the cosmic expansion rate (H_0). The parameter ranges are defined from 0.1 to 2.0 for ℱ(z) and from 40 km/s/Mpc to 100 km/s/Mpc for H_0. In Figure <ref>, we present measurements of ℱ(z) with varying values of H_0 for both the CE&ET and the LVK system, focusing on the total number of events within each redshift bin (indicated in parentheses along the upper x-axis of the plot). This plot reveals a discernible trend, indicating an increase in measurement errors with higher redshifts. Notably, due to the higher expected rate of BNS events compared to NSBH events, we can achieve a more accurate measurement of ℱ(z) for BNS events than for NSBH events. § RESULTS §.§ Reconstruction of ℱ(z) This study exclusively focuses on bright sirens, i.e., NSBH and BNS mergers with distinct EM counterparts <cit.>. These include short GRBs and high-energy gamma-ray bursts detected by satellites like Fermi <cit.> and Swift <cit.>. Additionally, kilonovae generate bright optical and infrared transients, offering insights into heavy element formation. Observatories like Vera Rubin <cit.>, HST <cit.>, and the upcoming Roman space telescope <cit.> are poised to capture these events.
Post-merger, afterglows spanning X-ray, UV, optical, and radio wavelengths are scrutinized with instruments such as Chandra <cit.>, VLA <cit.>, ZTF <cit.>, Astrosat <cit.>, Daksha <cit.> and GMRT <cit.>. Detecting EM counterparts of BNS and NSBH mergers poses challenges related to prompt collapse, ejecta mass, and orientation. The visibility of collimated jets is influenced by their opening angle, with wider angles increasing detectability. Distance, instrument sensitivity, and rapid event localization further complicate detection. In astrophysics, terms like "structured jets" and "off-axis viewing" describe these phenomena <cit.>. Coordinated efforts between GW and EM observatories are vital for insights into these cosmic collisions. Despite progress, challenges persist, involving proximity, orientation, and sensitivity considerations. Success relies on refining theoretical models, combining GW and EM observations, and technological advancements. Future endeavors aim to optimize multi-messenger observations, deepen theoretical comprehension, and improve the detectability of EM counterparts of NSBH mergers. In our study, we present cases for both the CE&ET and LVK systems. Specifically for CE&ET, we showcase outcomes up to redshift z = 2 for different numbers of detectable events. In the case of NSBH binaries, we consider a total of 30 events with a local merger rate (R_0) set at 20 Gpc^-3 yr^-1 in the LVK system. These events span redshifts from 0.125 to 0.425. Similarly, for BNS systems, we analyze approximately 25, 40, and 60 events with R_0 set to 200 Gpc^-3 yr^-1, 300 Gpc^-3 yr^-1, and 500 Gpc^-3 yr^-1, respectively. The redshifts of these events range from 0.025 to 0.225 over a 5-year observation period. In Figure <ref> and Figure <ref>, the number of detectable events as a function of redshift with different R_0 and T_obs is presented for the LVK and CE&ET systems. Figure <ref> and Figure <ref> provide a comprehensive comparison of ℱ(z) measurements with fixed H_0 and varying H_0 across different scenarios for both LVK and CE&ET. These scenarios include the detection of a single event, a quarter of the events, half of the events, and all of the events. The aim is to demonstrate how percent- to sub-percent-level precision can be achieved using future detectors. This comparative analysis sheds light on the impact of event detection rates on the precision of ℱ(z) measurements and underscores the potential of advanced detectors like CE&ET to provide increasingly accurate cosmological insights. In this section, we measure ℱ(z) with a constant H_0 that remains uniform across redshifts, serving as an overall normalization factor for the ℱ(z) measurements. Consequently, if an independent, precise measurement of ℱ(z) in our local universe is possible, it will allow us to normalize it to 1, eliminating the dependence on H_0. This normalization is particularly effective at low redshifts, where the cumulative effect of modified GW propagation has not had sufficient time to accumulate deviations from GR. The error associated with the frictional term approximately scales as the inverse of the square root of the number of GW sources, i.e., as 1/√(N_GW). This implies that as the number of GW sources increases, the error on the frictional term decreases, thereby improving the precision of the measurement.
However, this improvement reaches a saturation point, beyond which an increase in the number of GW sources does not lead to a significant enhancement in the measurement's accuracy. This saturation is primarily due to the error on the BAO scale, which acts as a limiting factor in the precision of the frictional term estimation. Therefore, to achieve further improvement in the estimation of the frictional term with redshift, a more precise measurement of the BAO scale is required. In essence, the precision of the frictional term estimation is a delicate balance between the number of GW sources and the accuracy of the BAO scale and GW luminosity distance measurements. Optimizing these factors can lead to significant advancements in our understanding of ℱ(z) and open unexplored territory in fundamental physics. On the GW side, the measurements of ℱ(z) depend solely on d_l^GW(z). Any improvement in the precision of d_l^GW(z) will markedly enhance the accuracy of ℱ(z), as d_l^GW(z) is the predominant source of error for ℱ(z). Given that d_l^GW(z) is degenerate with other GW parameters, most notably the inclination angle (i), measuring the inclination angle from alternative sources (such as EM counterparts) can substantially reduce the error in d_l^GW(z) and consequently enhance the precision of ℱ(z) <cit.>. For redshift inference from EM counterparts, the influence of peculiar velocity contamination is particularly significant at low redshifts (z<0.03) but is not important at high redshift <cit.>. As most of the detected sources are at high redshift, we have ignored this effect. Also, weak lensing of the GW sources can contaminate the luminosity distance by up to a few percent at redshifts above z=0.5 <cit.>. In comparison to the error bar on the luminosity distance, the weak lensing error bar is small. As most of the sources with better luminosity distance measurements are mainly at low redshift (z<0.5), we obtain the maximum constraints around this redshift. So, the weak lensing uncertainty is not a major contamination for these sources. Moreover, weak lensing can provide additional information to constrain the non-GR parameter space <cit.>, which we have not considered in this analysis. §.§ Reconstruction of ℱ(z) and H_0 By extending the hierarchical Bayesian framework used in the previous section to measure ℱ(z) with fixed H_0, we can jointly measure ℱ(z) and H_0. In our framework, we measure the EM luminosity distance from different length scales, namely the BAO scale, the sound horizon distance, and the redshift. In Figure <ref> and Figure <ref>, we offer a thorough comparison of ℱ(z) measurements with both fixed and varying H_0 across different scenarios for both the LVK and CE&ET systems. It is noteworthy that when allowing for variations in H_0, the errors in the measurements of ℱ(z) increase by a factor ranging from 2 to 2.2 compared to measurements with a fixed H_0. In Figure <ref>, we highlight the enhanced precision in measuring the redshift-dependent variation of the effective Planck mass (ℱ(z)) for NSBH and BNS systems. The figure emphasizes that the measurement of ℱ(z) is notably improved in the redshift range of 0.2 to 1.0. The precision of estimating ℱ(z) depends on accurate measurements of both the BAO scale and d_l^GW(z). Improved accuracy in the BAO scale measurements directly enhances the precision in estimating ℱ(z). On the GW side, accurate measurements of ℱ(z) rely solely on d_l^GW(z).
Enhancing the precision of d_l^GW(z) improves ℱ(z), as mentioned in Subsection <ref>. Peculiar velocity corrections and weak lensing contamination at low redshift, as described in Subsection <ref>, are likewise not taken care of in this study. § CONCLUSION AND DISCUSSION In this study, we present a model-independent approach to search for the frictional term, which can capture deviations from GR at any redshift. We propose a model-independent reconstruction of the frictional term as a function of redshift using a data-driven approach, amalgamating three length scales with redshift, namely the luminosity distance d_l^ GW(z) from GW sources, the angular BAO scale θ_ BAO(z) from galaxy surveys, and the sound horizon r_s from CMB observations. We show this provides a unique opportunity to scrutinize theories of gravity, specifically to test the presence of a frictional term. In the literature, various model-dependent searches for the frictional term have been conducted. These searches face challenges, especially near redshift z=0 and at large redshifts. Particularly noteworthy are the difficulties encountered when there is a potential loss of gravitons in extra dimensions. Furthermore, in the intermediate redshift range, the frictional term may display oscillatory behavior that cannot be adequately captured by these model-dependent searches. We demonstrate that by integrating BAO measurements from galaxies, sound horizon distances from the CMB power spectrum, redshift information from the EM counterpart, and luminosity distance measurements from GW sources, one can perform a comprehensive measurement of cosmological parameters. Specifically, we highlight the potential to jointly determine the Hubble constant (H_0) and the frictional term through this integrated analysis. We have demonstrated the feasibility of measuring the frictional term using ongoing detectors, such as LVK, across various redshifts. Additionally, we have explored the capabilities of upcoming detectors, specifically CE&ET, in quantifying the frictional term under various scenarios. Our analysis includes cases where only a fraction of the events have detectable EM counterparts, acknowledging the inherent difficulty in detecting EM counterparts. We demonstrate the feasibility of achieving a precision measurement of the frictional term, ranging from percent to sub-percent accuracy, as a function of cosmic redshift. This is achievable using data from LVK and, in the future, from CE&ET, despite the limitation of a finite number of bright sirens. The measurement of ℱ(z) is particularly improved in the redshift range of 0.2 to 1.0. This improvement is attributed to two main factors: the SNR and the number of events. The measurements from LVK can be further improved by including LIGO-India <cit.>[<https://dcc.ligo.org/LIGO-M1100296/public>]. In the (Ξ_0, n) parametrization of the frictional term, the fiducial value of Ξ_0 is set to 1. However, it is important to note that the parameter n is unconstrained at this fiducial value of Ξ_0=1. Consequently, measuring the value of n becomes increasingly challenging as Ξ_0 approaches 1 <cit.>. By utilizing 1000 bright sirens up to redshift 8, <cit.> demonstrated that Ξ_0 can be measured with an accuracy of 0.8% using the Einstein Telescope for a fixed value of n.
In our study, employing a model-independent parametrization, we found that with 1 year of observation using CE&ET, we can measure the frictional term with an accuracy of 1.62% and 2% for BNS and NSBH systems, respectively, at redshift 0.5. Additionally, we can reconstruct the frictional term as a function of redshift without relying on any specific parametrization, as illustrated in Figure <ref>. This error scales inversely with the square root of the observation time, i.e., as 1/√(T_obs/years). Our proposed technique extends to dark sirens, where redshift inference becomes necessary. In such cases, we leverage the three-dimensional clustering of GW sources with galaxies to infer redshift information <cit.>. Furthermore, this methodology holds promise for both bright and dark standard sirens in the mHz range, detectable by future space-based detectors like LISA. In this study, our exclusive focus is on bright sirens, and as a result, we exclude a prominent category of GW sources, binary black holes (BBHs), due to their unlikely association with EM counterparts. Additionally, there are challenges associated with detecting EM counterparts for BNS and NSBH systems <cit.>. One potential method to enhance the accuracy of frictional term measurements is to include dark sirens, which lack EM counterparts. In the future, the integration of LIGO-India and the LISA GW detector, in conjunction with projects such as eBOSS, DESI, Vera Rubin, and Euclid for BAO measurements, along with the application of this technique to dark sirens, holds the potential to substantially improve the precision of measuring the frictional term. § ACKNOWLEDGEMENTS The authors express their gratitude to Guillaume Dideron for reviewing the manuscript and providing useful comments as a part of the LIGO publication policy. This work is part of the , supported by TIFR and the Department of Atomic Energy, Government of India. The authors express gratitude to the computer cluster ofand the TIFR computer center HPC facility for computing resources. Special thanks to the LIGO-Virgo-KAGRA Scientific Collaboration for providing noise curves. LIGO is funded by the U.S. National Science Foundation (NSF), and Virgo is supported by the French CNRS, Italian INFN, and Dutch Nikhef, along with contributions from Polish and Hungarian institutes. This collaborative effort is backed by the NSF's LIGO Laboratory, a major facility fully funded by the National Science Foundation. The research leverages data and software from the Gravitational Wave Open Science Center, a service provided by LIGO Laboratory, the LIGO Scientific Collaboration, Virgo Collaboration, and KAGRA. Advanced LIGO's construction and operation receive support from STFC of the UK, the Max-Planck Society (MPS), and the State of Niedersachsen/Germany, with additional backing from the Australian Research Council. Virgo, affiliated with the European Gravitational Observatory (EGO), secures funding through contributions from various European institutions. Meanwhile, KAGRA's construction and operation are funded by MEXT, JSPS, NRF, MSIT, AS, and MoST. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation.
Our proposed technique extends to dark sirens, where redshift inference becomes necessary. In such cases, we leverage the three-dimensional clustering scale of GW sources with galaxies to infer redshift information <cit.>. Furthermore, this methodology holds promise for both bright and dark standard sirens in the mHz range, detectable by future space-based detectors like LISA. In this study, our exclusive focus is on bright sirens, and as a result, we exclude a prominent category of GW sources, binary black holes (BBHs), owing to their unlikely association with EM counterparts. Additionally, there are challenges associated with detecting EM counterparts for BNS and NSBH systems <cit.>. One potential method to enhance the accuracy of frictional term measurements is to include dark sirens, which lack EM counterparts. In the future, the integration of LIGO-India and the LISA GW detector, in conjunction with projects such as eBOSS, DESI, Vera Rubin, and Euclid for BAO measurements, along with the application of this technique to dark sirens, holds the potential to substantially improve the precision of measuring the frictional term. § ACKNOWLEDGEMENTS The authors express their gratitude to Guillaume Dideron for reviewing the manuscript and providing useful comments as part of the LIGO publication policy. This work is supported by TIFR and the Department of Atomic Energy, Government of India. The authors express gratitude to the TIFR computer center HPC facility for computing resources. Special thanks to the LIGO-Virgo-KAGRA Scientific Collaboration for providing noise curves. LIGO is funded by the U.S. National Science Foundation (NSF), and Virgo is supported by the French CNRS, Italian INFN, and Dutch Nikhef, along with contributions from Polish and Hungarian institutes. This collaborative effort is backed by the NSF's LIGO Laboratory, a major facility fully funded by the National Science Foundation. The research leverages data and software from the Gravitational Wave Open Science Center, a service provided by LIGO Laboratory, the LIGO Scientific Collaboration, Virgo Collaboration, and KAGRA. Advanced LIGO's construction and operation receive support from the STFC of the UK, the Max-Planck Society (MPS), and the State of Niedersachsen/Germany, with additional backing from the Australian Research Council. Virgo, affiliated with the European Gravitational Observatory (EGO), secures funding through contributions from various European institutions. Meanwhile, KAGRA's construction and operation are funded by MEXT, JSPS, NRF, MSIT, AS, and MoST. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation. We acknowledge the use of the following packages in this work: Astropy <cit.>, Bilby <cit.>, Pandas <cit.>, NumPy <cit.>, Seaborn <cit.>, CAMB <cit.>, SciPy <cit.>, Matplotlib <cit.>, Dynesty <cit.>, and emcee <cit.>. § DATA AVAILABILITY The corresponding author will provide the underlying data for this article upon request. § UNCERTAINTIES IN H_0 IN THE RECONSTRUCTION OF H_0 AND ℱ(Z) In Section <ref>, we extended the hierarchical Bayesian framework previously employed for measuring ℱ(z) with a fixed H_0 to a joint measurement scenario for both ℱ(z) and H_0. Within this framework, a flat prior was utilized, indicating a lack of prior knowledge about the parameters associated with the frictional term (ℱ(z)) and the cosmic expansion rate (H_0). For the parameters governing the frictional term (ℱ(z)) and the Hubble constant (H_0), we established prior ranges of 0.1 to 2.0 and 40 km/s/Mpc to 100 km/s/Mpc, respectively. In Figure <ref>, we showcase the joint posterior of ℱ(z) and H_0 for two distinct scenarios. The first scenario corresponds to CE&ET, considering all observed NSBH events at z=0.225. The second scenario pertains to LVK, also at z=0.225 for NSBH events. This visualization provides a comparative analysis, illustrating the potential advancements achievable with the future detectors CE&ET in contrast to the current capabilities of the LVK detectors. The joint posterior distribution offers valuable insights into the precision and capabilities of each detector in estimating the parameters ℱ(z) and H_0. In Figures <ref> and <ref>, we showed the joint posterior of ℱ(z) and H_0 for CE&ET and LVK at redshift 0.225 for a single event.
http://arxiv.org/abs/2312.16292v1
{ "authors": [ "Samsuzzaman Afroz", "Suvodip Mukherjee" ], "categories": [ "astro-ph.CO", "gr-qc" ], "primary_category": "astro-ph.CO", "published": "20231226190112", "title": "A Model-Independent Precision Test of General Relativity using Bright Standard Sirens from ongoing and upcoming detectors" }
[email protected] Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan Addressing high-dimensional partial differential equations (HDPDEs) to derive effective actions within the functional renormalization group is formidable, especially when considering various field configurations, including inhomogeneous states, even on lattices. We leverage a physics-informed neural network (PINN) as a state-of-the-art machine learning method for solving HDPDEs to overcome this challenge. In a 0-D O(N) model, we numerically demonstrate the construction of an effective action on an N-D configuration space, extending up to N=100. Our results underscore the effectiveness of the PINN approximation, even in scenarios lacking small parameters such as a small coupling. Physics-informed neural network for solving functional renormalization group on lattice Takeru Yokota January 14, 2024 ======================================================================================= The utilization of the functional renormalization group (FRG) <cit.> has gained widespread popularity as a non-perturbative theoretical tool across diverse fields, encompassing high-energy physics, condensed matter physics, and statistical physics (see Refs. <cit.> for comprehensive reviews). Central to the FRG is the use of functional differential equations (FDEs), such as the Wetterich equation <cit.>, which play a pivotal role in describing the flow within the renormalization group (RG). Notably, the self-determination of the effective action (encompassing all the correlation information of a system) through the Wetterich equation enhances the precision and comprehensiveness of the FRG formalism. Despite these advantages, the absence of universal, efficient, and accurate algorithms for solving FDEs hampers an easily accessible and accurate determination of various properties. The quest for such algorithms or useful approximation schemes remains an open problem. Power series expansions, such as the vertex expansion (the functional Taylor expansion) and the derivative expansion, are commonly employed in solving the FRG. In these approaches, the Wetterich equation transforms into an infinite hierarchy of differential equations for the expansion coefficients, which are subsequently truncated at a certain order to facilitate approximate solutions. However, the resulting effective action Γ[φ] is valid only for specific field configurations φ(x). For instance, the effective action derived from the vertex expansion is valid in the vicinity of the expansion point φ(x)≈φ_exp(x), while that obtained from the derivative expansion is applicable when φ(x)≈const. In these cases, prior knowledge of the field's ground state is a prerequisite for calculations, limiting the ability to capture complex structures such as inhomogeneous states. Moreover, enhancing the accuracy of results often entails the computationally demanding task of increasing the truncation order. The application of the FRG extends beyond continuum models and is commonly employed in lattice models as well <cit.>. On a finite lattice, the Wetterich equation becomes an (N_DOF+1)-dimensional partial differential equation (PDE) in the RG scale and the N_DOF field variables.
While a finite-dimensional PDE might appear more amenable to numerical analysis than an FDE, the computational complexity of calculations with a large N_DOF grows exponentially when a computational grid is assigned to each field component. Therefore, even in lattice models, approximations based on power series expansions remain commonly employed. This study aims to demonstrate that machine learning offers a novel framework for solving the FRG applied to lattice models with large N_DOF, as an alternative to power series expansions. Among the array of recently developed machine learning methods for handling high-dimensional PDEs <cit.>, we leverage the physics-informed neural network (PINN) <cit.>. PINN can be applied to various PDEs and involves optimizing a differentiable neural network (NN) to satisfy the PDE and boundary conditions, providing a solution over a domain of the input-variable space rather than at a single point. Owing to its grid-free nature, PINN is particularly advantageous for handling high-dimensional inputs, as demonstrated in recent applications to high-dimensional PDEs <cit.>, including 10^5-dimensional cases <cit.>. In such scenarios, the limitations imposed by N_DOF are naturally expected to relax, implying the possibility of simultaneously constructing effective actions for various field configurations, including inhomogeneous states. Moreover, the universal approximation theorem <cit.> suggests that NNs can serve as accurate approximations of the effective action. In the subsequent sections, we present a PINN-based method for solving the FRG applied to a lattice (PINN-LFRG). We outline a methodology for representing the effective action using a differentiable NN, which is trained to satisfy the Wetterich equation. Furthermore, we provide numerical demonstrations of the scalability and accuracy of this approach in the context of the zero-dimensional O(N) model. Here, PDEs with N_DOF+1=N+1≤ 101 dimensions are solved within a few hours. The effective action and self-energy are computed across a domain of the field space simultaneously, exhibiting superior or comparable accuracy when contrasted with results obtained through perturbative and large-N expansions, spanning various choices of coupling strength and N. Additionally, the O(N) symmetry of the effective action is successfully reproduced through training on the Wetterich equation. These findings underscore the feasibility of utilizing NNs to approximate the effective action even without a small parameter, such as a small coupling. We note that our purpose in solving FRG flow equations differs from that of a recent machine-learning-based FRG study <cit.>, which focuses on the dimensionality reduction of the four-point vertex function as given by the FRG. We focus on the FRG applied to bosons on a d-dimensional space-time lattice. The action is represented by S(φ), where φ={φ_n,α}_n,α is a real bosonic field. Here, the d-dimensional vector n indicates a lattice site and α is the internal degrees of freedom index. The total number of degrees of freedom for this system is given by N_DOF=VN_IDOF, where V denotes the lattice volume, and N_IDOF is the number of internal degrees of freedom. All quantities are expressed in lattice units. We adhere to the formalism outlined by Wetterich <cit.>, where the RG flow of the effective action Γ(φ) is governed by the Wetterich equation <cit.>: ∂_k Γ_k(φ) = 1/2 tr[ ∂_k R_k ( ∂^2 Γ_k(φ)/∂φ∂φ + R_k )^-1], which is an (N_DOF+1)-dimensional PDE.
The regulator R_k=R_k,n-n'^αα' is a specified function designed to suppress fluctuations with momenta smaller than the RG scale k. Additionally, we introduce the effective average action Γ_k(φ), which satisfies Γ_k_UV(φ)=S(φ)+const. for a large ultraviolet scale k_UV, and Γ_k_IR(φ)=Γ(φ) for a small infrared scale k_IR, under a suitable choice of R_k <cit.>. In principle, Eq. (<ref>) determines Γ(φ), encompassing all the thermodynamic properties and correlations. Typically, Taylor series expansions, including vertex and derivative expansions, are employed. These expansions yield an approximate calculation of Γ(φ) for a specific configuration of φ_n,α. However, there is currently no established method to accurately and efficiently obtain Γ(φ) over a broad domain of the φ_n,α space. Our goal is to propose a promising candidate for such a method. At first glance, calculations involving large N_DOF may appear computationally challenging, given that the complexity grows exponentially when a grid is associated with each component φ_n,α. However, our approach is rooted in the resilience of PINN to this issue, given its grid-free nature and applicability to high-dimensional PDEs <cit.>. In PINN, the solution is represented by a differentiable NN, eliminating the need for discretization in numerical differentiation. The NN is optimized to satisfy the PDE and the boundary conditions (BCs) using backpropagation. The optimization function may take the form L=L_PDE+wL_BC <cit.>, where L_PDE (L_BC) reaches its minimum if, and only if, the NN satisfies the PDE (BC) for any input, and w is a positive hyperparameter that adjusts the relative scale of the two terms. The presence of both terms L_PDE and L_BC can pose challenges; for example, tuning w may be required for efficient convergence of the optimization. However, in our case of an initial value problem, L_BC can be omitted with an appropriate choice of the ansatz for Γ_k(φ), similar to Ref. <cit.>. We make such an ansatz based on the decomposition: Γ_k(φ) = S(φ) + Γ_RG(l,φ), where l=ln(k_UV/k). Since the initial condition is Γ_k_UV(φ) = S(φ), the RG-induced part Γ_RG(l,φ) satisfies Γ_RG(0,φ)=0. We further decompose Γ_RG(l,φ) as Γ_RG(l,φ) = γ_free(l) + γ(l,φ). Here, γ_free(l) represents the constant term originating from the free quadratic term S_free(φ) of S(φ); in other words, it is the solution when Γ_k(φ) on the right-hand side of Eq. (<ref>) is substituted by S_free(φ). The remaining term γ(l,φ) constitutes the non-trivial interaction-induced part, corresponding to the shift in the free energy. By imposing γ(0,φ)=0, we replace γ(l,φ) with an NN. A conceivable choice is: γ(l,φ)≈γ(l,φ;θ)=NN_θ(l,φ)-NN_θ(0,φ), where NN_θ(l,φ) is a differentiable NN with parameters θ. A possible choice of L=L_PDE to train γ(l,φ;θ) is L_θ = 𝔼_φ∼𝒫_φ, l∼𝒫_l[ ( ∂_l Γ_k^θ(φ) - 1/2 tr ∂_l R_k( ∂^2 Γ_k^θ(φ)/∂φ∂φ + R_k )^-1)^2 ], where Γ_k^θ(φ)=S(φ)+γ_free(l)+γ(l,φ;θ). Here, we introduce probability distributions 𝒫_φ and 𝒫_l, defined on the φ-space and on l∈ [0,l_end] with l_end=ln(k_UV/k_IR), respectively. In practice, the expectation value is approximately evaluated using a finite number of collocation points { (l^(i),φ^(i)) }_i=1^N_col sampled according to 𝒫_φ and 𝒫_l.
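As a concrete (but non-authoritative) illustration, a PyTorch sketch of the ansatz γ(l,φ;θ)=NN_θ(l,φ)-NN_θ(0,φ) could look as follows; the class and argument names are ours, and the width, depth, and activation follow the choices stated later in the text (3 hidden layers, 256 units, softplus):

```python
import torch
import torch.nn as nn

class GammaNet(nn.Module):
    """Interaction-induced part gamma(l, phi; theta) = NN(l, phi) - NN(0, phi),
    which enforces the initial condition gamma(0, phi) = 0 by construction."""

    def __init__(self, n_fields, width=256, depth=3):
        super().__init__()
        layers, dim = [], 1 + n_fields  # input: (l, phi_1, ..., phi_N)
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Softplus()]
            dim = width
        layers += [nn.Linear(dim, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, l, phi):
        # l: (batch, 1), phi: (batch, N)
        x = torch.cat([l, phi], dim=1)
        x0 = torch.cat([torch.zeros_like(l), phi], dim=1)
        return self.net(x) - self.net(x0)
```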
Naively, if one is interested in a specific configuration φ=φ_target, then 𝒫_φ should be chosen so as to sample the neighborhood of φ_target at high rates. A caveat is that, even in such a case, 𝒫_φ should be sufficiently broad for learning the φ derivatives, i.e., the φ-dependence of γ(l,φ;θ). We surmise that the breadth should have the scale of the fluctuation √(⟨(φ_n,α-φ_target,n,α)^2⟩) in each direction φ_n,α in order to describe correlations. To illustrate how PINN-LFRG works, we apply it to the zero-dimensional O(N) model, which possesses an exact solution. The action is given by: S(φ)=1/2m^2φ^2+g/4!(φ^2)^2, where φ=(φ_1,…,φ_N) represents an N-component scalar field, and m and g are the mass and coupling, respectively. For this model, the total number of degrees of freedom is N_DOF=N_IDOF=N, since V=1. We investigate the scalability with respect to N_DOF by increasing N [The Wetterich equation for this model can be reduced to a two-dimensional PDE with variables l and ρ=φ^2/2, but we do not use this reduction, in order to investigate scalability]. We also assess accuracy by comparing the results for the interaction-induced effective action γ(l,φ) and the RG-induced self-energy σ_α(l,φ)=∂^2γ(l,φ)/∂φ_α^2 to those from the exact calculation, the perturbative expansion up to the leading order, and the large-N expansion up to O(1) <cit.>. Note that g̃=Ng/m^4 is the dimensionless control parameter determining the perturbative region as g̃≪ 1, due to ⟨φ^2⟩∼ N/m^2 <cit.>. The regulator is set to R_k^αα'=k_UV^2e^-2lδ_αα'. The stationary point approximation needs to hold at l=0 <cit.> to realize Γ_k_UV(φ)≈ S(φ). This condition is given by ⟨ S(φ) ⟩≪⟨ k_UV^2φ^2/2 ⟩, which leads to m^2/k_UV^2 ≪ 1 and g̃≪ k_UV^2/m^2. Hereafter, our parameters satisfy m^2/k_UV^2=0.01 and g̃≪ 100. We set l_end=5. Our ansatz for γ(l,φ) is based on Eq. (<ref>). Our NN_θ(l,φ) is a fully connected NN composed of 3 hidden layers with 256 units per layer and the differentiable softplus activation function. We find that this choice of NN shows successful convergence in the pretraining described below. Figure <ref> depicts a schematic of our proposed NN architecture for γ(l,φ). The matrix ∂_φ^2 Γ_k^θ(φ)+R_k must be regular during training, since a matrix inverse appears in Eq. (<ref>). In our experience, this condition is frequently violated unless the initial values of θ are chosen carefully. We find that pretraining with an approximate analytical result remedies this problem. Specifically, we use the result of first-order perturbation theory, γ^1pt(l,φ), employing the following optimization function: L_θ^pre = 𝔼_φ∼𝒫_φ, l∼𝒫_l[(γ(l,φ;θ)-γ^1pt(l,φ))^2 ]. The Xavier initialization <cit.> is used for this pretraining. For 𝒫_l, we adopt a uniform distribution on the interval [0, l_end]. To sample the neighborhood of φ=0, which represents the vacuum expectation value, we define 𝒫_φ such that the direction n̂=φ/φ is uniformly sampled. The norm φ is sampled as the absolute value of a draw from the normal distribution 𝒩(0, N/m^2), whose variance N/m^2 corresponds to the order of ⟨φ^2⟩. It is noteworthy that efficiently sampling the neighborhood of φ=0 for large N is challenging if 𝒫_φ is set to an N-dimensional normal distribution 𝒩(0, m^-21) or a uniform distribution in an N-dimensional box, due to the curse of dimensionality. The Adam optimizer <cit.> is utilized to train the NN with Eqs. (<ref>) and (<ref>). All computations are executed on an NVIDIA A100 GPU with 40 GB of memory. Additional details about our training procedures are given in the Supplemental Material <cit.>.
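A minimal NumPy sketch of the collocation sampler described above (uniform l on [0, l_end]; a uniformly random direction for φ with its norm drawn as the absolute value of a 𝒩(0, N/m²) variable); the function name and defaults are ours, with m²=0.01 in units where k_UV=1:

```python
import numpy as np

def sample_collocation(n_col, n_fields, m2=0.01, l_end=5.0, rng=None):
    """Draw (l, phi) collocation points: l uniform on [0, l_end]; phi has a
    uniformly random direction and |phi| ~ |Normal(0, N/m^2)| (half-normal)."""
    rng = rng or np.random.default_rng()
    l = rng.uniform(0.0, l_end, size=(n_col, 1))
    # A normalized isotropic Gaussian vector gives a uniformly random direction.
    direction = rng.normal(size=(n_col, n_fields))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    radius = np.abs(rng.normal(0.0, np.sqrt(n_fields / m2), size=(n_col, 1)))
    return l, radius * direction
```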
We conducted computations for all the combinations of N=1,10,100 and g̃=0.1,1,10. In each case, the computational time for training is kept within 11 hours, ensuring the convergence of L_θ and physical quantities <cit.>. Figure <ref> illustrates the results of γ(l,φ) and σ(l,φ) for N=1 and g̃=1. Specifically, the plot depicts the l-dependence at the vacuum expectation value φ=0 and the φ-dependence at l=l_end. The results from exact calculations, perturbative and large-N expansions, and the model after the pretraining are also presented. The perturbative and large-N expansion results show considerable deviations from the exact ones, since both g̃=1 and 1/N=1 are not small. Notably, the training of the Wetterich equation successfully shifts values from those obtained by the perturbative approach toward the exact results. In all instances, our PINN-LFRG approach exhibits higher accuracy than the perturbative and large-N expansions. It is crucial to highlight that our approach provides solutions over a broad domain of φ, in contrast to the limitations of the vertex expansion method. Some deviations between the results of the pretrained model and the perturbation can be observed in σ(l,φ), since the pretraining was halted before full convergence, which is sufficient for stabilizing the training of the Wetterich equation. Figure <ref> illustrates the result for N=100 and g̃=1. With the exception of γ(l,0), our results are presented as the N=100 lines corresponding to the N directions in the φ-space. This includes all the α=1,…,100 cases of γ(l,φe_α) and σ_α(l,φe_α), where e_α denotes the unit vector in the φ_α direction. The PINN-LFRG results for different α closely match, with differences being imperceptible in γ(l_end,φe_α) and σ_α(l,0)/m^2. Even in σ_α(l_end,φe_α), all the results from our approach are as close to the exact result as those of the large-N expansion, which is expected to be accurate for N=100. These findings indicate that the NN automatically captures the O(N) symmetry, enabling a simultaneously accurate solution over a domain of the high-dimensional configuration space. Table <ref> summarizes the relative errors of γ(l_end,0) and σ(l_end,0) compared to the exact values for all values of N and g̃. In the case of PINN-LFRG for N>1, we determine σ(l_end,0) by averaging σ_α(l_end,0) with respect to α=1,…,N, and we derive the standard deviation. To show the tendency, we plot the absolute values of the relative errors of γ(l_end,0) compared to the exact values as a function of N in Fig. <ref>; almost the same tendency is seen for σ(l_end,0). For all N and g̃ values, the errors of PINN-LFRG are within 3% for γ(l_end,0) and 1% for σ(l_end,0), even when the standard deviations are taken into account. In particular, PINN-LFRG is accurate even in the non-perturbative and small-N regions, where both the perturbative and large-N expansions break down. This suggests that the NN is a promising tool for providing accurate approximations independently of the existence of a small parameter. This study introduces PINN-LFRG as a novel framework for solving the Wetterich equation on a finite lattice. The approach demonstrates the ability to simultaneously derive an effective action for various field configurations. The proposed procedure involves representing the effective action through an NN and optimizing it. The method is applied to the zero-dimensional O(N) model, yielding effective action and self-energy results for different choices of N up to 100 and various coupling strengths. Across all investigated cases, PINN-LFRG shows superior or comparable accuracy compared to the perturbative and large-N expansions, successfully reproducing the O(N) symmetry.
These results indicate the feasibility of calculations involving a substantial number of degrees of freedom, around 10^2 or more, with NNs effectively approximating the effective action without reliance on a small parameter, such as a small coupling. Our analysis can be readily extended to models incorporating temporal and spatial degrees of freedom. An intriguing avenue for further exploration is the investigation of inhomogeneous states in scalar models, such as solitons, within our framework, building upon existing work on this topic <cit.>. Extending the approach to fermionic systems poses a substantial challenge, since there is currently no efficient method for constructing NNs for Grassmann variables. However, one could apply our approach to fermionic systems by introducing bosonic auxiliary fields, for example. An exciting application in this direction is the adaptation of our method to density functional theory <cit.>, a standard tool for analyzing many-body systems. This has been extended to apply to lattice models, such as the Hubbard model <cit.>. We anticipate that our approach holds promise for the FRG-based formalism of density functional theory, a framework that has seen recent developments <cit.>. The author thanks T. Miyagawa for carefully reading the manuscript and making valuable comments. The author also thanks G. Fejős, T. Hatsuda, T. Naito, O. Sugino, and J. Yamamoto for valuable discussions. The RIKEN Special Postdoctoral Researchers Program supported the author. Supplemental Materials: Physics-informed neural network for solving functional renormalization group on lattice § FORMULATION OF FUNCTIONAL RENORMALIZATION GROUP Here, we provide a summary of the lattice formulation of the FRG for bosonic systems. Consistent with the main text, we consider a d-dimensional space-time lattice and represent the action as S(φ), where φ={φ_n,α}_n,α denotes a real bosonic field, with the d-dimensional vector n indicating a lattice site and α representing the internal degrees of freedom. The imaginary-time formalism is employed, and all quantities are expressed in lattice units. Following Wetterich's formulation <cit.>, a regulator term is introduced into the action to induce the RG flow: S_k(φ) = S(φ) + 1/2∑_n,α,n',α'φ_n,α R_k,n-n'^αα'φ_n',α'. The regulator R_k,n-n'^αα' is a predefined function acting as an artificial mass, designed to dampen fluctuations with momenta smaller than the RG scale k. In momentum space, the regulator must adhere to the following conditions: lim_p^2/k^2→ 0R̃_k(p)>0, R̃_k_IR→ 0(p)=0, R̃_k_UV→∞(p)=∞. For simplicity, we have omitted the indices for the internal degrees of freedom. The first condition signifies the suppression of infrared fluctuations, while the second condition ensures that all fluctuations are included at a small infrared scale k_IR. The final condition is crucial for determining the initial condition of the RG flow. It ensures that the system becomes classical, described by S_k_UV(φ), at a large ultraviolet scale k_UV. In terms of the path integral introduced below, this condition validates the saddle-point approximation at k=k_UV. Compared to the continuum case, the distinctions in the lattice setup lie in the dispersion relation and the restriction of the momentum to the Brillouin zone. An appropriate regulator choice that accommodates these differences is discussed in Ref. <cit.>. With this regulator, one can define the effective average action Γ_k(φ), which interpolates between the bare action S(φ) and the effective action Γ(φ).
The definition is: Γ_k(φ) = sup_J( ∑_n,αJ_n,αφ_n,α - ln Z_k(J) ) - 1/2∑_n,α,n',α'φ_n,α R_k,n-n'^αα'φ_n',α', with the path-integral form of the partition function Z_k(J) = ∫ dφ e^-S_k(φ) + a^d∑_n,αJ_n,αφ_n,α. The condition lim_k→0Γ_k(φ)=Γ(φ) immediately follows from Eq. (<ref>). From the saddle-point approximation validated by Eq. (<ref>), we have Γ_k_UV(φ)=S(φ)+const. The RG flow equation is derived by taking the derivative of Eq. (<ref>) with respect to k: ∂_k Γ_k(φ) = 1/2∑_n,α∑_n',α'[ ∂_k R_k,n-n'^αα'( ∂^2 Γ_k(φ)/∂φ∂φ + R_k )^-1_n'α',nα], which is known as the Wetterich equation. Here, the inverse is defined by ∑_n',α'( ∂^2 Γ_k(φ)/∂φ∂φ + R_k )^-1_nα,n'α'( ∂^2 Γ_k[φ]/∂φ_n',α'∂φ_n”,α” + R_k,n'-n”^α'α”) = δ_n,n”δ_αα”. The Wetterich equation is written in a short-hand notation in the main text. § EXACT CALCULATION, PERTURBATIVE EXPANSION, AND LARGE-N EXPANSION IN THE ZERO-DIMENSIONAL O(N) MODEL We summarize the numerical procedure for the exact calculation and the results of the perturbative and large-N expansions for the interaction-induced effective action γ(l,φ), which depends on φ only through the norm φ=|φ|, and the RG-induced self-energy σ(l,φ)=∂_φ^2 γ(l,φ) in the zero-dimensional O(N) model. We use the form of the regulator R_k^αα'=r_kδ_αα' as in the main text. The exact results are obtained by directly evaluating the path integral of the partition function: Z_l(J) = ∫ dφ e^-1/2m_l^2φ^2-g/4!(φ^2)^2+J·φ, where m_l^2=m^2+r_k represents the regulated mass squared. Due to the presence of an O(N-1) symmetry in the φ-space perpendicular to J, the integral can be simplified as follows: Z_l(J) = Ω_N-1∫_-∞^∞ dφ e^-1/2m_l^2φ^2-g/4!φ^4+Jφ Q_N-2,l(φ^2), Q_N-2,l(φ^2) = ∫_0^∞dx x^N-2 e^-1/2(m_l^2+g/6φ^2)x^2-g/4!x^4 = 1/2(6/g)^(N-1)/4Γ((N-1)/2) U((N-1)/4, 1/2, (3/(2g))(m_l^2+(g/6)φ^2)^2), where we have introduced J=|J|, the surface area of the unit (N-1)-sphere Ω_N=2π^N/2/Γ(N/2), the gamma function Γ(x), and Tricomi's confluent hypergeometric function U(a,b,z). Let J=J_sup,l(φ) be an external field realizing ⟨φ⟩=φ, i.e., the solution of φ = ∂ln Z_l/∂ J(J_sup,l(φ)) = ∫_-∞^∞ dx x e^-1/2m_l^2x^2-g/4!x^4+J_sup,l(φ)x Q_N-2,l(x^2) / ∫_-∞^∞ dx e^-1/2m_l^2x^2-g/4!x^4+J_sup,l(φ)x Q_N-2,l(x^2). With this external field, the effective action and the self-energy are given by Γ(l,φ) = J_sup,l(φ)φ - ln Z_l(J_sup,l(φ)) - 1/2r_kφ^2, Σ(l,φ) = ∂_φ^2 Γ(l,φ)-m^2 = (⟨φ^2⟩-φ^2)^-1-m_l^2, where the correlation function ⟨φ^2⟩-φ^2 is evaluated by ⟨φ^2⟩-φ^2 = ∂^2 ln Z_l/∂ J^2 (J_sup,l(φ)) = ∫_-∞^∞ dx e^-1/2m_l^2x^2-g/4!x^4+J_sup,lx (x-φ)^2 Q_N-2,l(x^2) / ∫_-∞^∞ dx e^-1/2m_l^2x^2-g/4!x^4+J_sup,lx Q_N-2,l(x^2). With these Γ(l,φ) and Σ(l,φ), we obtain γ(l,φ) = Γ(l,φ)-Γ(0,φ)-γ_free(l), σ(l,φ) = Σ(l,φ)-Σ(0,φ), where γ_free(l)=(N/2)ln(m_l^2/m_0^2) is the solution of ∂_l γ_free(l) = 1/2∑_α=1^N∂_l r_k ( ∂^2 S_free(φ)/∂φ∂φ + r_k )^-1_αα, γ_free(0)=0, with S_free(φ)=m^2φ^2/2. We numerically solve Eq. (<ref>) for J_sup,l(φ) by means of a root-finding routine in SciPy. With this J_sup,l(φ), we numerically evaluate Eqs. (<ref>) and (<ref>) to obtain γ(l,φ) and σ(l,φ). The integrals in Eqs. (<ref>), (<ref>), and (<ref>) are evaluated using the Gauss quadrature method implemented in SciPy.
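As a concrete benchmark, for N=1 the partition function reduces to a single one-dimensional integral, and at J=0 (where ⟨φ⟩=0 by symmetry) the self-energy becomes Σ(l,0)=⟨φ²⟩^{-1}-m_l². A minimal SciPy sketch of this evaluation, with our own function name and parameter choices, is:

```python
import numpy as np
from scipy.integrate import quad

def exact_self_energy_N1(l, gtilde=1.0, m2=0.01, k_uv2=1.0):
    """Exact Sigma(l, phi=0) for the N = 1 zero-dimensional model at J = 0."""
    g = gtilde * m2**2                           # gtilde = g/m^4 for N = 1
    ml2 = m2 + k_uv2 * np.exp(-2.0 * l)          # regulated mass squared m_l^2
    weight = lambda x: np.exp(-0.5 * ml2 * x**2 - g / 24.0 * x**4)
    z = quad(weight, -np.inf, np.inf)[0]
    x2 = quad(lambda x: x**2 * weight(x), -np.inf, np.inf)[0] / z   # <phi^2>
    return 1.0 / x2 - ml2

print(exact_self_energy_N1(l=5.0))
```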
The perturbative and large-N expansion results are obtained from Ref. <cit.>. By substituting the regulated mass squared m_l^2 into these expressions, the results at the scale l for the effective action and self-energy up to the leading order are given by: Γ(l,φ) = S(φ) + N(1+2N^-1)/24 g̃_l + (1+2N^-1)/12 g̃_l m_l^2φ^2 + γ_free(l) + O(g̃_l^2), Σ(l,φ) = ∂_φ^2 S(φ) - m^2 + (1+2N^-1)/6 g̃_l m_l^2 + O(g̃_l^2). Here, the dimensionless quantity g̃_l=Ng/m_l^4 is employed as the expansion parameter instead of g. The result of the large-N expansion up to O(1) is expressed as follows: Γ(l,φ) = S(φ) + N((z_l-1)/4-(1/2)ln z_l) + (1/2)ln(2-z_l) + (1/2)(z_l^-1-1)m_l^2φ^2 + γ_free(l) + O(1/N), Σ(l,φ) = ∂_φ^2 S(φ) - m^2 + (z_l^-1-1)m_l^2 + O(1/N), with z_l = 2/(1+√(1+2g̃_l/3)). With these Γ(l,φ) and Σ(l,φ), we obtain γ(l,φ) and σ(l,φ) from Eqs. (<ref>) and (<ref>). § DETAILS ABOUT TRAINING We provide some details about the training and information about the convergence of our results. We optimize our NN to minimize L_θ (L_θ^pre) in the main text for the training of the Wetterich equation (the pretraining). The expectations in these equations are approximately evaluated on a finite number of collocation points: L_θ ≈ 1/N_col∑_n=1^N_col[ ( ∂_l γ(l^(n),φ^(n);θ) + ∂_l γ_free(l^(n)) - 1/2∑_α,α'∂_l R_k^(n)^αα'( ∂^2 S(φ^(n))/∂φ∂φ + ∂^2 γ(l^(n),φ^(n);θ)/∂φ∂φ + R_k^(n))^-1_α',α)^2 ], L_θ^pre ≈ 1/N_col∑_n=1^N_col[(γ(l^(n),φ^(n);θ)-γ^1pt(l^(n),φ^(n)))^2 ], where l^(n) (=ln(k_UV/k^(n))) and φ^(n) are randomly sampled following the probability distributions 𝒫_l and 𝒫_φ outlined in the main text. Specifically, we choose N_col=500 collocation points, which are refreshed each time the optimization functions are evaluated. For the numerical implementation, we employ PyTorch. The learning rate for the Adam optimizer is initially set to 10^-4 and decays exponentially with a factor of 0.99999. The learning rate is fixed at 10^-3 in the pretraining phase. It is worth noting that the computational cost of evaluating the matrix inverse in L_θ is substantial. To facilitate implementation, we compute the inverse directly with a dense matrix-inversion routine in PyTorch. Improving the efficiency, potentially utilizing alternative algorithms such as the Hutchinson trace estimator <cit.>, is reserved for future study. The training process involves 10^6 iterations for the Wetterich equation and 10^5 for the pretraining. Table <ref> provides an overview of the computational time required. With this iteration count, we observe the convergence of L_θ and physical quantities. As illustrated in Fig. <ref>, we present a learning curve along with the histories of γ=γ(l_end,0) and the average σ/m^2, as well as the standard deviation Δσ/m^2 of σ_α(l,0)/m^2 with respect to α, for the case of N=100 and g̃=1. In the initial iterations, L_θ rapidly decreases, and the physical quantities quickly approach convergence. Subsequently, as the learning rate decays, the physical quantities converge gradually. The diminishing Δσ/m^2 over the iterations indicates that the O(N) symmetry is successfully reproduced during training.
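To make the optimization concrete, a schematic PyTorch implementation of the Wetterich residual entering L_θ for the N=1 case (where the trace and matrix inverse reduce to a scalar reciprocal) is sketched below, compatible with a model such as the GammaNet sketch in the main text; this is our own minimal reconstruction under the stated parameter choices, not the authors' code:

```python
import torch

def wetterich_residual(model, l, phi, m2=0.01, g=1e-4, k_uv2=1.0):
    """Residual of the N = 1 Wetterich equation for Gamma = S + gamma_free + gamma.
    model(l, phi) returns gamma(l, phi; theta); derivatives come from autograd."""
    l = l.requires_grad_(True)
    phi = phi.requires_grad_(True)
    r = k_uv2 * torch.exp(-2.0 * l)              # regulator r_k = k_UV^2 e^{-2l}
    dr_dl = -2.0 * r
    gamma = model(l, phi)
    dgamma_dl = torch.autograd.grad(gamma.sum(), l, create_graph=True)[0]
    dgamma_dphi = torch.autograd.grad(gamma.sum(), phi, create_graph=True)[0]
    d2gamma = torch.autograd.grad(dgamma_dphi.sum(), phi, create_graph=True)[0]
    dgfree_dl = 0.5 * dr_dl / (m2 + r)           # free part: gamma_free' = r'/(2 m_l^2)
    s2 = m2 + 0.5 * g * phi**2                   # S''(phi) for N = 1
    rhs = 0.5 * dr_dl / (s2 + d2gamma + r)
    return dgamma_dl + dgfree_dl - rhs

# One Adam step with the settings quoted above (lr = 1e-4, decay factor 0.99999):
# loss = wetterich_residual(model, l, phi).pow(2).mean(); loss.backward(); opt.step()
```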
http://arxiv.org/abs/2312.16038v1
{ "authors": [ "Takeru Yokota" ], "categories": [ "cond-mat.dis-nn", "cond-mat.stat-mech", "cond-mat.str-el", "hep-lat", "hep-th" ], "primary_category": "cond-mat.dis-nn", "published": "20231226125536", "title": "Physics-informed neural network for solving functional renormalization group on lattice" }
[email protected] Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan [email protected] Graduate School of Science, Hokkaido University, Sapporo 060-0810, Japan Noncoplanar magnetic states with a scalar spin chirality have been intensively studied in condensed matter physics, since they exhibit fascinating physical phenomena. We theoretically propose the generation of such noncoplanar magnetic states by using a circularly polarized electric field. By performing micromagnetic simulations, we investigate the time evolution of a classical kagome magnet irradiated by the circularly polarized electric field. As a result, we find that noncoplanar magnetic states are induced as a nonequilibrium steady state irrespective of the ground-state spin configuration. We show that the induced scalar spin chirality is controlled by the amplitude, frequency, and polarization of the electric field. In addition, by adopting the Floquet formalism in the high-frequency regime, we clarify that the mechanism behind the noncoplanar magnetic states is accounted for by effective field-induced three-spin interactions. We also show a condition to enhance the scalar spin chirality. Our results present a new reference for controlling noncoplanar magnetic states and related phenomena by a circularly polarized electric field. Scalar spin chirality induced by a circularly polarized electric field in a classical kagome magnet Satoru Hayami January 14, 2024 ==================================================================================================== § INTRODUCTION Engineering of physical properties by a time-periodic field has attracted much attention in various fields of condensed matter physics. The time evolution under a time-periodic field has been studied with the Floquet formalism, which provides a framework for understanding the time-periodic evolution by using a time-independent Floquet Hamiltonian <cit.>. Since the Floquet Hamiltonian exhibits a variety of effective terms depending on the amplitude, frequency, and polarization of the time-periodic field, its radiation becomes a source of rich physical properties compared to the radiation of a static field. In magnetic systems, the effect of the time-periodic field radiation appears as field-induced magnetic interactions in the Floquet Hamiltonian. For example, the radiation of a time-periodic electric field on Mott insulators induces and modifies exchange interactions <cit.> and multiple-spin interactions <cit.>. Such changes in the interactions also occur in spin systems <cit.> and other itinerant electron systems <cit.> as a result of the time-periodic field radiation. It was theoretically shown that physical properties such as the magnetization <cit.> and states of matter such as a quantum spin liquid <cit.>, a helical state <cit.>, and a skyrmion <cit.> can be controlled by the field-induced magnetic interactions. This indicates that time-periodic field radiation is a powerful tool to control magnetic states and related electromagnetic responses. In this paper, we propose a further intriguing possibility of controlling magnetic properties by a time-periodic field. We especially focus on a field-induced scalar spin chirality under noncoplanar magnetic states, which becomes the origin of the topological Hall/Nernst effect <cit.>.
For that purpose, we radiate a circularly polarized electric field on a collinear or coplanar kagome magnet without the scalar spin chirality, where we suppose that the electric field is coupled to the spins via a spin-dependent electric polarization mechanism. By calculating the time evolution based on the micromagnetic simulation, we show that the circularly polarized electric field induces noncoplanar magnetic states and that their scalar spin chirality is controlled by the amplitude, frequency, and polarization of the electric field. In addition, we derive a time-independent Floquet Hamiltonian based on the Floquet formalism in order to investigate the origin of the field-induced scalar spin chirality. As a result, we clarify that the scalar spin chirality originates from effective field-induced three-spin interactions under the circularly polarized electric field. We also discuss a way of enhancing the scalar spin chirality via the time-periodic electric field. The paper is organized as follows. In Sec. <ref>, we introduce a static model for a classical kagome magnet and discuss the ground states in the absence of external fields. Then, we introduce a dynamical Hamiltonian by taking into account the effect of a circularly polarized electric field in Sec. <ref>. We also outline methods to analyze the dynamical model based on the micromagnetic simulation and the Floquet formalism in Secs. <ref> and <ref>, respectively. In Sec. <ref>, we show the origin of the field-induced scalar spin chirality and its dependence on the model parameters by performing micromagnetic simulations. Section <ref> summarizes the paper and discusses a possible experimental situation. § STATIC MODEL In Sec. <ref>, we introduce a static Hamiltonian for a classical kagome magnet. The ground states of the static model are shown in Sec. <ref>. §.§ Hamiltonian We consider a static model for a classical kagome magnet with the point group 6̅m2 by implicitly supposing different sizes for the upward and downward triangles, i.e., the so-called breathing kagome magnet. The Hamiltonian is given by ℋ_0 =∑_△∑_α,βJ^αβ_△(m^α_1m^β_2 +m^α'_2m^β'_3 +m^α”_3m^β”_1 ) +∑_▽∑_α,βJ^αβ_▽(m^α_1m^β_4 +m^α'_4m^β'_5 +m^α”_5m^β”_1 ), where m_j is the local magnetic moment at site j with |m_j|=1, α,β=x,y,z, the summation ∑_△ (▽) is taken over all the upward (downward) triangles on the kagome network, and sites 1, 2, and 3 (1, 4, and 5) on each upward (downward) triangle are labeled in the counterclockwise order [see Fig. <ref>(a)]. We use local Cartesian spin coordinates (x,y,z) for the ⟨ 1,2 ⟩ and ⟨ 1,4 ⟩ bonds, (x',y',z') for the ⟨ 2,3 ⟩ and ⟨ 4,5 ⟩ bonds, and (x”,y”,z”) for the ⟨ 3,1 ⟩ and ⟨ 5,1 ⟩ bonds, as shown in Fig. <ref>(b). The nearest-neighbor interaction matrices for the upward and downward triangles are generally given by J_△ = [ F^x_△  D^z_△  0; -D^z_△  F^y_△  0; 0  0  F^z_△ ], J_▽ = [ F^x_▽  D^z_▽  0; -D^z_▽  F^y_▽  0; 0  0  F^z_▽ ], respectively. Here, F^x_△(▽), F^y_△(▽), and F^z_△(▽) are symmetric anisotropic exchange interactions, and D^z_△(▽) is the Dzyaloshinskii-Moriya (DM) interaction <cit.> for the upward (downward) triangles. The components in Eqs. (<ref>) and (<ref>) satisfy the point group symmetry 6̅m2 of the lattice. §.§ Ground states We discuss the ground states of the static model in Eq. (<ref>).
We simplify the static model by setting F^z_△=F^z_▽≡ F^z, F^x_△=F^x_▽=F^y_△=F^y_▽≡ F^⊥, and D^z_△=D^z_▽≡ D^z; this model corresponds to the conventional kagome model with the point group 6/mmm, and the effect of the difference between the upward and downward triangles will be considered in the dynamical model in Sec. <ref>. In addition, it is noted that the model has an accidental global U(1) symmetry around the z axis in spin space. Then, the ground states are obtained by an analytical calculation <cit.>. We show a ground-state phase diagram in the D^z-F^⊥ plane for the static model with F^z=-1 in Fig. <ref>(a). There are four long-range ordered phases with an ordering wave vector of q=0, whose three-sublattice magnetic configurations are described by using the magnetic configurations on an upward triangle, m=(m_1,m_2,m_3), as follows: (i) Out-of-plane ferromagnetic (z-FM) phase. The magnetic configuration is given by m^z-FM =±(0,0,1,0,0,1,0,0,1), whose ground-state energy per unit cell is given by E^z-FM =6F^z. There is a degeneracy between the states with positive and negative z components. The z-FM configuration with positive z components is shown in Fig. <ref>(b). (ii) In-plane ferromagnetic (xy-FM) phase. The magnetic configuration is given by m^xy-FM = (cosθ,sinθ,0,cosθ,sinθ,0,cosθ,sinθ,0), with an angle θ (0<θ≤2π). The ground-state energy per unit cell is given by E^xy-FM =6F^⊥, which is independent of θ due to the U(1) symmetry of the model. The xy-FM configuration for θ=0 is shown in Fig. <ref>(c). (iii) Vortex (Antivortex) phase. The magnetic configuration for the vortex (antivortex) state is given by m^v(av) = {sin(θ+σ2π/3),cos(θ+σ2π/3),0,sin(θ-σ2π/3),cos(θ-σ2π/3),0, sinθ,cosθ,0}, with σ=+1 (-1) and θ (0<θ≤2π). The ground-state energy per unit cell is given by E^v =3(-F^⊥+σ√(3)D^z). The negative (positive) D^z favors the vortex (antivortex) state. Similar to the xy-FM state, the ground-state energy is independent of θ due to the U(1) symmetry of the model. The vortex and antivortex configurations for θ=0 are shown in Figs. <ref>(d) and <ref>(e), respectively. Among the four states, the z-FM and xy-FM states are categorized as collinear states, while the vortex and antivortex states are categorized as coplanar states. The ground states at the phase boundaries are given by superposing the z-FM, xy-FM, vortex, and antivortex states. At the phase boundary L1, since the energies of the z-FM and vortex states are degenerate, the ground state is given by superposing them as m^L1 =c_zm^z-FM+c_vm^v, with c_z^2+c_v^2=1. The magnetic configurations for c_z≠0 and c_v≠0 are categorized as noncoplanar states with a nonzero scalar spin chirality, i.e., m_1 · (m_2 ×m_3) ≠ 0 and m_1 · (m_4 ×m_5) ≠ 0, which we call a scalar chiral state. Similar to the phase boundary L1, the ground state at the phase boundary L2 becomes a noncoplanar magnetic state by superposing the z-FM and antivortex states. At the phase boundary L3, a superposition of the z-FM and xy-FM states becomes the ground state, which is categorized as a collinear state. At the phase boundaries L4, L5, and L6, magnetic states with ordering wave vectors q≠0 become the ground states <cit.>.
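The q=0 part of the phase diagram follows from comparing the ground-state energies per unit cell listed above; a small Python sketch (the function name is ours) performs this comparison, here for parameter values used in the simulations below:

```python
import numpy as np

def ground_state(Fz, Fperp, Dz):
    """Pick the q = 0 ground state from the energies per unit cell:
    z-FM: 6 F^z, xy-FM: 6 F^perp, (anti)vortex: 3(-F^perp + sigma*sqrt(3) D^z)."""
    energies = {
        "z-FM": 6.0 * Fz,
        "xy-FM": 6.0 * Fperp,
        "vortex": 3.0 * (-Fperp + np.sqrt(3.0) * Dz),      # sigma = +1
        "antivortex": 3.0 * (-Fperp - np.sqrt(3.0) * Dz),  # sigma = -1
    }
    return min(energies, key=energies.get), energies

print(ground_state(Fz=-1.0, Fperp=-0.5, Dz=-2.0))  # vortex ground state
```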
§ DYNAMICAL MODEL We introduce a dynamical Hamiltonian for the classical kagome magnet irradiated by a circularly polarized electric field in Sec. <ref>. Then, we outline two methods to analyze the dynamical model. One is the analysis based on the Landau-Lifshitz-Gilbert (LLG) equation in Sec. <ref>, and the other is the analysis based on the Floquet formalism in Sec. <ref>. §.§ Hamiltonian We consider the effect of the circularly polarized electric field on the classical kagome magnet. The Hamiltonian under the circularly polarized electric field is given by ℋ(t) = ℋ_0 -E(t)·P, where ℋ_0 is given in Eq. (<ref>) and P represents an electric polarization coupled to the circularly polarized electric field E(t) = E_0(δcosΩ t,-sinΩ t ,0); E_0 is the amplitude of the field, Ω is the frequency, and δ=+1 (-1) represents right circular polarization (RCP) [left circular polarization (LCP)]. We suppose that the electric polarization originates from spin-dependent electric dipoles on the nearest-neighbor bonds as P = ∑_△(p_12+p_23+p_31)+∑_▽(p_14+p_45+p_51), with p_12 = -λ_△e_12× (m_1×m_2), p_23 = -λ_△e_23× (m_2×m_3), p_31 = -λ_△e_31× (m_3×m_1), p_14 = -λ_▽e_14× (m_1×m_4), p_45 = -λ_▽e_45× (m_4×m_5), p_51 = -λ_▽e_51× (m_5×m_1). Here, the electric dipole p_jk for the nearest-neighbor bond ⟨ j,k ⟩ is induced by the spin-current mechanism <cit.>; λ_△ (▽) is the magnetoelectric coupling constant and e_jk is the unit vector from site j to site k. It is noted that the form of p_12 (p_14) satisfies the point group symmetry mm2 of the bond ⟨ 1,2 ⟩ (⟨ 1,4 ⟩) <cit.>, and the forms of p_23 and p_31 (p_45 and p_51) are determined so as to satisfy the threefold rotational symmetry of the upward (downward) triangle under the 6̅m2 symmetry. Note also that there is no symmetry constraint between λ_△ and λ_▽ under the point group 6̅m2 (breathing kagome structure), while λ_△ is equivalent to λ_▽ under the point group 6/mmm (conventional kagome structure). §.§ Landau-Lifshitz-Gilbert equation We calculate the time evolution of the dynamical Hamiltonian ℋ(t) in Eq. (<ref>) by numerically solving the LLG equation. The LLG equation is given by dm_j/dt= -γ/(1+α_G^2)[ m_j×B^eff_j(t) + α_Gm_j×{m_j×B^eff_j(t)}], with the gyromagnetic ratio γ, the Gilbert damping constant α_G, and the effective magnetic field B^eff_j(t)=-∂ℋ(t)/∂m_j. The first term represents the precession around the effective magnetic field, and the second term describes the relaxation toward the effective magnetic field. The effective magnetic field for m_j is given by B^eff_j (t) =-∂ℋ_0/∂m_j -λ_△∑_k_NN∈△m_k_NN×{E(t) ×e_jk_NN}-λ_▽∑_k_NN∈▽m_k_NN×{E(t) ×e_jk_NN}, where k_NN∈△(▽) represents the nearest-neighbor site on the upward (downward) triangle. The first term is independent of time, while the second and third terms are time-periodic with a period of T=2π/Ω. To judge whether noncoplanar states are induced by the electric field, we calculate the scalar spin chirality at each time as χ_sc(t)=1/N∑_△m_1(t)·{m_2(t)×m_3(t)}+ 1/N∑_▽m_1(t)·{m_4(t)×m_5(t)}, where N is the number of unit cells. After a long time t_0, the system reaches a non-equilibrium steady state (NESS). To characterize the NESS, we calculate an averaged scalar spin chirality, which is given by χ_sc = 1/N_ave∑_n=1^N_aveχ_sc(t_0+nΔ). Here, N_ave and Δ represent the number of samples and the time step, respectively.
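For concreteness, the right-hand side of the LLG equation above could be evaluated as in the following NumPy sketch (the function name is ours; the default γ and α_G values match those used later in the simulations):

```python
import numpy as np

def llg_rhs(m, b_eff, gamma=1.0, alpha_g=0.05):
    """Right-hand side of the LLG equation for a set of unit moments m
    (shape (n_sites, 3)) in effective fields b_eff of the same shape."""
    mxb = np.cross(m, b_eff)                      # precession term m x B
    prefac = -gamma / (1.0 + alpha_g**2)
    return prefac * (mxb + alpha_g * np.cross(m, mxb))  # + damping term
```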
§.§ Floquet analysis Since the LLG equation in Eq. (<ref>) is time-periodic, we can adopt the Floquet formalism for classical systems <cit.>, which results in a time-independent effective magnetic field and Floquet Hamiltonian. By solving the LLG equation within the Floquet formalism, one obtains a magnetic state at a local energy minimum of the Floquet Hamiltonian. The obtained magnetic state is expected to be consistent with the NESS when the frequency Ω is large and the amplitude E_0 is small compared to the energy scale of the static model. With this in mind, we perform the Floquet analysis when the frequency Ω is large enough compared to the energy scale of the static model in Eq. (<ref>). By using the high-frequency expansion in the Floquet formalism, a time-independent effective magnetic field up to order Ω^-1 is obtained as B̃^eff_j= B^eff_j,0 + ∑_n > 0 iγ [B^eff_j,-n,B^eff_j,+n]/nΩ(1+α_G^2). Here, n is an integer and B^eff_j,n = T^-1∫^T_0 dtB^eff_j(t) e^inΩ t is the Fourier component of the time-periodic effective magnetic field. The bracket [A, B] is defined as [B^eff_j,-n,B^eff_j,+n] = B^eff_j,-n×B^eff_j,+n +∑_k≠ j[ ( B^eff_k,-n·L_k ) B^eff_j,+n - ( B^eff_k,+n·L_k ) B^eff_j,-n] + 𝒪(α_G), with L_k^α = - ∑_β,ηϵ_αβη m_k^β (∂ / ∂ m_k^η) (α,β,η = x,y,z). We ignore terms proportional to α_G by assuming small α_G. The Fourier components of B^eff_j (t) in Eq. (<ref>) are given by B^eff_j,0 = -∂ℋ_0/∂m_j, B^eff_j,±1 = -λ_△∑_k_NN∈△m_k_NN× (Ẽ_± 1×e_jk_NN) -λ_▽∑_k_NN∈▽m_k_NN× (Ẽ_± 1×e_jk_NN), with Ẽ_± 1=E_0(δ,± i,0)/2; the other Fourier components are zero. By substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we obtain the time-independent effective magnetic field of the present model. A time-independent Floquet Hamiltonian ℋ^F is constructed from the relation B̃^eff_j=-∂ℋ^F/∂m_j, which is given by ℋ^F = ℋ_0 + 𝒯_△∑_△m_1·m_2×m_3 + 𝒯_▽∑_▽m_1·m_4×m_5 +𝒯_⬡∑_⬡[m^z_a(m_b×m_f)^z + m^z_b(m_c×m_a)^z +m^z_c(m_d×m_b)^z +m^z_d(m_e×m_c)^z+m^z_e(m_f×m_d)^z + m^z_f(m_a×m_e)^z], with 𝒯_△ =-√(3)γδ(λ_△ E_0)^2/[4Ω(1+α_G^2)], 𝒯_▽ =-√(3)γδ(λ_▽ E_0)^2/[4Ω(1+α_G^2)], 𝒯_⬡ =-√(3)γδλ_△λ_▽ E_0^2/[4Ω(1+α_G^2)]. The summation ∑_⬡ is taken over all the hexagons on the kagome network, and sites a, b, c, d, e, and f on each hexagon are labeled in the counterclockwise order [see Fig. <ref>(a)]. The Floquet Hamiltonian includes the field-induced three-spin interactions with 𝒯_△, 𝒯_▽, and 𝒯_⬡ in addition to the static Hamiltonian ℋ_0. In particular, the three-spin interactions with 𝒯_△ and 𝒯_▽ are directly coupled to the scalar spin chirality. The amplitude of the field-induced terms is controlled by the amplitude of the electric field E_0 and the frequency Ω, and their sign is controlled by the polarization δ. It is noted that they cannot be induced by a linearly polarized electric field with δ=0. Similar three-spin interactions were obtained in Mott insulators <cit.> and quantum spin systems <cit.>. In the following calculations, we set E_d≡ E_0λ_△=-E_0λ_▽ for simplicity, where the opposite sign for the upward and downward triangles is allowed by the 6̅m2 symmetry. When the model is reduced to the conventional kagome model under the 6/mmm symmetry, λ_△ is equivalent to λ_▽. This situation does not induce the scalar spin chirality; see the results in Sec. <ref> for details.
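The field-induced couplings 𝒯_△, 𝒯_▽, and 𝒯_⬡ are closed-form expressions in the field parameters; a minimal sketch (our own naming) evaluates them, here for the breathing choice λ_△=-λ_▽ adopted in the text, for which 𝒯_△=𝒯_▽=-𝒯_⬡:

```python
import numpy as np

def floquet_couplings(E0, Omega, delta, lam_up, lam_down, gamma=1.0, alpha_g=0.05):
    """Three-spin couplings from the O(1/Omega) high-frequency expansion."""
    prefac = -np.sqrt(3.0) * gamma * delta / (4.0 * Omega * (1.0 + alpha_g**2))
    t_up = prefac * (lam_up * E0) ** 2            # upward-triangle coupling
    t_down = prefac * (lam_down * E0) ** 2        # downward-triangle coupling
    t_hex = prefac * lam_up * lam_down * E0**2    # hexagon coupling
    return t_up, t_down, t_hex

# Breathing choice lam_up = -lam_down: t_up = t_down = -t_hex, so the
# triangle terms are not canceled by the hexagon term.
print(floquet_couplings(E0=0.5, Omega=4.0, delta=+1, lam_up=1.0, lam_down=-1.0))
```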
§ SIMULATION RESULTS In this section, we show that the circularly polarized electric field induces the scalar spin chirality in NESSs by numerically solving the LLG equation. We consider the static model in Eq. (<ref>) with a system size of N=3^2 under the periodic boundary condition [We confirm that similar results are obtained in the model with a larger system size, e.g., N=9^2.]. The energy scale and time scale are set as E_GS and E_GS^-1, respectively; E_GS is the absolute value of the ground-state energy. Since we take E_0λ_△=-E_0λ_▽, the relation 𝒯≡𝒯_△=𝒯_▽=-𝒯_⬡ holds in the Floquet Hamiltonian in Eq. (<ref>). From the viewpoint of the effective interactions, one finds that the scalar spin chirality is not induced in the dynamical model for the conventional kagome magnet with λ_△=λ_▽, since the effects of 𝒯_△, 𝒯_▽, and 𝒯_⬡ cancel out. Thus, the difference between the upward and downward triangles is essential for inducing the scalar spin chirality in kagome magnets. In the LLG equation in Eq. (<ref>) we set γ=1 and α_G=0.05. We calculate the time evolution for a long time, t_0=80000E_GS^-1-160000E_GS^-1, to obtain NESSs. After that, we calculate the averaged scalar spin chirality in Eq. (<ref>) by setting N_ave=50000 and Δ=0.002E_GS^-1. We use the open-source software DifferentialEquations.jl <cit.> to solve the LLG equation. In the following, we discuss the effect of the circularly polarized electric field on the vortex and z-FM states in Secs. <ref> and <ref>, respectively. In Sec. <ref>, we show the static-parameter dependence of the field-induced scalar spin chirality. §.§ Radiation on a vortex state First, we show the results when we radiate the circularly polarized electric field on the vortex state. We set F^z=-1, F^⊥=-0.5, and D^z=-2 in the static model. Then, the ground state becomes the vortex state in Eq. (<ref>) and the energy scale is given by E_GS=3|-F^⊥+√(3)D^z|. The red line in Fig. <ref>(a) shows the time evolution of the scalar spin chirality in Eq. (<ref>) under the electric field with the RCP, where the field parameters are taken as E_d=0.5E_GS and Ω=4E_GS. At t=0, the scalar spin chirality is zero because of the coplanar structure of the vortex state. After the electric field radiation, the scalar spin chirality is continuously induced, and the system reaches a NESS with a positive scalar spin chirality at t∼ 500E_GS^-1. We show a snapshot of the NESS in Fig. <ref>(b), where positive z components of the magnetic moment are induced in the coplanar vortex configuration. Thus, this magnetic configuration is characterized by the scalar chiral configuration in Eq. (<ref>). It is noted that the in-plane magnetization of the NESS is smaller than the out-of-plane one by a factor of around 10^-3. Meanwhile, when the polarization is reversed so as to have the LCP, a negative scalar spin chirality develops, as shown by the blue line in Fig. <ref>(a). After a long time, the system reaches a NESS with a negative scalar spin chirality, whose snapshot is shown in Fig. <ref>(c); the NESS corresponds to the scalar chiral state with negative z components of the magnetic moment. Similar to the case with the RCP, a small in-plane magnetization also appears in this case. The microscopic origin of the scalar spin chirality is attributed to the effective field-induced three-spin interactions appearing in the Floquet Hamiltonian in Eq. (<ref>); a negative 𝒯 is induced by the electric field with the RCP, and it favors the positive scalar spin chirality. Indeed, we find that the behavior of the scalar spin chirality in the dynamical model in Eq. (<ref>) is well fitted by the Floquet model in Eq. (<ref>), as detailed below. Similarly, the opposite sign of the scalar spin chirality for the LCP is owing to the positive 𝒯. These results indicate that the sign of the scalar spin chirality is controlled by the polarization of the electric field.
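The instantaneous chirality χ_sc(t) and its NESS average quoted above could be evaluated from simulation snapshots as in the following sketch (our own naming; the triangle index lists are assumed to be built from the lattice geometry):

```python
import numpy as np

def scalar_chirality(m, up_triangles, down_triangles):
    """Scalar spin chirality per unit cell: m has shape (n_sites, 3); each
    triangle is a tuple of three site indices in counterclockwise order.
    One upward (and one downward) triangle belongs to each unit cell."""
    n_cells = len(up_triangles)
    chi = 0.0
    for i, j, k in list(up_triangles) + list(down_triangles):
        chi += np.dot(m[i], np.cross(m[j], m[k]))
    return chi / n_cells

# NESS average over N_ave snapshots taken every time step Delta after t_0:
# chi_avg = np.mean([scalar_chirality(m_t, up, down) for m_t in snapshots])
```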
Let us comment on the electric field radiation on the antivortex state in Eq. (<ref>), which is stabilized by taking the positive DM interaction D^z=2; the other parameters are the same as the previous ones. In this situation, a NESS with a positive (negative) scalar spin chirality is induced by radiating the electric field with the RCP (LCP), which is similar to the radiation on the vortex state in Fig. <ref>(a). Meanwhile, the magnetic configurations of the NESSs are different from those in Figs. <ref>(b) and <ref>(c); with the RCP (LCP), negative (positive) z components of the magnetic moment are induced in the coplanar antivortex configuration. Next, we show the behavior of the scalar spin chirality while changing the amplitude E_d and the frequency Ω. In Fig. <ref>(a), we show the E_d dependence of the averaged scalar spin chirality under the electric field with the RCP and Ω=4E_GS; the red and blue lines show the results for the dynamical and Floquet models, respectively. The results for both models show good agreement except in the large-E_d region. This indicates that the effective field-induced three-spin interactions 𝒯 play an important role in inducing the scalar spin chirality χ_sc, and higher-order contributions, such as terms proportional to E_d^2Ω^-2, are almost negligible in the small-E_d region. The scalar spin chirality increases with increasing E_d proportionally to E_d^2 in the small-E_d region, which is also consistent with the expression for 𝒯 in Eqs. (<ref>)–(<ref>). When E_d is further increased, the scalar spin chirality approaches the maximum value of χ_sc=2. In Fig. <ref>(b), we show the Ω dependence of the averaged scalar spin chirality under the electric field with the RCP and E_d=0.5E_GS. Similar to the result in Fig. <ref>(a), one finds that the results for the dynamical and Floquet models are consistent with each other; the small deviation in the low-Ω region is due to the higher-order contributions in the high-frequency expansion, such as terms proportional to E_d^2Ω^-2. The behavior of the scalar spin chirality proportional to Ω^-1 is understood from the effective interaction 𝒯 in Eqs. (<ref>)–(<ref>). It is noted that a similar behavior of the scalar spin chirality to that in Fig. <ref> is obtained when the electric field radiation on the antivortex state is considered. §.§ Radiation on a z-FM state Next, we investigate the effect of the circularly polarized electric field radiation on the z-FM state. We set F^z=-1, F^⊥=-0.5, and D^z=-1.2 in the static model, where the ground state becomes the z-FM state in Eq. (<ref>) and the energy scale is given by E_GS=|6F^z|=6. We show the time evolution of the scalar spin chirality after introducing the electric field with the RCP (LCP), E_d=0.5E_GS, and Ω=4E_GS by the red (blue) line in Fig. <ref>(a). The system irradiated by the electric field with the RCP (LCP) reaches a NESS with a positive (negative) scalar spin chirality at t∼ 1500E_GS^-1. The sign of the scalar spin chirality is explained by the sign of the field-induced three-spin interactions 𝒯. This indicates that the scalar spin chirality is controlled by the polarization, which is similar to the result for the vortex (antivortex) state in Sec. <ref>. We show a snapshot of the NESS induced by the electric field with the RCP (LCP) in Fig. <ref>(b) [Fig. <ref>(c)], where a coplanar vortex configuration is additionally induced in the z-FM state with positive (negative) z components so as to have a positive (negative) scalar spin chirality.
These magnetic configurations correspond to the scalar chiral configuration, which is realized as the static ground state at the phase boundary L1 in Fig. <ref>(a). Let us comment on the initial states for Fig. <ref>. We introduce a small random deviation from the magnetic configuration in Eq. (<ref>) at t=0 as follows: we set the initial state as m_j=[√(1-(m_j^z)^2)cosϕ_j, √(1-(m_j^z)^2)sinϕ_j, ±(1-a_j)], where a_j (0≤ a_j ≤10^-3) and ϕ_j (0< ϕ_j ≤ 2π) are random variables [We confirm that similar results are obtained for smaller fluctuations, e.g., 0≤ a_j ≤10^-4 and 0≤ a_j ≤10^-5.]. This is because no effective magnetic field is generated under the electric field in Eq. (<ref>) when the magnetic configuration is initially set as m_j=(0,0,± 1). It is also noted that the positive (negative) z component of the magnetic moment in the initial state is related to the finite scalar spin chirality obtained with the RCP (LCP) electric field. In other words, no scalar spin chirality is induced when the electric field with the RCP (LCP) is applied to the z-FM state with negative (positive) z-spin polarization.
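The perturbed z-FM initial state described above can be generated directly from its definition; a minimal NumPy sketch (our own naming) is:

```python
import numpy as np

def perturbed_zfm_state(n_sites, sign=+1, a_max=1e-3, rng=None):
    """z-FM initial state with small random tilts: m_z = sign*(1 - a_j) with
    a_j uniform in [0, a_max] and a uniformly random in-plane angle phi_j."""
    rng = rng or np.random.default_rng()
    a = rng.uniform(0.0, a_max, n_sites)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_sites)
    mz = sign * (1.0 - a)
    mp = np.sqrt(1.0 - mz**2)        # in-plane magnitude keeps |m_j| = 1
    return np.stack([mp * np.cos(phi), mp * np.sin(phi), mz], axis=1)
```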
In Fig. <ref>(a), we show the E_d dependence of the averaged scalar spin chirality induced by the electric field with the RCP and Ω=4E_GS; the red and blue lines show the results for the dynamical and Floquet models, respectively. In the dynamical (Floquet) model, the scalar spin chirality is not induced in the E_d≤ 0.3 (E_d≤ 0.4) region. As E_d increases, both models exhibit a nonzero scalar spin chirality. The enhancement of the scalar spin chirality for large E_d is owing to the relatively large three-spin interactions 𝒯∝ E_d^2. This qualitative behavior is similar to that in Fig. <ref>(a), while there is no quantitative agreement between the dynamical and Floquet models. The scalar spin chirality in the dynamical model tends to be larger than that in the Floquet model, which might be attributed to the fact that contributions of higher order than E^2_dΩ^-1, included in the dynamical model, favor the scalar spin chirality. We also show the Ω dependence of the averaged scalar spin chirality induced by the electric field with the RCP and E_d=0.5E_GS in Fig. <ref>(b). At Ω=3, the scalar spin chirality is induced in both the dynamical and Floquet models. The scalar spin chirality decreases monotonically with increasing Ω due to the three-spin interactions 𝒯∝Ω^-1. By further increasing Ω, the scalar spin chirality approaches zero. Similar to the E_d dependence, the results for the dynamical and Floquet models show qualitative, but not quantitative, agreement. §.§ Static-parameter dependence Finally, we discuss the averaged scalar spin chirality induced by the circularly polarized electric field in terms of D^z and E_d. In Fig. <ref>(a) [<ref>(b)], we show the averaged scalar spin chirality with changing D^z and E_d in the dynamical (Floquet) model with F^z=-1, F^⊥=-0.5, δ=1, and Ω=4E_GS [We confirm that a similar result is obtained in the models with (F^z,F^⊥)=(-1,0) and (-1,0.5). A similar result is also obtained in the model with the LCP (δ=-1)]. The initial states are set as the vortex state for D^z<D^z*=-2.5/√(3), the scalar chiral state in Eq. (<ref>) with χ_sc=2 for D^z=D^z*, and the z-FM state for D^z>D^z*. The results for both models show a similar parameter dependence. For D^z<D^z*, the radiation of the circularly polarized electric field on the vortex state induces the scalar spin chirality irrespective of D^z and E_d. The induced scalar spin chirality becomes large in the large-E_d region for a fixed D^z<D^z*, as discussed in Sec. <ref>. In addition, one finds that the scalar spin chirality tends to become large as D^z increases. In particular, as shown in the right panels of Figs. <ref>(a) and <ref>(b), the scalar spin chirality is strongly enhanced in the vicinity of D^z*. This might be attributed to the fact that the magnetic configuration with the large scalar spin chirality is realized when the weights of the vortex and z-FM states in the scalar chiral configuration in Eq. (<ref>) are comparable to each other. In the region D^z>D^z*, the z-FM state is stabilized as the ground state within the static model. The scalar spin chirality is also induced when the amplitude of the electric field is larger than the critical value E_d^*, as discussed in Sec. <ref>. Such a tendency is found for any D^z. For a fixed E_d, the scalar spin chirality is enhanced near D^z* for the same reason as in the D^z<D^z* region; see the right panels of Figs. <ref>(a) and <ref>(b). At D^z=D^z*, the static ground state is a superposition of the vortex and z-FM states, where magnetic states with |χ_sc|≤ 2 are energetically degenerate. By radiating the electric field in such a region, this degeneracy is lifted by the three-spin interactions 𝒯; the magnetic state with χ_sc=2 has the lowest energy in the Floquet model. Such a tendency holds for any E_d. § SUMMARY AND DISCUSSION In summary, we have studied how to generate and control the scalar spin chirality by the circularly polarized electric field by exemplifying the classical kagome magnet. By taking into account the coupling of the electric field and the spin-dependent electric polarization, we have elucidated the generation of the scalar spin chirality irrespective of whether the ground-state spin configuration is collinear or coplanar. We have shown that the microscopic origin of the scalar spin chirality is the effective field-induced three-spin interactions, which can be analytically obtained in the high-frequency regime based on the Floquet formalism. Furthermore, we have shown that the sign and magnitude of the scalar spin chirality are controlled by the amplitude, frequency, and polarization of the circularly polarized electric field. We have also shown that the scalar spin chirality tends to be enhanced near the phase boundary between the coplanar vortex phase and the collinear z-FM phase. Our results indicate the possibility of inducing the scalar spin chirality by the electric field rather than the magnetic field, which would provide an alternative route for controlling topological spin textures. Let us discuss a possible experimental situation. In order to induce the scalar spin chirality by the circularly polarized electric field, considering insulating magnets is important, since the coupling between the electric field and the electric polarization plays an essential role. Assuming a magnet with an exchange interaction of 1 meV, our Floquet analysis in the high-frequency regime is valid for Ω > 1 THz; a typical magnitude of the terahertz electric field is estimated as E_0=1–10 MV/cm. Meanwhile, Ω should be smaller than a band gap on the order of eV to avoid heating effects by the electric field radiation. We thank K. Shimizu, S. Okumura, Y. Kato, and Y. Motome for fruitful discussions.
This research was supported by JSPS KAKENHI Grant Numbers JP19K03752, JP19H01834, JP21H01037, JP22H04468, JP22H00101, JP22H01183, JP23KJ0557, JP23H04869, and JP23K03288, and by JST PRESTO (JPMJPR20L8) and JST CREST (JPMJCR23O4). R.Y. was supported by the Forefront Physics and Mathematics Program to Drive Transformation (FoPM) and by a JSPS Research Fellowship.
http://arxiv.org/abs/2312.15891v1
{ "authors": [ "Ryota Yambe", "Satoru Hayami" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231226054541", "title": "Scalar spin chirality induced by a circularly polarized electric field in a classical kagome magnet" }
[email protected]
Université Paris-Saclay, CNRS, FAST, 91405, Orsay, France
Université Paris-Saclay, CNRS, FAST, 91405, Orsay, France
PoreLab, Department of Physics, Norwegian University of Science and Technology, N-7491, Trondheim, Norway
PoreLab, Department of Physics, Norwegian University of Science and Technology, N-7491, Trondheim, Norway
Université Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France

The flow of yield stress fluids in porous media presents interesting complexity due to the interplay between the non-linear rheology and the heterogeneity of the medium. A remarkable consequence is that the number of flow paths increases with the applied pressure difference, which is responsible for a non-linear Darcy law. Previous studies have focused on the protocol where the pressure difference is imposed. Here we consider instead the case of imposed flow rate, Q. In contrast to Newtonian fluids, the two types of boundary conditions have an important influence on the flow field. Using a two-dimensional pore network model we observe a boundary layer of merging flow paths of size ℓ(Q) ∼ Q^-μ/δ, where μ = 0.42 ± 0.02 and δ ≃ 0.63 ± 0.05. Beyond this layer the density of the flow paths is homogeneous and grows as Q^μ. Using a mapping to the directed polymer model we identify δ with the roughness exponent of the polymer. We also characterize the statistics of non-flowing surfaces in terms of avalanches pulled at one end.

Influence of the imposed flow rate boundary condition on the flow of Bingham fluid in porous media
Alberto Rosso
January 14, 2024

§ INTRODUCTION

The flow of non-Newtonian fluids is widely used in many geological and industrial applications. In particular, Yield Stress Fluids (YSF), fluids that require a minimum amount of stress to flow, have significant implications for several industrial processes. They include slurries, polymers, oil, and foam suspensions, all of which may have a yield stress <cit.>. One prominent application is in the extraction of heavy oil, a subject that has been extensively studied since the 1960s <cit.>. Yield stress fluids are also used to obstruct preferential paths, which can be useful in enhanced oil recovery but also in limiting the spreading of contaminants in the ground. Given the diverse applications, the flow of yield stress fluids has been the focus of numerous investigations <cit.>. A primary objective is to generalize the Darcy law for yield stress fluids, relating the total flow rate to the applied pressure drop. From a practical point of view, there are several methods for driving flow in porous media. When the flow through a core sample is driven by pumps, the flow rate is the natural control variable. However, when the flow is driven by a height difference between the fluid reservoir and the inlet, the pressure difference is the control parameter <cit.>. For Newtonian flow, it is usually assumed that the classical Darcy law is independent of the boundary condition (BC) prescribed at the inlet and outlet (imposing uniform pressure or velocity). The reason is that changing the boundary condition only affects the flow over a very short distance, e.g., a few pores. It follows that if the total volume of interest is large enough, the influence of this boundary layer is negligible.
The objective of this paper is to investigate the influence of the boundary condition for yield stress fluids. We are interested here in the Bingham rheology, for which the stress σ is related to the shear rate γ̇ by the relationship:

σ = σ_y + ηγ̇,

where σ_y is the yield stress, below which there is no flow (γ̇=0). By prescribing the pressure difference Δ P, previous studies <cit.> have demonstrated that the total flow rate exhibits three successive power-law regimes above a critical pressure Δ P_c: Q ∝ (Δ P - Δ P_c)^β, where β=1 when the pressure is either close to or very far from Δ P_c. In the intermediate regime, the exponent β is close to two <cit.>. The physical origin of this quadratic exponent lies in the progressively increasing number of channels in which there is flow as the pressure is increased. Another important point for later is the distribution of lengths of the new flow paths, where longer channels are more likely to flow than shorter ones <cit.>. All these computational and theoretical studies were conducted with prescribed pressure at the boundaries. The aim of this research is to study the flow of yield stress fluids in porous media while imposing a prescribed flow rate. More specifically, a mixed boundary condition will be used: a uniform velocity is imposed at the inlet, while a uniform pressure is applied at the outlet. This corresponds to the experimental situation where the flow into the core sample is controlled by the rate of the pumps, whereas the other end of the core sample is open and hence kept at atmospheric pressure. We describe in Section <ref> the pore network model we use. Section <ref> presents the numerical results from this model. We map the problem onto the directed polymer problem in Section <ref>, and in Section <ref> we identify the critical exponents observed in the flow problem with the corresponding exponents in the directed polymer problem. Section <ref> contains our summary and discussion of our results.

§ NUMERICAL METHOD

In this work we have used a Pore Network Model (PNM). As shown in Fig. <ref>, the porous medium is described as an ensemble of nodes i=1,2,…,N connected by links (throats). This model requires some assumptions, but it is computationally more efficient than the solution of the full Navier-Stokes equation inside each pore (e.g., <cit.>). It is worth noting that such a model has been validated against direct simulations or experiments by several authors. In the context of non-Newtonian fluids, one can cite for example <cit.> for the first opening path, or <cit.> for the flow at high flow rate. The PNM has already been presented in detail in previous articles <cit.>, but we recall the main features here. Each link is assumed to follow an approximated non-linear Poiseuille (or Buckingham-Reiner) flow law. For example, the flow in the link connecting pores i and j satisfies

δp̃_ij = p̃_i - p̃_j = κ̃_ij q̃_ij + τ̃_ij q̃_ij/|q̃_ij| .

Here δp̃_ij represents the pressure difference between nodes i and j. The link is a tube with radius r_ij and length l_ij. As a consequence, its hydraulic resistivity is κ̃_ij = (8 μ l_ij)/(π r_ij^4) and its pressure threshold is τ̃_ij = (2 σ_y l_ij)/r_ij. The unknown variables are the pressure at each node and the flow rate in each link. In addition to the constitutive equation, Eq. (<ref>), for each link, mass conservation requires ∑_j q̃_ij = 0 at each node i, where the sum is over all neighbouring links j.
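As an illustration of Eq. (<ref>), the short sketch below (function names ours) inverts the link law for a single throat: the link carries no flow until the local pressure drop exceeds its threshold, and responds linearly above it. It also evaluates the geometric quantities κ̃_ij and τ̃_ij entering the model.

```python
import numpy as np

def link_flow(dp, kappa, tau):
    """Invert dp = kappa*q + tau*sign(q): no flow until |dp| > tau,
    linear (Poiseuille-like) response above the threshold."""
    return np.sign(dp) * np.maximum(np.abs(dp) - tau, 0.0) / kappa

def link_properties(r, l=1.0, mu=1.0, sigma_y=1.0):
    """Hydraulic resistivity and pressure threshold of a tube of radius r, length l."""
    kappa = 8.0 * mu * l / (np.pi * r ** 4)
    tau = 2.0 * sigma_y * l / r
    return kappa, tau

# A single link just below and above its threshold:
kappa, tau = link_properties(r=0.25)
print(link_flow(np.array([0.5 * tau, 2.0 * tau]), kappa, tau))  # [0., tau/kappa]
```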
Usually two boundary conditions are considered: (i) the pressure-imposed boundary condition (BC), where the pressures of all the inlet and outlet nodes are imposed; (ii) the flow-rate-imposed BC, where the flow of every inlet node is imposed to be equal to q̃_in and the pressure of all the outlet nodes is set to zero. Note that in this case the pressure of the inlet nodes is in general non-uniform. The pressure-imposed BC has been studied in <cit.>. Here we study the flow-rate-imposed BC. To take into account the disorder of the system, the radius is drawn randomly from a uniform distribution with mean r_0 and standard deviation σ_r. The system size is L × W nodes, where L is in the direction of the flow. We have set the variables to the values r_0=0.25, σ_r=0.15, τ_y=1, l=1. In the following, we use the dimensionless flow rate and pressure variables q = 4μq̃/(τ_y π r_0^3) and P = P̃ r_0/(2 τ_y l). The solution of the system of nonlinear equations is a set of flow rates {q_ij}. The solution can be determined using a variational approach that minimizes a functional, which in this case reads

{q_ij} = min_{q_ij}∈Ω ∑ ( κ_ij q_ij^2/2 + τ_ij |q_ij| ),

where Ω is the ensemble of the admissible solutions satisfying conservation of mass and the prescribed flow at the inlet. Our numerical results are obtained using a modified version of this variational approach, called the augmented Lagrangian method and described in Talon and Hansen <cit.>. Eq. (<ref>) also allows one to obtain important insight into the solution when Q→ 0 or Q→∞.

§ NUMERICAL RESULTS

§.§ Flow curve

We first compute the macroscopic flow-pressure curve, as shown in Fig. <ref>. For both the imposed pressure and imposed flow BC, we find that a minimal pressure is needed to observe a finite flow. Remarkably, the minimal pressure for the imposed flow BC, P_0, is always larger than the one for the imposed pressure difference BC, P_c. Moreover, as shown in the inset of Fig. <ref>, and similarly to the imposed pressure difference BC <cit.>, the imposed flow BC displays three flow regimes: at low and high flow rates the relationship between Q and P - P_0 is linear. In the intermediate regime a power law is observed:

Q ∝ (P - P_0)^β,

with β ≃ 1.65, which is smaller than the exponent measured for the pressure-imposed BC, which is close to 2 <cit.>.

§.§ Flow channels

The origin of the flow regimes for imposed flow rate is similar to that for the pressure-difference-imposed BC: it is due to the increase in the number of paths where fluid is flowing (flowing paths). In Fig. <ref>, we depict the flow behavior corresponding to different imposed total flow rates. When the flow rate is extremely low (left panel), due to the flow-imposed BC, we observe a flow path starting from each inlet node. This is in contrast with the pressure-imposed BC, where only a single flow path is observed. As shown in Fig. <ref>, the paths eventually merge with others as they progress through the system. The key feature of the low-flow-rate regime is that these paths can merge but never split. If the system is long enough, namely if L≫W, the flow paths eventually converge into a single channel, as for the imposed pressure difference BC. As the total imposed flow rate increases (see middle panel of Fig. <ref>), we observe that the flow paths begin to split, leading to an increase in the number of flow channels downstream, which is at the origin of the power-law regime (similarly to the pressure-imposed BC). For very large flow rates, all the paths are open and the Newtonian limit is recovered. To quantify these trends, we plot in Fig.
<ref> (Left) the number of flowing channels N_channel as a function of the distance, x, from the inlet. At low Q the number of channels decreases with distance as a power law x^-δ, with δ ≃ 0.63 ± 0.05:

N_channel(x ≥ 1, Q→ 0) ≃ (W/2) x^-δ.

As the flow rate is increased, the power-law decay holds within a boundary layer of length ℓ_BL(Q). Above this length, N_channel reaches a plateau that grows as Q^μ. Note that the existence of a finite plateau is a necessary condition to enable a homogeneous description of the flow. Moreover, a Family-Vicsek scaling <cit.> holds for N_channel(x, Q) (see Fig. <ref>):

N_channel(x, Q) = Q^μ f(x Q^μ/δ),

with μ = 0.42 ± 0.02 and δ ≃ 0.63 ± 0.05. The function f scales as f(z) ∼ z^-δ for small z and reaches a constant value at large z. It follows that the boundary layer decreases with the flow rate as ℓ_BL(Q) ∼ Q^-μ/δ. In the following, we establish a mapping between the problem of the onset of the flow (very-low-rate regime) and the model of the directed polymer in random media. This mapping will allow us to predict the exponent δ and explain the different minimal pressures observed by changing the BC.

§ ONSET OF THE FLOW: MAPPING TO THE DIRECTED POLYMER PROBLEM

Using the flow equations we can write an equation for the energy dissipation:

Q Δ P = ∑ q_ij δp_ij = ∑ ( κ_ij q_ij^2 + τ_ij |q_ij| ).

This equation can be recast in the following form:

Q = κ^*(Q) [ Δ P - P^*(Q) ],

with

κ^*(Q) = ( 1/Q^2 ∑ κ_ij q_ij^2 )^-1 and P^*(Q) = 1/Q ∑ τ_ij |q_ij|.

Here κ^*(Q) is the effective mobility, and P^*(Q) is the apparent pressure threshold, so called because it tends to the minimal threshold pressure when Q → 0. Fig. <ref> shows the evolution of P^*(Q) and κ^*(Q). Both functions increase from a plateau value at low Q, respectively P_0 and κ_0, to a higher one, respectively P_∞ and κ_∞. This evolution is due to the increase of the number of flowing channels with Q. It is possible to find explicit expressions for P_0 and κ_∞ using the variational approach of Eq. (<ref>).

* For Q→∞, the flow field converges to the solution {q} = min_{q}∈Ω ∑ (κ_ij q_ij^2/2), equivalent to the Newtonian flow. Hence, the mobility κ_∞ corresponds to the Newtonian mobility, namely the maximal admissible mobility:

κ_∞ = lim_Q→∞ max_{q}∈Ω ( 1/Q^2 ∑ κ_ij q_ij^2 )^-1.

* For Q→ 0, the flow field converges to the solution {q} → min_{q}∈Ω ∑ τ_ij |q_ij|. An exact expression for P_0 follows:

P_0 = lim_Q→ 0 min_{q}∈Ω ∑ τ_ij |q_ij| / Q.

This expression allows us to map the problem of the onset of the flow onto the directed polymer model. For the flow-imposed BC with identical flow rates q_in for all the inlet nodes, the flow structure consists of directed flow paths that start from all inlet nodes. Indeed, Eq. (<ref>) prescribes that the flow rate of a particular link is equal to n_paths q_in, where n_paths is the number of paths passing through this link (see Fig. <ref>). Hence, the sum in Eq. (<ref>) can be rewritten as a sum over paths of the thresholds along each path, each path carrying a flow rate q_in, divided by the total flow rate. Each path can be considered as a directed polymer with energy given by the sum of its thresholds. As a consequence, the minimal pressure P_0 corresponds to the mean value of the ground-state energies of the W directed polymers starting at each inlet node. On the other hand, for the pressure-imposed BC, a single channel is expected, and the minimal pressure P_c corresponds to the minimal ground-state energy among the W polymers previously considered. It is then clear that we expect P_c < P_0. In Fig.
<ref> we plot the flow field obtained using the algorithm of <cit.> to determine the directed polymers. The structure of the flow paths is identical to the one found at low flow rate in Fig. <ref> (left). Moreover, the corresponding mean value of the ground-state energies (horizontal line in Fig. <ref>) correctly predicts the value of P_0 in Fig. <ref> (left).

§ CRITICAL EXPONENTS AND NON-FLOWING REGIONS

In the previous section we identified the flowing channels at the onset of the flow with the ground states of directed polymers in the random (porous) medium. This mapping gives access to an efficient algorithm for the flowing channels, but it also allows us to identify the critical exponent δ introduced previously with the roughness exponent ζ=2/3 of the directed polymer. To do this, we first characterize the statistics of the non-flowing regions using a scaling relation originally introduced for the Oslo sandpile model <cit.>. This argument is applicable to the avalanche dynamics of elastic interfaces in random media pulled at one end <cit.>. In particular, from Fig. <ref>, we consider the non-flowing portions bounded by flowing channels. Our goal is to determine their surface, S, their length ℓ, and their width w. Similar to avalanches, these portions display scale-free statistics,

π_1(S) = S^-τ_S g_1(S/S_max), π_2(ℓ) = ℓ^-τ_ℓ g_2(ℓ/ℓ_max).

Here τ_S and τ_ℓ are the avalanche exponents, with values in the interval (1,2). Both distributions have a finite-size cut-off (S_max ∼ L^1+ζ, with ζ=2/3, and ℓ_max ∼ L), above which both g_1(x) and g_2(x) decay rapidly. We can determine the mean avalanche surface ⟨S⟩ from

⟨S⟩ ∼ S_max^2-τ_S ∼ L^(1+ζ)(2-τ_S).

To close this relation we adapt the argument employed to characterize depinning avalanches driven at a tip <cit.>, originally introduced in the context of the Oslo sandpile model, where grains are added on one side only <cit.>. Indeed, here we are determining W ground-state polymers. The first polymer is constrained to start at the inlet (x,y)=(0,0), the last one at the inlet (x,y)=(0,W). The total surface bounded by these two channels is ≈ WL, and we can write

W⟨S⟩ ≈ WL ⟹ L^(1+ζ)(2-τ_S) ≈ L,

from which we get τ_S = 2 - 1/(1+ζ). The exponent τ_ℓ can be derived using S ∼ ℓ^1+ζ and S^-τ_S dS ∼ ℓ^-τ_ℓ dℓ. From this we get τ_ℓ = 1+ζ. Let us finally remark that the number of channels at a given distance, N_channel(x), corresponds to the number of avalanches whose length is longer than x, thus

N_channel(x) ∼ W Prob(ℓ>x) ∼ W x^-τ_ℓ+1.

Hence, we can identify δ = τ_ℓ - 1 = ζ, which is consistent with the observation. To summarize, the mapping to the ground-state polymers allows us to determine

τ_S = 2 - 1/(1+ζ) = 7/5, τ_ℓ = 1+ζ = 5/3, δ = ζ = 2/3.

These exponents are in good agreement with those measured in Fig. <ref>. It is worth noting that such a mapping cannot determine β and μ, because they depend on the channels opening at higher flow rates. It is also worth mentioning that at very low flow rate, the power-law relationship Eq. (<ref>) indicates that the boundary layer should be infinite. However, due to the finite size of the system, a plateau is necessarily reached once N_channel=1. It follows that there is a maximum length ℓ_BL^max for the boundary layer, which depends on the domain width as ℓ_BL^max ∝ W^1/δ. This suggests that in order to observe the flow paths beyond the boundary layer for low Q, one needs to consider a domain with an aspect ratio other than one (L>W), unlike the example in Fig. <ref>.
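To illustrate the mapping, the sketch below computes, by a backward transfer-matrix recursion, the ground-state energies of the directed polymers starting from every inlet node of an L×W lattice. Site disorder and periodic transverse boundaries are simplifying assumptions of ours; the simulations in the text use the algorithm of <cit.> on the pore network geometry. The mean and the minimum of the W energies estimate P_0 and P_c, respectively, so that P_c < P_0 holds by construction.

```python
import numpy as np

def polymer_ground_states(t):
    """t[x, y]: threshold collected when the polymer visits site (x, y).
    Directed steps x -> x+1 with transverse moves y -> y-1, y, y+1
    (periodic in y). Returns the minimal path energy from each inlet node."""
    L, W = t.shape
    E = t[L - 1].copy()
    for x in range(L - 2, -1, -1):
        best = np.minimum(np.minimum(np.roll(E, 1), np.roll(E, -1)), E)
        E = t[x] + best
    return E

rng = np.random.default_rng(2)
t = rng.uniform(0.5, 1.5, size=(512, 128))  # hypothetical threshold disorder
E0 = polymer_ground_states(t)
P0 = E0.mean()  # minimal pressure, flow-rate-imposed BC
Pc = E0.min()   # minimal pressure, pressure-imposed BC
print(P0, Pc)   # Pc < P0
```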
§ DISCUSSION/CONCLUSION

In this paper we have analysed the influence of the type of boundary condition on the flow of a Bingham fluid in porous media. In contrast to Newtonian fluids, the type of the applied boundary condition has a very strong influence on the flow, especially at low flow rates. One consequence is the difference in the minimal pressure for the onset of flow, which is smaller for the pressure-imposed BC than for the flow-rate-imposed BC. Another consequence is that, at low velocity, imposing the pressure yields a single flowing channel, whereas imposing the flow rate produces a structure of merging channels. There is therefore a boundary layer that spreads over a long distance. Its extension increases according to a power law as the flow rate approaches zero, and increases with the system width. It follows, for instance, that the boundary layer, but also the difference between the two pressures of flow onset, depend on the system aspect ratio. This point is particularly important in the context of homogenisation. Indeed, since homogenisation describes the system as homogeneous and derives relations between averaged quantities, the boundary conditions should have as little influence as possible. This appears to be extremely difficult in this problem, because the boundary layer diverges at low flow rates, but also because it depends on the aspect ratio of the system under consideration. Finally, we have analysed the flow structure in the zero-flow limit. We have shown that the flow structure can be mapped to the avalanche dynamics of an interface pulled at one end in a random medium. This avalanche dynamics is characterized by power laws whose exponents (τ_S, τ_ℓ, δ, ζ) can be predicted. However, the analogy cannot predict the two exponents β and μ, which characterize, respectively, the increase of the flow rate with pressure and the evolution of the boundary layer with the flow rate.

There are various possibilities for extending this study. Firstly, this problem could be extended to a 3D pore network. In this case, the physics is certainly similar, with the inlet flow channels merging at low flow rates. However, one can expect differences in the scaling exponents. One could also consider other flow-driving conditions, such as the application of a homogeneous body force like gravity. While this condition should not differ too much from the pressure-imposed BC, one can note that there is also a boundary layer associated with it, as can be seen in Fig. <ref> (right). It would be interesting to analyse the evolution of this boundary layer.

[Coussot(2005)] P. Coussot, Rheometry of pastes, suspensions, and granular materials: applications in industry and environment (John Wiley and Sons, 2005).
[Entov(1967)] V. Entov, Prikl. Mat. Mekh. 31, 820 (1967).
[Pascal(1981)] H. Pascal, Acta Mechanica 39, 207 (1981).
[Al-Fariss(1985)] T. Al-Fariss, SPE 14683 (1985).
[Chase and Dachavijit(2005)] G. Chase and P. Dachavijit, Rheologica Acta 44, 495 (2005).
[Chen et al.(2005)] M. Chen, W. Rossen, and Y. C. Yortsos, Chem. Eng. Sci. 60, 4183 (2005).
[Sochi and Blunt(2008)] T. Sochi and M. J. Blunt, J. Pet. Sci. Eng. 60, 105 (2008).
[Chevalier et al.(2013)] T. Chevalier, C. Chevalier, X. Clain, J. Dupla, J. Canou, S. Rodts, and P. Coussot, J. Non-Newtonian Fluid Mech. 195, 57 (2013).
[Talon and Bauer(2013)] L. Talon and D. Bauer, Eur. Phys. J. E 36, 139 (2013).
[de Castro et al.(2014)] A. R. de Castro, A. Omari, A. Ahmadi-Sénichault, and D. Bruneau, Transp. Porous Media 101, 349 (2014).
[Nash and Rees(2016)] S. Nash and D. A. S. Rees, Transp. Porous Media 116, 1073 (2016).
[Shahsavari and McKinley(2016)] S. Shahsavari and G. H. McKinley, J. Non-Newtonian Fluid Mech. 235, 76 (2016).
[Moura et al.(2020)] M. Moura, K. J. Måløy, E. G. Flekkøy, and R. Toussaint, Front. Phys. 7, 217 (2020).
[Roux and Herrmann(1987)] S. Roux and H. J. Herrmann, Europhys. Lett. 4, 1227 (1987).
[Chevalier and Talon(2015)] T. Chevalier and L. Talon, Phys. Rev. E 91, 023011 (2015).
[Bauer et al.(2019)] D. Bauer, L. Talon, Y. Peysson, H. B. Ly, G. Batôt, T. Chevalier, and M. Fleury, Phys. Rev. Fluids 4, 063301 (2019).
[Liu et al.(2019)] C. Liu, A. De Luca, A. Rosso, and L. Talon, Phys. Rev. Lett. 122, 245502 (2019).
[Schimmenti et al.(2023)] V. M. Schimmenti, F. Lanza, A. Hansen, S. Franz, A. Rosso, L. Talon, and A. De Luca, Phys. Rev. E 108, L023102 (2023).
[Fraggedakis et al.(2021)] D. Fraggedakis, E. Chaparian, and O. Tammisola, J. Fluid Mech. 911, A58 (2021).
[Lopez et al.(2003)] X. Lopez, P. H. Valvatne, and M. J. Blunt, J. Colloid Interface Sci. 264, 256 (2003).
[Talon and Hansen(2020)] L. Talon and A. Hansen, Front. Phys. 7, 225 (2020).
[Family and Vicsek(1985)] F. Family and T. Vicsek, J. Phys. A: Math. Gen. 18, L75 (1985).
[Drossel and Kardar(1995)] B. Drossel and M. Kardar, Phys. Rev. E 52, 4841 (1995).
[Paczuski and Boettcher(1996)] M. Paczuski and S. Boettcher, Phys. Rev. Lett. 77, 111 (1996).
[Aragón et al.(2016)] L. E. Aragón, A. B. Kolton, P. Le Doussal, K. J. Wiese, and E. A. Jagla, Europhys. Lett. 113, 10002 (2016).
http://arxiv.org/abs/2312.16639v1
{ "authors": [ "Laurent Talon", "Andreas Andersen Hennig", "Alex Hansen", "Alberto Rosso" ], "categories": [ "physics.flu-dyn", "cond-mat.soft" ], "primary_category": "physics.flu-dyn", "published": "20231227170038", "title": "Influence of the imposed flow rate boundary condition on the flow of Bingham fluid in porous media" }
http://arxiv.org/abs/2312.17277v1
{ "authors": [ "Harshna Balhara", "J. K. Singh", "Emmanuel N. Saridakis" ], "categories": [ "gr-qc", "astro-ph.CO", "hep-th" ], "primary_category": "gr-qc", "published": "20231227192825", "title": "Observational constraints and cosmographic analysis of $f({T},{T}_{G})$ gravity and cosmology" }
Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, I-20133 Milano, Italy
INFN, Sezione di Milano, I-20133 Milano, Italy
[email protected]

A robust hybrid receiver for binary phase-shift keying discrimination in the presence of phase noise
Michele N. Notarnicola and Stefano Olivares[Corresponding author.]
January 14, 2024

We address the problem of coherent state discrimination in the presence of phase diffusion. We investigate the role of the hybrid near-optimum receiver (HYNORE) we proposed in [J. Opt. Soc. Am. B 40, 705-714 (2023)] in the task of mitigating the noise impact. We prove the HYNORE to be a robust receiver, outperforming the displacement photon-number-resolving (DPNR) receiver and beating the standard quantum limit in particular regimes. We introduce the maximum tolerable phase noise as a figure of merit for the receiver robustness and show that the HYNORE increases its value with respect to the DPNR receiver.

§ INTRODUCTION

Quantum discrimination of coherent states is a central problem for quantum technologies, as coherent states represent the typical information carriers in both fiber-optics and deep-space communications <cit.>. Within this field, the goal is to design a quantum receiver minimizing the decision error probability <cit.>, which is a possible resource to enhance both quantum communications <cit.> and quantum key distribution <cit.>. These kinds of receivers provide a genuine quantum advantage over the standard receivers, based on either homodyne or double-homodyne detection. Indeed, while conventional receivers are limited by the shot-noise limit, or standard quantum limit (SQL), the quantum decision theory developed by Helstrom identifies the optimum receiver achieving the minimum error probability allowed by quantum mechanics, namely the Helstrom bound <cit.>.

The decision problem has been widely studied for binary phase-shift keying (BPSK) discrimination. The optimum receiver was designed by Dolinar <cit.>, and several proposals of more feasible schemes have been advanced, employing both single-shot <cit.> and feed-forward strategies <cit.>. Among them, strategies employing suitable displacement operations, like the Kennedy receiver <cit.> and the hybrid near-optimum receiver (HYNORE) <cit.>, proved themselves to be near-optimum, beating the SQL and providing useful detection schemes to enhance information transfer over attenuating (Gaussian) channels <cit.>. Besides this, the performance of quantum receivers in the presence of noisy non-Gaussian channels is another fundamental task towards the realistic implementation of quantum optical communications. A paradigmatic example is provided by phase diffusion noise, which represents the most detrimental source of noise for phase-shift encoding <cit.>, destroying the coherence and the purity of the employed coherent pulses <cit.>. The impact of phase diffusion has been investigated in quantum metrology <cit.>, quantum interferometry <cit.>, quantum communications <cit.> and quantum state discrimination <cit.>. Remarkably, the Kennedy receiver is no longer near-optimum for BPSK discrimination over a phase diffusion channel, and homodyne detection approaches the Helstrom bound in the regime of large noise <cit.>.
To overcome this fundamental limitation, DiMario and Becerra adopted the displacement photon-number-resolving (DPNR) receiver <cit.>, that is, a Kennedy setup employing photon-number-resolving (PNR) detectors with finite resolution M instead of on-off photodetectors, and also optimized the encoding strategy, enhancing robustness to both channel and detection noise <cit.>.

In this paper we investigate the performance of the HYNORE <cit.> in the presence of phase diffusion. The HYNORE is a hybrid receiver that combines the setups of weak-field homodyne, or homodyne-like (HL), detection <cit.> and the DPNR receiver. Its basic principle is to split the encoded signal at a beam splitter, implement HL detection on the reflected branch and, according to the obtained result, perform a suitable displacement operation on the transmitted side. In ideal conditions it behaves as a near-optimum receiver, beating both the DPNR receiver and the SQL, and it also shows more robustness to detection noise and visibility reduction <cit.>. Here, we address its role in mitigating the detriments of phase diffusion in BPSK discrimination and show that it outperforms the DPNR receiver in different regimes, in particular in the high-energy and large-noise limits.

The structure of the paper is the following. In Sec. <ref> we briefly recall the main features of coherent state discrimination, while Sec. <ref> deals with the problem of discrimination in the presence of phase diffusion, presenting in detail the DPNR receiver proposed in Refs. DiMario2018 and DiMario2019. Then, Sec. <ref> addresses the performance of the HYNORE for phase-diffused coherent states and proves it to outperform the DPNR scheme. Finally, in Sec. <ref> we draw some conclusions.

§ BASICS OF COHERENT STATE DISCRIMINATION

In the conventional BPSK encoding in the absence of phase diffusion, the goal is to discriminate between two symbols k=0,1, encoded into the two coherent states <cit.>:

|α_k⟩ = |e^iπ(k+1) α⟩,

with α>0, α^2 being the mean energy, generated by a quantum source with equal a priori probabilities. To this aim, we shall design a quantum receiver, described by a binary positive operator-valued measure (POVM) {Π_0, Π_1}, associated with the error probability P_err = [P(0|1)+P(1|0)]/2, where P(j|k) = ⟨α_k|Π_j|α_k⟩, j,k=0,1, is the probability of inferring symbol j if the state |α_k⟩ was measured. The minimum error probability allowed by quantum mechanics, namely the Helstrom bound <cit.>, in the noiseless case is equal to P_Hel = [1-√(1-exp(-4α^2))]/2, and it is achieved by the Dolinar receiver <cit.>. Nevertheless, the Dolinar setup requires optical feedback and continuous-time measurements, thus making its practical implementation quite challenging. On the other hand, standard receivers reach the SQL, obtained with homodyne detection, with the associated error probability P_SQL = [1-erf(√2 α)]/2. Given this scenario, the task of quantum state discrimination theory is to design a feasible receiver outperforming the SQL and being as close as possible to the Helstrom bound. In the absence of phase diffusion, two paradigmatic examples of feasible receivers are provided by the Kennedy receiver <cit.> and the HYNORE <cit.>.

In the Kennedy receiver, or displacement receiver, the incoming signal |α_k⟩ undergoes the displacement operation D(α) <cit.>, followed by on-off detection. The displacement may be implemented in practice by letting the signals interfere with a suitable intense local oscillator at a beam splitter with large transmissivity <cit.>.
We note that D(α) performs a nulling operation, leading to the mapping:

|-α⟩ → |0⟩, |α⟩ → |2α⟩,

and, in particular, sending |α_0⟩ into the vacuum. Therefore, BPSK is turned into on-off keying (OOK), and on-off detection provides the optimal measurement choice. The resulting error probability is equal to P_K = exp(-4α^2)/2, proving the receiver to be near-optimum, namely proportional to the Helstrom bound, in the high-energy limit α^2 ≫ 1. Moreover, it also beats the SQL for α^2 > α^2_K, with α^2_K ≈ 0.38 <cit.>.

On the other hand, the HYNORE is a hybrid near-optimum receiver based on the combination of the Kennedy setup and weak-field homodyne, or HL, detection. Its functioning is briefly presented in the next subsection.

§.§ The hybrid near-optimum receiver

The HYNORE employs HL detection, that is, a homodyne setup where the conventional p-i-n photodiodes and high-intensity local oscillator (LO) are replaced with PNR detectors and a low-intensity LO, respectively <cit.>. Its implementation is sketched in the inset of Fig. <ref>. First, the incoming signal impinges on a balanced beam splitter together with a low-intensity LO |z⟩, z>0; thereafter we implement PNR detection on both output modes, retrieving the outcomes n and m, and finally we consider the difference photocurrent Δ = n - m. Realistic PNR detectors have a finite resolution M, as they can resolve up to M photons. In turn, they are described by the POVM {Π_0, Π_1, …, Π_M} with M+1 elements, where Π_n = |n⟩⟨n| for n=0,…,M-1, and Π_M = 𝕀 - ∑_j=0^M-1 |j⟩⟨j|. For clarity, from now on we refer to them as PNR(M) detectors. In particular, PNR(1) corresponds to on-off detection, whereas ideal photodetectors are referred to as PNR(∞). Accordingly, we have n,m=0,…,M and the difference photocurrent takes only integer values in the range -M ≤ Δ ≤ M. For an input coherent signal |γ⟩, γ∈ℂ, the HL probability distribution reads <cit.>:

S_Δ(γ) = ∑_n,m=0^M q_n(μ_+(γ)) q_m(μ_-(γ)) δ_(n-m),Δ,

where

μ_±(γ) = |γ ± z|^2/2

are the mean numbers of photons, or energies, on the two output branches, δ_jk is the Kronecker delta, and

q_n(μ) = e^-μ μ^n/n! for n < M, q_M(μ) = 1 - e^-μ ∑_j=0^M-1 μ^j/j!

is the probability of getting outcome n after PNR(M) detection <cit.>.

The HL setup may be suitably exploited to design a hybrid receiver, the HYNORE, whose structure is reported in Fig. <ref> and which improves on the performance of the Kennedy receiver. The underlying principle is to implement HL detection on a portion of the encoded signal in order to improve the displacement operation of the Kennedy scheme. Both the fraction of the signal directed to the HL detector and the amplitude of the LO are optimized to minimize the overall error probability. In more detail, the state |α_k⟩ is split at a beam splitter of transmissivity τ, such that

|α_k⟩ → |α_k^(r)⟩ ⊗ |α_k^(t)⟩ ≡ |-√(1-τ) α_k⟩ ⊗ |√τ α_k⟩.

Firstly, we perform HL detection on the reflected signal |α_k^(r)⟩. The value of the obtained outcome Δ provides us with some information on the field phase, which can be exploited to choose the sign of a displacement operation D(±√τ α) to be implemented on the transmitted pulse |α_k^(t)⟩. Indeed, if Δ ≥ 0 it is more likely that state |α_0⟩ was sent, therefore we apply D(+√τ α), while in the opposite case we choose D(-√τ α). Eventually, we perform on-off detection, which may still be implemented with PNR(M) detectors.
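The distribution S_Δ(γ) is straightforward to evaluate numerically. The sketch below (a minimal illustration, with function names of ours) builds the PNR(M) statistics q_n(μ) and convolves them into the difference-photocurrent distribution.

```python
import numpy as np
from scipy.stats import poisson

def pnr_probs(mu, M):
    """PNR(M) outcome probabilities q_n(mu) for mean photon number mu:
    Poissonian for n < M, with the remaining weight lumped into n = M."""
    p = poisson.pmf(np.arange(M + 1), mu)
    p[M] = 1.0 - poisson.cdf(M - 1, mu)
    return p

def hl_distribution(gamma, z, M):
    """Difference-photocurrent distribution S_Delta(gamma) of Eq. (<ref>);
    the returned array is indexed so that S[Delta + M] = S_Delta."""
    mu_p = 0.5 * abs(gamma + z) ** 2
    mu_m = 0.5 * abs(gamma - z) ** 2
    joint = np.outer(pnr_probs(mu_p, M), pnr_probs(mu_m, M))  # joint p(n, m)
    S = np.zeros(2 * M + 1)
    for n in range(M + 1):
        for m in range(M + 1):
            S[n - m + M] += joint[n, m]
    return S

print(hl_distribution(gamma=0.8, z=1.0, M=3).sum())  # normalized: 1.0
```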
The final decision rule is depicted in Table <ref>. The error probability then reads:

P_HY = min_τ,z { e^-4τα^2/2 [ ∑_Δ=-M^-1 S_Δ(α_0^(r)) + ∑_Δ=0^M S_Δ(α_1^(r)) ] },

optimized over the transmissivity τ and the LO amplitude z of the HL scheme. As demonstrated in Refs. Notarnicola2023-HY and Notarnicola2023-HFF, the HYNORE is near-optimum, it outperforms the Kennedy receiver for all energies, P_HY ≤ P_K, and it beats the SQL for α^2 > α^2_HY(M), where α^2_HY(M) is a decreasing function of the resolution M such that α^2_HY(M) < α^2_K.

§ QUANTUM DISCRIMINATION OF PHASE DIFFUSED COHERENT STATES

In the presence of a phase diffusion channel, the problem of BPSK discrimination is remarkably different with respect to the scenario discussed in Sec. <ref>. In fact, the encoded coherent states evolve according to a suitable master equation <cit.>, equivalent to the completely positive (CP) map ℰ_σ, such that:

|α_k⟩⟨α_k| → ρ_k = ∫_ℝ dϕ g_σ(ϕ) |α_k e^-iϕ⟩⟨α_k e^-iϕ|,

where g_σ(ϕ) = exp[-ϕ^2/(2σ^2)]/√(2πσ^2) is a Gaussian distribution whose standard deviation σ>0 quantifies the amount of noise. That is, the overall effect of phase diffusion is the application of a Gaussian-distributed random phase shift to the incoming signal, resulting in an overall non-Gaussian CP map. The map (<ref>) not only provides a model for light propagation through phase-fluctuating quantum channels <cit.>, but may also be adopted in other contexts. An intriguing case is local-local oscillator continuous-variable quantum key distribution (LLO-CVQKD), where a sender and a receiver share a reference phase via quantum estimation on a coherent probe <cit.>. The phase value is estimated after double-homodyne detection, and the obtained result is assumed to fluctuate around the actual value according to a Gaussian distribution. Another example is the modeling of the phase fluctuations of a realistic laser source <cit.>. More precisely, the phase noise of laser radiation is a diffusive stochastic process characterized by both a phase drift and phase fluctuations. While the drift can be controlled by monitoring part of the radiation emitted by the source <cit.>, mitigating phase fluctuations is a more challenging task <cit.> and, in principle, they shall be taken into account in realistic quantum protocols.

Given the previous considerations, in the presence of BPSK the effect of phase diffusion is detrimental: it reduces both the coherence and the purity of the encoded coherent states, as emerges from Fig. <ref>, reporting the phase-space representation of the quantum states before and after the noisy channel <cit.>. In turn, the quantum receiver has to discriminate between the two mixed phase-diffused states ρ_0 and ρ_1. The Helstrom bound becomes <cit.>:

P_Hel = 1/2 [1 - 1/2 ||Λ||_1],

with Λ = ρ_0 - ρ_1 and ||·||_1 denoting the trace norm. The corresponding SQL reads <cit.>:

P_SQL = 1/2 [ ∫_0^∞ dx p_hom(x|0) + ∫_-∞^0 dx p_hom(x|1) ],

where

p_hom(x|k) = ∫_ℝ dϕ g_σ(ϕ) exp[-(x - 2α_k cosϕ)^2/2]/√(2π)

is the homodyne probability of obtaining outcome x given the state |α_k⟩, expressed in shot-noise units. As regards the Kennedy receiver, the presence of phase noise is detrimental, and its performance is severely degraded for σ>0 <cit.>.
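For reference, the SQL above can be evaluated with a single numerical integral: for each value of the diffused phase the homodyne outcome is Gaussian, so the x-integrals reduce to error functions and only the average over ϕ remains. A minimal sketch (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def sql_error_probability(alpha, sigma):
    """Phase-diffused homodyne (SQL) error probability for BPSK.
    Given phi, x is Gaussian with mean 2*alpha_k*cos(phi) and unit variance
    (shot-noise units); by symmetry the two error terms coincide."""
    def integrand(phi):
        g = np.exp(-phi ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
        return g * 0.5 * (1.0 - erf(np.sqrt(2.0) * alpha * np.cos(phi)))
    return quad(integrand, -np.inf, np.inf)[0]

print(sql_error_probability(alpha=1.0, sigma=0.1))
```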
To mitigate the noise impact, DiMario and Becerra proposed the DPNR receiver <cit.>, namely a Kennedy setup where PNR(M) detectors replace the on-off ones. In fact, for σ>0 the displacement operation D(α) maps the states ρ_k into ρ̃_k = D(α) ρ_k D(α)^†, equal to:

ρ̃_k = ∫_ℝ dϕ g_σ(ϕ) |√(μ_k(α^2,ϕ)) e^-iϕ/2⟩⟨√(μ_k(α^2,ϕ)) e^-iϕ/2|,

where

μ_0(α^2,ϕ) = 4α^2 sin^2(ϕ/2), μ_1(α^2,ϕ) = 4α^2 cos^2(ϕ/2).

Differently from the noiseless case, the nulling operation implemented by D(α) is not perfect, and the output state ρ̃_0 still contains some photons. Thereby, on-off detection is no longer the most appropriate strategy, and the DPNR setup is expected to outperform the Kennedy receiver. The PNR(M) probability distribution of the displaced states ρ̃_k reads:

p_σ(n|k) = ∫_ℝ dϕ g_σ(ϕ) q_n(μ_k(α^2,ϕ)), (n=0,…,M),

with the probability q_n and the count rates μ_k(α^2,ϕ) defined in Eqs. (<ref>) and (<ref>), respectively. The final decision is performed according to the maximum a posteriori probability (MAP) criterion: given the outcome n=0,…,M, we infer the state “0” or “1” associated with the maximum a posteriori probability <cit.>. This is equivalent to introducing a threshold n_th = n_th(α^2,σ) ≤ M such that all outcomes n < n_th correspond to decision “0”, while outcomes n ≥ n_th infer state “1” <cit.>. The threshold is obtained numerically by equating the photon-number distributions of the two displaced phase-diffused states, namely p_σ(n̅|0) = p_σ(n̅|1), n̅∈ℝ, and taking the lowest integer greater than the obtained root n̅. In turn, the error probability of the DPNR receiver reads:

P_DPNR^(M) = 1/2 [ ∑_n=0^n_th-1 p_σ(n|1) + ∑_n=n_th^M p_σ(n|0) ],

and with PNR(1) detection we retrieve the Kennedy receiver. A numerical sketch of the evaluation of p_σ(n|k) and of the resulting error probability is given below.

Plots of P_DPNR^(M) as a function of the signal energy α^2 are reported in the left panel of Fig. <ref> for the realistic noise value σ=0.1 <cit.>. The error probability is not a monotonic function of α^2 and, as demonstrated in Ref. Olivares2013, the Kennedy receiver is no longer near-optimum in the presence of noise. The Kennedy receiver is beaten by DPNR receivers with higher resolution M, whose corresponding error probabilities exhibit a step-like behaviour. This follows from the application of the MAP criterion: as displayed in Fig. <ref> (right panel), for α^2 ≪ 1 the threshold decision count is equal to n_th=1, equivalent to on-off detection, whilst for larger α^2 it jumps to higher integer values, until reaching n_th=M in the high-energy limit α^2 ≫ 1. Accordingly, the error probability has a cusp at every change in the value of n_th and, once n_th=M, it becomes an increasing function of the energy. In fact, in the high-energy limit a decision error occurs only when outcome n=M is retrieved from state ρ_0, therefore the error probability is equal to <cit.>:

P_DPNR^(M) ≈ p_σ(M|0)/2 = 1/2 [1 - ∑_j=0^M-1 ∫_ℝ dϕ g_σ(ϕ) e^-μ_0(ϕ) μ_0(ϕ)^j/j!],

which is an increasing function of α^2, μ_0(ϕ) being a shorthand for μ_0(α^2,ϕ). On the contrary, PNR(∞) detectors do not have a finite resolution, therefore n_th can take arbitrarily large values, and the step-like behaviour is observed in every energy regime, as shown in Fig. <ref> (left panel). Differently from the noiseless case, the SQL is beaten by DPNR receivers only in particular energy regimes. To highlight this, we consider the gain

G_DPNR^(M) = 1 - P_DPNR^(M)/P_SQL,

plotted in Fig. <ref>. In turn, we have a genuine quantum advantage over the SQL when G_DPNR^(M) > 0. As expected, the gain G_DPNR^(M) is not monotonic with α^2, but it exhibits M jumps before decreasing monotonically. The DPNR receiver outperforms the SQL in the low-energy limit and only in particular intervals of α^2.
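The following sketch evaluates the DPNR performance, reusing pnr_probs from the HL sketch above. Rather than solving p_σ(n̅|0)=p_σ(n̅|1) for the threshold explicitly, it applies the MAP rule bin by bin, which implements the same decision; the phase average uses a simple fixed grid, and all names and numerical choices are ours.

```python
import numpy as np
# pnr_probs(mu, M) as defined in the HL sketch above

def dpnr_error(alpha2, sigma, M, n_phi=2001):
    """P_DPNR^(M): average the PNR(M) statistics over the Gaussian phase
    and apply the MAP criterion outcome by outcome."""
    phi = np.linspace(-10 * sigma, 10 * sigma, n_phi)  # covers the Gaussian mass
    g = np.exp(-phi ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    p0 = np.zeros(M + 1)
    p1 = np.zeros(M + 1)
    for gi, ph in zip(g, phi):
        p0 += gi * pnr_probs(4 * alpha2 * np.sin(ph / 2) ** 2, M)
        p1 += gi * pnr_probs(4 * alpha2 * np.cos(ph / 2) ** 2, M)
    dphi = phi[1] - phi[0]
    p0, p1 = p0 * dphi, p1 * dphi
    decide0 = p0 > p1          # outcomes mapped to "0", i.e. n < n_th
    return 0.5 * (p1[decide0].sum() + p0[~decide0].sum())

print(dpnr_error(alpha2=1.0, sigma=0.1, M=2))
```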
Remarkably, for a given noise σ we obtain the maximal region of positive gain with PNR(M) detectors having sufficiently small M, whereas increasing the resolution further is not necessary to enhance the violation of the SQL. In Fig. <ref> we report the error probability P_DPNR^(M) as a function of the noise σ for the low and high energy values α^2=1 (left panel) and α^2=4 (right panel), respectively. In both cases DPNR receivers beat the SQL only for small noise, whilst in the large-noise limit the SQL becomes near-optimum <cit.>. We also note that for α^2=1 the performance of PNR(M) detectors with M≥2 is the same, since in this case the threshold count is equal to n_th=2. On the contrary, for α^2=4 increasing the PNR resolution is beneficial to reduce the error probability. Given the previous considerations, we introduce as a figure of merit the maximum tolerable phase noise σ_max^DPNR, namely the maximum level of noise for which G_DPNR^(M) ≥ 0 for a given signal energy α^2, depicted in Fig. <ref>. Thus, the DPNR receiver outperforms the SQL if σ < σ_max^DPNR, corresponding to the region under the curve of σ_max^DPNR. We have σ_max^DPNR = 0 for α^2 < α^2_K, since in that regime the DPNR does not beat the SQL even in the noiseless case; then the plot exhibits M peaks and, thereafter, it decreases towards 0.

§ HYNORE IN THE PRESENCE OF PHASE DIFFUSION

Now, we address the role of the HYNORE for BPSK discrimination of phase-diffused coherent states. As discussed in the former section, quantum receivers based on either quadrature measurements or displacement and photon counting show different degrees of robustness against phase noise; therefore a hybrid scheme like the HYNORE, based on the combination of both of them, provides a good candidate to better mitigate the impact of the noise.

To evaluate the performance of the HYNORE we proceed as follows. After the beam splitter with transmissivity τ, the dephased signal ρ_k is split into the separable bipartite state

Ξ_k = ∫_ℝ dϕ g_σ(ϕ) |α_k^(r) e^-iϕ⟩⟨α_k^(r) e^-iϕ| ⊗ |α_k^(t) e^-iϕ⟩⟨α_k^(t) e^-iϕ|,

with α_k^(r) and α_k^(t) introduced in Eq. (<ref>). Then, we perform HL detection on the first branch, obtaining outcome Δ, and displace the conditional state on the second branch accordingly, obtaining the (not normalized) state ρ̃_k(Δ) = Tr_r[U Ξ_k U^†], where the reflected beam has been traced out and

U = ℙ_Δ ⊗ {Θ(Δ) D(√τ α) + [1-Θ(Δ)] D(-√τ α)},

ℙ_Δ being the projection operator over the eigenspace associated with the outcome Δ, and Θ(Δ) the Heaviside theta function, returning 1 for Δ≥0 and 0 elsewhere. In turn, we have:

ρ̃_k(Δ) = ∫_ℝ dϕ g_σ(ϕ) S_Δ(α_k^(r) e^-iϕ) {Θ(Δ) |√(μ_k(τα^2,ϕ)) e^-iϕ/2⟩⟨√(μ_k(τα^2,ϕ)) e^-iϕ/2| + [1-Θ(Δ)] |√(μ_k⊕1(τα^2,ϕ)) e^-iϕ/2⟩⟨√(μ_k⊕1(τα^2,ϕ)) e^-iϕ/2|},

S_Δ(α_k^(r) e^-iϕ) being the HL probability of Eq. (<ref>) and “⊕” denoting the mod-2 sum. Finally, we implement PNR(M) detection on the states ρ̃_k(Δ). The resulting joint probability of outcomes -M ≤ Δ ≤ M and n=0,…,M reads:

p_σ(Δ,n|k) = ∫_ℝ dϕ g_σ(ϕ) S_Δ(α_k^(r) e^-iϕ) {Θ(Δ) q_n(μ_k(τα^2,ϕ)) + [1-Θ(Δ)] q_n(μ_k⊕1(τα^2,ϕ))}.

We perform discrimination according to the MAP criterion, i.e., by considering the threshold count depicted in Fig. <ref> (right panel), with the remark that the energy value to be considered is now the transmitted fraction τα^2, namely n_th = n_th(τα^2,σ). A numerical sketch of this joint distribution and of the resulting error probability is reported below.
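The sketch below combines the two previous ones, reusing pnr_probs and hl_distribution defined above, to evaluate the HYNORE error probability (given in the next equation) for fixed τ and z; the full receiver performance requires a further minimization over these two parameters. Variable names and the quadrature strategy are ours.

```python
import numpy as np
# reuses pnr_probs(mu, M) and hl_distribution(gamma, z, M) from the sketches above

def hynore_error(alpha, sigma, tau, z, M, n_th, n_phi=2001):
    """P_HY^(M)(tau, z) for fixed tau and z, before optimization."""
    phi = np.linspace(-10 * sigma, 10 * sigma, n_phi)
    g = np.exp(-phi ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    total = 0.0
    for gi, ph in zip(g, phi):
        # reflected amplitudes alpha_k^(r) e^{-i phi}, with alpha_0 = -alpha, alpha_1 = +alpha
        a0r = np.sqrt(1.0 - tau) * alpha * np.exp(-1j * ph)
        a1r = -np.sqrt(1.0 - tau) * alpha * np.exp(-1j * ph)
        S0, S1 = hl_distribution(a0r, z, M), hl_distribution(a1r, z, M)
        qn0 = pnr_probs(4 * tau * alpha ** 2 * np.sin(ph / 2) ** 2, M)  # nulled branch
        qn1 = pnr_probs(4 * tau * alpha ** 2 * np.cos(ph / 2) ** 2, M)  # anti-nulled branch
        neg0, pos0 = S0[:M].sum(), S0[M:].sum()  # Delta < 0 and Delta >= 0, state 0
        neg1, pos1 = S1[:M].sum(), S1[M:].sum()
        total += gi * (qn1[:n_th].sum() * (neg0 + pos1) + qn0[n_th:].sum() * (neg1 + pos0))
    return 0.5 * total * (phi[1] - phi[0])
```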
Accordingly, the decision rule is modified as in Table <ref>. The error probability is obtained as

P_HY^(M) = min_τ,z P_HY^(M)(τ,z),

where

P_HY^(M)(τ,z) = 1/2 [p_σ(Δ<0, n<n_th|0) + p_σ(Δ≥0, n≥n_th|0) + p_σ(Δ<0, n≥n_th|1) + p_σ(Δ≥0, n<n_th|1)]
= 1/2 ∫_ℝ dϕ g_σ(ϕ) {∑_n=0^n_th-1 q_n(μ_1(τα^2,ϕ)) [∑_Δ=-M^-1 S_Δ(α_0^(r) e^-iϕ) + ∑_Δ=0^M S_Δ(α_1^(r) e^-iϕ)] + ∑_n=n_th^M q_n(μ_0(τα^2,ϕ)) [∑_Δ=-M^-1 S_Δ(α_1^(r) e^-iϕ) + ∑_Δ=0^M S_Δ(α_0^(r) e^-iϕ)]}.

Plots of P_HY^(M) are reported in Fig. <ref> (left panel). Like the DPNR receiver, the error probability P_HY^(M) exhibits a step-like behaviour induced by the changes in the threshold n_th. The HYNORE outperforms the DPNR receiver, P_HY^(M) ≤ P_DPNR^(M), especially in the high-energy limit α^2 ≫ 1, where the error probability is reduced by a factor ≈ 5, 15, 20 for M=1,2,3, respectively, showing higher robustness in mitigating the phase noise. Moreover, we observe a quantum advantage also in the low-energy regime, as emerges by computing the gain

G_HY^(M) = 1 - P_HY^(M)/P_SQL,

depicted in the right panel of Fig. <ref>. We have G_HY^(M) ≥ G_DPNR^(M) and, differently from the DPNR case, improving the resolution M makes the gain increase, since the HL scheme performs better and better, coming closer to the homodyne limit. As one may expect, the best performance is obtained with PNR(∞) detectors, where the HL scheme performs as standard homodyne detection and G_HY^(M) ≥ 0 for all energies.

The physical meaning of the present results becomes clearer when considering the optimized transmissivity τ_opt and LO energy z^2_opt obtained after the minimization in Eq. (<ref>), reported in the left and right panels of Fig. <ref>, respectively, for the case of PNR(3) detectors. Analogous results can be retrieved for other values of the resolution M. In the low-energy limit, the transmissivity τ_opt increases with α^2 until it reaches 1 (corresponding to the DPNR receiver). Thereafter, we observe M-1 “sawteeth”, namely regions where τ_opt<1 before it increases to reach 1 again. Accordingly, when τ_opt<1 the LO energy is z^2_opt ≈ M and the HYNORE outperforms the DPNR receiver, whilst when τ_opt=1 all the signal is sent to the transmitted (DPNR) branch. On the contrary, in the high-energy limit the transmissivity jumps discontinuously and becomes a decreasing function of α^2, saturating for α^2 ≫ 1 to an asymptotic value τ_∞ > 0. Remarkably, τ_∞ < 1, therefore the optimal strategy is obtained with a proper combination of both the HL and the DPNR schemes. In this regime, z^2_opt increases with α^2, being a linear function for α^2 ≫ 1.

Finally, in Fig. <ref> we plot P_HY^(M) (solid lines) as a function of the noise σ for α^2=1 (left panel) and α^2=4 (right panel), respectively, comparing it to the DPNR receiver (dot-dashed lines). We see that P_HY^(M) ≤ P_DPNR^(M) in both the small- and large-noise limits, and the enhancement is more relevant for large α^2, consistently with the previous analysis. As a consequence, the HYNORE increases the maximum tolerable phase noise, as depicted in Fig. <ref>. In fact, we have σ_max^HY ≥ σ_max^DPNR for all energies, and σ_max^HY = 0 for α^2 < α^2_HY(M), enlarging the region of quantum advantage with respect to the DPNR receiver. Moreover, increasing the resolution M increases the height of the peaks, further improving the robustness of the receiver.

§ CONCLUSIONS

In this paper we addressed the problem of BPSK discrimination in the presence of phase diffusion noise and investigated the performance of the HYNORE <cit.> with the intent of mitigating the noise impact.
We showed that the HYNORE outperforms the DPNR receiver <cit.>, proving itself to be a robust receiver to counteract phase diffusion in both the low- and high-energy limits. In particular, in the low-energy regime the HYNORE beats the SQL, therefore being a feasible solution to obtain a quantum advantage even in realistic conditions. Moreover, we introduced the concept of maximum tolerable phase noise to quantify the robustness of a quantum receiver, proving that σ_max^HY ≥ σ_max^DPNR. The results obtained in this paper provide a more complete characterization of feasible quantum receivers, identifying the limits of quantum communications in the presence of realistic non-Gaussian noise. Furthermore, they pave the way for new applications in quantum phase communications <cit.> and CVQKD <cit.>.

Agrawal2002 G. P. Agrawal, Fiber-optic communications systems (Wiley, 2002).
Kikuchi2016 K. Kikuchi, J. Light. Technol. 34 (2016) 157–179.
Kaushal2017 H. Kaushal and G. Kaddoum, IEEE Commun. Surv. Tutor. 19 (2017) 57.
Helstrom1976 C. W. Helstrom, Quantum Detection and Estimation Theory (Academic Press, New York, 1976).
Bergou2010 J. A. Bergou, J. Mod. Opt. 57 (2010) 160–180.
Cariolaro2015 G. Cariolaro, Quantum Communications (Springer, Berlin, 2015).
Giovannetti2004 V. Giovannetti et al., Phys. Rev. Lett. 92 (2004) 027902.
Arrazola2014 J. M. Arrazola and N. Lütkenhaus, Phys. Rev. A 90 (2014) 042335.
Grosshans2002 F. Grosshans and P. Grangier, Phys. Rev. Lett. 88 (2002) 057902.
Gisin2002 N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74 (2002) 145–195.
Leverrier2009 A. Leverrier and P. Grangier, Phys. Rev. Lett. 102 (2009) 180504.
Notarnicola2023-Pol M. N. Notarnicola, M. Jarzyna, S. Olivares and K. Banaszek, New J. Phys. 25 (2023) 103014.
Helstrom1970 C. W. Helstrom, J. W. S. Liu and J. P. Gordon, Proc. IEEE 58 (1970) 1578–1598.
Dolinar1973 S. J. Dolinar, Quart. Prog. Rep. 11 (1973) 115–120.
Lau2006 C. W. Lau et al., in Free-Space Laser Communication Technologies XVIII, Vol. 6105 (SPIE, 2006), pp. 144–150.
Kennedy1973 R. S. Kennedy, Quart. Prog. Rep. 108 (1973) 219–225.
Takeoka2008 M. Takeoka and M. Sasaki, Phys. Rev. A 78 (2008) 022320.
DiMario2018 M. T. DiMario and F. E. Becerra, Phys. Rev. Lett. 121 (2018) 023603.
Notarnicola2023-HY M. N. Notarnicola, M. G. A. Paris and S. Olivares, J. Opt. Soc. Am. B 40 (2023) 705–714.
Assalini2011 A. Assalini, N. D. Pozza and G. Pierobon, Phys. Rev. A 84 (2011) 022342.
Sych2016 D. Sych and G. Leuchs, Phys. Rev. Lett. 117 (2016) 200501.
Notarnicola2023-HFF M. N. Notarnicola and S. Olivares, Phys. Rev. A 108 (2023) 042619.
Ishii11 H. Ishii, K. Kasaya and H. Oohashi, NTT Technical Review 9(3) (2011) 1.
Genoni2011 M. G. Genoni, S. Olivares and M. G. A. Paris, Phys. Rev. Lett. 106 (2011) 153603.
Cialdi2020 S. Cialdi, E. Suerra, S. Olivares, S. Capra and M. G. A. Paris, Phys. Rev. Lett. 124 (2020) 163601.
Notarnicola2022 M. N. Notarnicola, M. G. Genoni, S. Cialdi, M. G. A. Paris and S. Olivares, J. Opt. Soc. Am. B 39 (2022) 1059–1067.
Genoni2012 M. G. Genoni, S. Olivares, D. Brivio, S. Cialdi, D. Cipriani, A. Santamato, S. Vezzoli and M. G. A. Paris, Phys. Rev. A 85 (2012) 043817.
Jarzyna2014 M. Jarzyna, K. Banaszek and R. Demkowicz-Dobrzański, J. Phys. A 47 (2014) 275302.
Trapani2015 J. Trapani, B. Teklu, S. Olivares and M. G. A. Paris, Phys. Rev. A 92 (2015) 012317.
Olivares2013 S. Olivares, S. Cialdi, F. Castelli and M. G. A. Paris, Phys. Rev. A 87 (2013) 050303(R).
Bina2017 M. Bina, A. Allevi, M. Bondani and S. Olivares, Opt. Express 25 (2017) 10685–10692.
Chesi2018 G. Chesi, S. Olivares and M. G. A. Paris, Phys. Rev.
A 97 (2018) 032315.
DiMario2019 M. T. DiMario, L. Kunz, K. Banaszek and F. E. Becerra, npj Quantum Inf. 5 (2019) 65.
Thekkadath2020 G. S. Thekkadath et al., Phys. Rev. A 101 (2020) 031801(R).
Olivares2012 S. Olivares, Eur. Phys. J. Spec. Top. 203 (2012) 3–24.
Olivares2021 S. Olivares, Phys. Lett. A 418 (2021) 127720.
Paris1996 M. G. A. Paris, Phys. Lett. A 217 (1996) 78–80.
Ip2008 E. Ip, A. P. T. Lau, D. J. F. Barros and J. M. Kahn, Opt. Express 16 (2008) 753–791.
Marie2017 A. Marie and R. Alléaume, Phys. Rev. A 95 (2017) 012316.
Chin2021 H.-M. Chin, N. Jain, D. Zibar, U. L. Andersen and T. Gehring, npj Quantum Inf. 7 (2021) 20.
Shao2021 Y. Shao et al., Phys. Rev. A 104 (2021) 032608.
Bina2016 M. Bina, A. Allevi, M. Bondani and S. Olivares, Sci. Rep. 6 (2016) 26025.
Ferraro2005 A. Ferraro, S. Olivares and M. G. A. Paris, Gaussian States in Quantum Information (Bibliopolis, Napoli, 2005).
Jarzyna2016 M. Jarzyna, V. Lipińska, A. Klimek, K. Banaszek and M. G. A. Paris, Opt. Express 24 (2016) 1693–1698.
Adnane2019 H. Adnane, B. Teklu, and M. G. A. Paris, J. Opt. Soc. Am. B 36 (2019) 2938–2945.
Notarnicola2023-SEC M. N. Notarnicola, S. Olivares, E. Forestieri, E. Parente, L. Potì and M. Secondini, to appear in IEEE Trans. Commun. (arXiv:quant-ph/2211.05688).
Notarnicola2023-MJ M. N. Notarnicola, F. Cieciuch and M. Jarzyna, Continuous-variable quantum key distribution over multispan links employing phase-insensitive and phase-sensitive amplifiers, arXiv:quant-ph/2309.08041.
http://arxiv.org/abs/2312.16493v1
{ "authors": [ "Michele N. Notarnicola", "Stefano Olivares" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231227093946", "title": "A robust hybrid receiver for binary phase-shift keying discrimination in the presence of phase noise" }
0000-0002-8169-1653 [email protected] The Future Laboratory, Tsinghua University, No. 160 Chengfu Road, Beijing, P.R.C. 100084; Pennsylvania State University, Westgate Building, University Park, Pennsylvania, USA 16801
0000-0001-7126-1302 [email protected] Academy of Arts & Design, Tsinghua University, Beijing, P.R.C. 100084 (Corresponding author)
0000-0001-6927-0111 [email protected] The Future Laboratory, Tsinghua University, No. 160 Chengfu Road, Beijing, P.R.C. 100084
0000-0002-4821-2366 [email protected] Academy of Arts & Design, Tsinghua University, Beijing, P.R.C. 100084
0009-0007-6812-7031 [email protected] Yuanpei College, Peking University, No. 5 Yiheyuan Road, Beijing, P.R.C. 100871
0000-0001-9383-9910 [email protected] Nanyang Technological University, Singapore
0000-0001-5189-337X [email protected] Pennsylvania State University, Westgate Building, University Park, Pennsylvania, USA 16801

Both sensor networks and data fusion are essential foundations for developing the smart home Internet of Things (IoT) and related fields. We propose a multi-channel sensor network construction method covering hardware, acquisition, and synchronization in the smart home environment, together with a smart home data fusion method (SHDFM) for the multi-modal data (position, gait, voice, pose, facial expression, temperature, and humidity) generated in that environment. These methods address the configuration of a multi-channel sensor network, improve the quality and efficiency of collecting data on human activities and the environment, and reduce the difficulty of multi-modal data fusion in the smart home. SHDFM contains 5 levels, with inputs and outputs as criteria, to provide recommendations for multi-modal data fusion strategies in the smart home. To validate our methods, we created a real experimental environment: a physical setup in a home-like scenario where the multi-channel sensor network and data fusion techniques were deployed and evaluated. The acceptance and testing results show that the proposed construction and data fusion methods can be applied to the examples with high robustness, replicability, and scalability.
In addition, we discuss how smart homes with multi-channel sensor networks can support digital twins.
[500]Human-centered computing HCI design and evaluation methods [500]Human-centered computing Empirical studies in HCI [500]Information systems Information systems applications [300]Hardware Communication hardware, interfaces and storage [500]Hardware Sensor devices and platforms
Multi-channel Sensor Network Construction, Data Fusion and Challenges for Smart Home
John M. Carroll
January 14, 2024
====================================================================================
§ INTRODUCTION
With the improvement of sensor technology, the development of artificial intelligence, the spread of high-speed Internet, and the adoption of the smart home Internet of Things, the concept of the smart home is being accepted by more and more people. At the same time, with the popularity of consumer-grade smart home products and growing demand, the smart home market has reached a considerable scale and continues to grow rapidly: according to Statista, the global smart home market could reach $182.45 billion by 2025 <cit.>. Driven by users' scenario-based needs, a smart home mainly involves several key areas, such as device control and linkage <cit.>, scene atmosphere <cit.>, entertainment and leisure <cit.>, intelligent security <cit.>, energy management <cit.>, intelligent applications <cit.>, and design for specific groups or needs <cit.>. On the one hand, the demand for smarter and more efficient smart homes and related products keeps growing, and the various needs, motivations, goals, and implementation methods in smart home scenarios are repeatedly emphasized <cit.>; data and sensor networks are among the most important foundations for studying these problems. On the other hand, traditional interaction methods still suffer from problems that degrade the interaction experience, such as intrusiveness <cit.>, fatigue <cit.>, and difficulties in interaction for specific groups of people <cit.>. Sensor network construction and applied data fusion are preconditions for solving these problems. In addition, human society has moved into the information age, where more and more interactions shift from physical to virtual; highly informative and virtualized interactions are increasingly accepted and taken for granted.
The diversity of data collected through multi-channel sensor networks significantly impacts future interaction methods, particularly for smart home digital twins.
There is currently no comprehensive summary of sensor network construction methods for the smart home. The main problems are the following two: (1) Few construction methods cover comprehensive, integrated applications and sensor networks for the whole smart home. Most existing studies build sensor networks for specific research questions, characterized by only a small number of channels, a narrow range of sensor types, and sensors serving a single purpose. Although such networks remain usable for specific research problems, as scenarios and requirements in smart homes diversify further, they cannot effectively address or plan for the unpredictable cross-influences that arise among research problems in real smart home environments; (2) Existing studies rarely mention the concrete implementation problems encountered in constructing sensor networks, especially the configuration of hardware, acquisition, and synchronization in multi-channel sensor networks. The smart home is a significant research sub-field of human-computer interaction (HCI) and of interdisciplinary research more broadly, offering many research problems related to people's everyday lives <cit.>. At the same time, researchers' prior skills and available technologies are often limited in interdisciplinary settings, which constrains further research. With the development of sensor technologies and the growing demand in HCI, ubiquitous computing <cit.> and data-driven <cit.> approaches have received significant attention from researchers. How the large amount of multi-modal data generated in the smart home environment is processed and fused directly affects the effectiveness of the resulting models. Data fusion in this paper refers to the process of fusing real data from multiple sources, in which disparate data are combined into usable, integrated data. Traditional data fusion faces challenges mainly regarding uncertainty, integrity, and consistency after fusion <cit.>. In addition to these conventional challenges, data fusion solutions for smart homes face the following new challenges:
* Existing data fusion solutions are not directly applicable to smart home problems.
* Multi-modal data fusion is limited in the types and numbers of modalities involved.
* Solutions address only specific problems without global considerations and offer insufficient portability.
Based on this background and these challenges, this paper first reviews the current situation, development trends, problems, and difficulties of multi-channel sensor network construction and data fusion. It then proposes a multi-channel sensor network construction method for the smart home environment, taking into account smart home development trends, future researchability, sensor availability, and environment construction issues, and on this basis proposes a data fusion scheme for the smart home environment. The proposed construction method and data fusion scheme were preliminarily tested through real-scenario assessments on the comprehensive experimental platform we built.
§ RELATED WORK
§.§ The Construction of Multi-channel Sensor Networks
A sensor network consists of multiple sensor nodes deployed in the smart home, forming a multi-hop self-organizing network using wireless communication. Liang et al. <cit.> built a wireless sensor network for a smart home security system based on the ZigBee protocol and other IoT technologies. Their sensor network mainly consists of a digital access control system (hub) and sensor nodes; the sensor nodes focus on detecting intrusion and indoor environmental information and include PIR, water level, smoke, and reed switch sensors. Ramson and Moni <cit.> reviewed various applications of wireless sensor networks; the examples they cite on environmental monitoring, healthcare, and smart buildings are highly relevant to smart homes. Liu et al. <cit.> described the types of multi-channel sensors that may be used in smart environments and their uses.
Multi-device synchronization in multi-channel sensor networks poses significant challenges. While the widely used Network Time Protocol (NTP) <cit.> offers universal time coordination, it lacks precision, typically achieving only 1-50 ms accuracy depending on the real-time network load, which may result in substantial errors in systems requiring high-frequency data, such as multi-camera video sensors. Additionally, to ensure the smooth functioning of the sensor network, a smart home system often requires a robust control base station as well as data acquisition, data processing, wireless communication, and power supply systems <cit.>. While various studies <cit.> have made valuable contributions to multi-channel sensor networks in smart homes, we aim to extend this research by developing a system that integrates a broader array of channels with a more diverse range of sensors and smart home devices. Our proposed construction route is designed to be comprehensive, enhancing the system's overall quality in terms of wholeness, accessibility, robustness, reproducibility, and intelligence.
§.§ Multi-channel Data Fusion
Researchers have widely discussed data fusion and related techniques. The underlying logic is to combine data of different types, dimensions, and qualities from multiple sources and use the result for parameter estimation <cit.>. In related studies <cit.>, the terms multi-modal data fusion, multi-channel data fusion, and multisensor data fusion are often used interchangeably. Hall and Llinas <cit.> defined data fusion as processing and combining data from multiple sources based on relevant information to obtain higher accuracy and greater interpretability than single-source data provides. On the one hand, researchers have conducted a large number of studies on multi-modal data fusion, involving fusion strategies <cit.>, data processing methods <cit.>, multisensor data fusion <cit.>, information fusion <cit.>, and data fusion applications <cit.>. Meanwhile, some technical engineering and methodological guidelines on data fusion are provided in related studies <cit.>.
On the other hand, researchers have continuously reviewed and organized techniques and methods related to data fusion <cit.>. The data fusion architecture proposed by Dasarathy <cit.> is one of the best-known data fusion frameworks. It comprises five categories defined by input and output conditions: (1) Data In-Data Out (DAI-DAO): the most basic form, in which raw sensor data are fused and data are output directly after acquisition; (2) Data In-Feature Out (DAI-FEO): raw sensor data are combined, and features of the fused data are output; (3) Feature In-Feature Out (FEI-FEO): both input and output are data features, suitable for multi-source data with different data structures; (4) Feature In-Decision Out (FEI-DEO): goal-oriented fusion that takes features as input and derives decision labels based on prior knowledge or pre-trained models, often referred to as feature fusion; (5) Decision In-Decision Out (DEI-DEO): both input and output are decisions, often referred to as decision fusion. All five fusion strategies have limitations. Categories (1) to (4) are the common fusion methods, while category (5) imposes higher technical configuration requirements, especially for the deployment and configuration of full sensor networks; however, (5) has good scope for development and suits the current trends in artificial intelligence and machine learning. Based on this architecture, Fawzy et al. <cit.> proposed a spatiotemporal data fusion (STDF) method for low-level data input-output fusion for real-time spatial IoT resource aggregation, using k-means clustering for spatial clustering and Tiny AGgregation (TAG) for temporal aggregation; it reduced the data size by 95% and saved 80% of the processing time while maintaining 90% accuracy.
The Joint Directors of Laboratories (JDL) model <cit.>, although originating from the military domain, is currently the most popular data fusion model <cit.>. It divides the data fusion process into five levels and includes an associated information database and an information channel connecting the five levels, the database, the management system, and the user interface.
§.§ Technology Mediation
The use of information technology to create new "virtual spaces" is no longer just an idea but a present reality. Creations using virtual reality (VR) <cit.>, augmented reality (AR) <cit.>, and mixed reality (MR) <cit.> technologies are beginning to blur the boundaries of the world. Dourish's <cit.> concept of relying on data and technology to develop new practices, new responses, and new environmental perceptions from everyday real-world experiences has been integrated into, and become part of, human society. The impact of data on the digital twin and the future of interaction is now widely recognized by academia. In the smart home domain, many devices and concept prototypes become possible through data support. For example, the "Smart Home Virtual Tour" created by Kučera and Haffner et al. <cit.> uses multiple sensors connected to an Arduino microcontroller so that virtual tours and events can react to each other. Hu and Mao et al.
<cit.> proposed a digital twin and mixed reality based remote collaboration system for smart homes (RCSSH), in which cameras, environmental sensors (for temperature, harmful gas, and smoke/fire), weight sensors, audio sensors (via headphones), and infrared sensors are deployed. Remote assistance is provided to users by collecting data from these sensors in a mixed reality environment. Gopinath and Srija et al. <cit.> proposed a solution for redesigning the smart home using the digital twin, which relies on continuous connection and data transfer between sensors, virtual models, and applications to enable real-time tracking and visualization of the smart home in a digital twin environment. However, there are few digital twins covering an entire smart home environment.
§ MULTI-CHANNEL SENSOR NETWORK CONSTRUCTION FOR SMART HOME
The sensor network system proposed in this paper consists of four subsystems covering sensor acquisition, data processing, data transmission, database maintenance and management, and automation functions, namely: (1) the hardware system, (2) the acquisition system, (3) the synchronization system, and (4) the robustness system.
§.§ Hardware, Data Collection and Synchronization System
The hardware system of the smart home multi-channel sensor network contains (1) multi-channel sensors for collecting various activity and environmental data in the smart home environment; (2) industrial PC (IPC) components for handling the data acquisition process; (3) synchronization control components for solving the multi-channel synchronous acquisition problem; and (4) network-attached storage (NAS). Common sensor types include multi-camera systems, microphone arrays, pressure sensors, electronic noses, gas sensors, and sensors attached to furniture and home appliances. The hardware system determines the upper limit of the quality of the collected data; we recommend focusing on sensor type, deployment location, synchronization method, and the feasibility of the hardware system.
The multi-channel sensor network, as outlined in our real-time hardware model, has been deployed in a real-life smart home integrated experiment platform covering approximately 60 m^2. The setup comprises multiple FLIR cameras connected to an industrial PC (IPC) via a 10 GigE network, generating about 6.7 TB of data daily; bedrooms and bathrooms are excluded to preserve privacy. Microphones as well as olfactory, environmental, and device-specific sensors, such as pressure and infrared sensors, are positioned strategically. These sensors relay data to the IPC system via data cables, Bluetooth, or WiFi. The synchronized data are pre-processed and stored on a dedicated server for future use.
§ MULTI-CHANNEL SENSOR NETWORKS: SMART HOME DATA FUSION MODEL
After the multi-channel sensor network system is built, human activity data and environmental data are collected in the smart home environment. Based on the JDL model, we propose a data fusion model for multi-channel sensor networks in smart home environments, called the Smart Home Data Fusion Model (SHDFM), shown in Figure <ref>.
Level 0 This level implements the most basic data fusion, but unlike the traditional JDL model, its focus is on multi-source data alignment: the input is data on separate timelines, and the output is data with a common timestamp. The synchronization components and timestamp alignment methods mentioned above are applied to align the multi-modal data in time; a minimal sketch of such an alignment step is given below.
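The following sketch illustrates one way this alignment can be done with a time-tolerant join; the stream contents, column names, and sampling rates are assumptions chosen for illustration, not the platform's actual pipeline.

import pandas as pd

# Hypothetical streams: camera frames at ~30 Hz (primary modality) and an
# environmental sensor at 1 Hz (secondary modality), both with UTC timestamps.
camera = pd.DataFrame({
    "ts": pd.date_range("2023-01-01 12:00:00", periods=300, freq="33ms"),
    "frame_id": range(300),
})
env = pd.DataFrame({
    "ts": pd.date_range("2023-01-01 12:00:00", periods=10, freq="1s"),
    "temperature_c": [21.0 + 0.1 * i for i in range(10)],
})

# Align the secondary stream to the primary one: each camera frame receives
# the most recent environmental reading within a 1-second tolerance, so all
# modalities end up on the primary modality's timeline.
aligned = pd.merge_asof(
    camera.sort_values("ts"),
    env.sort_values("ts"),
    on="ts",
    direction="backward",
    tolerance=pd.Timedelta("1s"),
)
print(aligned.head())

The high-frequency stream is deliberately chosen as the join target here, mirroring the primary-modality choice discussed next.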
At this level, the other modalities can be aligned to a few designated primary modalities. In our example, the multi-camera and microphone data are chosen as the primary modalities because, unlike the environmental and usage sensors, images and audio carry a large amount of semantic information and are sampled at high frequency. By aligning all data to these primary modalities, this level resolves the mismatched timestamps that often undermine traditional data fusion methods and establishes the foundation for the subsequent levels.
Level 1 At this level, multi-modal data sources are used for data filtering and labeling, which eliminates redundancy and improves accuracy. For example, in a smart home with a multi-camera system, camera data are aligned at a specific frame rate; main cameras are identified, unoccupied frames are removed via person detection algorithms, and the remaining frames are calibrated or deleted based on comparison with the voice data. Data from the multi-channel sensors are then fused to extract more useful data and features. Continuing with the multi-camera example, a spatial coordinate system is established, each camera's relative spatial coordinates are calibrated, and skeletal point coordinates are extracted and reconstructed in 3D, improving data accuracy. Combined with the aligned floor data, this yields the subject's movements and position in the room. Ideally, the processes of these initial stages can feed into and optimize the outputs of further stages.
Level 2 This level involves the further extraction of data features and high-level states, such as types of activities, emotions, health conditions, and interactions between objects. Here, the labeling results are more complex and are influenced by prior knowledge and human factors. The fusion process also goes through cycles for optimization, knowledge updates, and the expansion of the influence of dynamic variables on the output. Studies at this level focus on advanced state and behavior recognition, multi-modal emotion detection, and specific interaction studies. For instance, in ongoing research, researchers are trying to identify thirteen advanced states of occupants based on camera and floor sensor data (see the example in Figure <ref>). Furthermore, they performed emotion recognition based on gait and speech information, categorizing the multi-modal data into nine emotional states.
Level 3 Complex Decision Making: The advanced tags obtained at the previous level, such as various human activities, emotions, events, and behaviors in the smart home environment, are fused and combined with environmental data for complex decision-making or higher-level activity recognition. Compared to Level 2, Level 3 has more uncertainty in the output but offers more personalization.
Also, from this level onwards, interactions are initiated by things toward people, such as active recommendation systems and emotional interventions; therefore, from this level on, the intelligent interactive devices within the smart home system are invoked.
Level 4.1 Smart City Access: External resources beyond the limits of the smart home begin to be called upon, such as big data and shared data beyond the smart home's scope. This level can result from quantitative change, such as independent smart home networks gaining access to the broader smart city. At its core, sensor channels and coverage areas reach a sizable scale within a given area of human activity, data sharing is no longer bound by space and time, and the data can support multiple events and context awareness.
Level 4.2 Virtual Smart Home: This represents an untapped yet promising avenue parallel to Level 4.1. It encompasses the virtualization of a smart home, achieved through continuous data feature interaction, extraction, recognition, fusion, and decision-making. The processes of this level and Level 4.1 mutually influence each other without a hierarchical relationship. The multi-channel sensor network data can be rendered into a virtual space through digital twins, establishing a connection between actions in the virtual space and the real world. This virtual smart home transcends traditional notions of space and time and is entirely supported by the underlying data infrastructure.
SHDFM weaves the smart home environment together with a multi-channel sensor network in a dynamic, adaptable architecture. As we climb the SHDFM ladder, requirements and resources evolve and become more intricate. Interestingly, as the complexity increases, the volume of output data decreases, a testament to the refinement taking place. While each level allows self-iteration and exerts influence across levels, all of them present vast research potential.
Presently, extensive research efforts are directed toward Levels 0 to 3, and with technological advancements, Level 4 is also being explored. Moving forward, we anticipate continued improvements across Levels 0 to 3, potentially in a non-linear fashion. Moreover, we foresee the exploration of Levels 4.1 and 4.2 as the focus of future work, provided the resource ceiling is lifted; however, delving into these advanced levels will bring ethical considerations into sharper focus.
§ FURTHER DISCUSSION
The smart home environment introduces unique challenges that conventional data fusion methods are often ill-equipped to tackle. Traditional solutions fall short in the face of smart homes' dynamic nature and diversity of data, and of the pivotal role of privacy and security.
Smart homes are characterized by the simultaneous evolution of numerous factors, including residents' behaviors, varied ambient conditions, and the status of interconnected smart devices. These homes generate many data types, requiring a flexible fusion strategy to manage disparate data scales efficiently. Additionally, a strong emphasis on privacy and security becomes necessary, which is not typically a primary concern in other data fusion scenarios.
SHDFM is designed to address these specific challenges, utilizing a non-unidirectional flow architecture that permits dynamic adjustments and iterative optimization across the various layers.
This approach facilitates the model's adaptation to the dynamic nature of smart homes. Conventional data fusion methods, founded on a linear or unidirectional data processing flow, struggle to adapt efficiently to smart homes' dynamism and diverse data types. Furthermore, they may lack mechanisms to ensure adequate privacy and security protections. While these methods could be modified to a degree, they may not offer robust or optimal solutions in the smart home context.
Our proposed SHDFM, with its structured yet adaptable approach, confronts these challenges head-on while vigilantly maintaining user privacy and security. As we continue to refine this model, we anticipate that it will play a transformative role in advancing the capabilities and user-friendliness of future smart home systems.
§ FUTURE WORK
One of our ongoing efforts focuses on advancing the application of digital twin technology in our smart home system. We are striving to optimize multi-modal data collection from our network of over 100 IoT devices and to enhance the accuracy of our smart home's virtual representation. In parallel, we are enriching our understanding of human-centric data through the multi-channel sensor network, with an emphasis on capturing and interpreting behavioral and cognitive insights. The ultimate goal is to extend our capabilities to virtually map human interactions within the smart home, providing a comprehensive picture of user behavior.
The prototype (Figure <ref>) we created is a detailed replication of a real-world smart home environment, capturing key elements such as furniture, devices, and sensor placements. Its objective is to provide an extensive visual interpretation of the smart home setup. Our next step is to upgrade the prototype to allow interaction with specific devices within this virtual space, driven by data from the actual smart home. Additionally, we are progressing with a project that leverages the SHDFM to analyze and synthesize primary behavioral data together with environmental data, with the goal of extracting high-level semantic information such as human emotion, intent, and cognition <cit.>. For a more thorough digital twin, we gather human data through multi-channel sensors, which aid in affective analysis.
§ CONCLUSION
This paper surveys the research landscape of smart homes, scrutinizes the literature on multi-channel sensor networks and data fusion specific to this realm, and pinpoints existing challenges. We propose a robust method for constructing multi-channel sensor networks and present a comprehensive example as a reference for researchers, particularly those venturing into interdisciplinary fields. We further introduce a modified version of the JDL data fusion model, dubbed the SHDFM, to establish a blueprint for smart home data fusion studies. To date, we have accomplished Levels 0 and 1, along with aspects of Levels 3 and 4.2, with the remaining levels slated for future work.
Key areas of focus for next-generation smart home sensor networks include increasing network transmission rates and incorporating snap-in sensors that are simpler to install. The emphasis on Level 4 of the SHDFM is apparent and poised to grow. Overcoming technical obstacles while addressing the ethical concerns associated with smart homes remains paramount.
Furthering our understanding of user psychology and needs will allow for more targeted and practical frameworks and platforms.
Our proposed framework and methodology are designed to aid in constructing smart home research platforms, to underpin theoretical research on sensor networks and data fusion, and to guide the development of smart home products. By providing examples of smart home platforms built on this framework and theory, we aim to streamline subsequent research. Looking forward, the creation of advanced smart home experimental platforms incorporating comprehensive sensor networks and data fusion will be a crucial focus of our future work in HCI studies.
This work was supported by the National Natural Science Foundation of China (Grant No. 62172252)
http://arxiv.org/abs/2312.16697v1
{ "authors": [ "He Zhang", "Robin Ananda", "Xinyi Fu", "Zhe Sun", "Xiaoyu Wang", "Keqi Chen", "John M. Carroll" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20231227193043", "title": "Multi-channel Sensor Network Construction, Data Fusion and Challenges for Smart Home" }
Statistical monitoring of European cross-border physical electricity flows using novel temporal edge network processes
Anna Malinovskaya, Leibniz University Hannover, Germany
Rebecca Killick, Lancaster University, United Kingdom
Kathryn Leeming, Leicestershire, United Kingdom
Philipp Otto, University of Glasgow, United Kingdom
=============================================================================================================================================================================================================================================
Conventional modelling of networks evolving in time focuses on capturing variations in the network structure. However, the network might be static from the origin or experience only deterministic, regulated changes in its structure, providing either a physical infrastructure or a specified connection arrangement for some other processes. Thus, to detect change in its exploitation, we need to focus on the processes happening on the network. In this work, we present the concept of monitoring random Temporal Edge Network (TEN) processes that take place on the edges of a graph having a fixed structure. Our framework is based on Generalized Network Autoregressive statistical models with time-dependent exogenous variables (GNARX models) and Cumulative Sum (CUSUM) control charts. To demonstrate its effective detection of various types of change, we conduct a simulation study and monitor real-world data of cross-border physical electricity flows in Europe.
Keywords: CUSUM Control Charts; GNARX Model; Network Modelling; Network Monitoring.
§ INTRODUCTION
Network analysis is an important statistical inference tool in various disciplines, including economics, social sciences, and chemistry <cit.>; it provides insight into the network structure and beyond. Usually, the underlying network structure is considered to be random, meaning that both nodes and edges may appear and disappear over time. However, processes also originate in network structures with limited or no temporal dynamics. In both cases, to understand whether a change has occurred, we need to inspect the data over time; this procedure is known as network monitoring. To distinguish the case in which the network is considered a random observation from the case in which the network structure is deterministic but the process on it is random, we utilise the terminology of random network monitoring and fixed network monitoring introduced in <cit.>.
Usually, the modelling and monitoring of networks focus on detecting changes and anomalies reflected in the geometric properties of a graph, which falls under random network monitoring. Illustrative studies include the monitoring of e-mail communication and of daily flights within a country for detecting unusual patterns (cf. ). However, for some networks, the development of connections or the inclusion of new nodes is either no longer possible or would not be of considerable monitoring interest. In general, the integration of nodal or edge attributes to improve the modelling of the underlying network formation mechanism, and in turn the network monitoring, is not a new concept <cit.>. However, in existing research, the attributes are considered to regulate the presence or absence of an edge. In other words, the contextual information available on vertices or edges is viewed as an extra dimension of the graph, helping to explain the likelihood of the observed links <cit.>.
In contrast, we consider the process happening on the edges as the primary or only source of information about the network state. It is helpful to distinguish the general random processes that we consider on fixed network structures from specific, well-studied classes of processes on networks. The first well-established perspective is the analysis of spatial point patterns on networks (cf. ), where the precise location of an object on the physical network is of concern. The second area concerns the analysis of network flows (cf. ), where the physical constraints of the network structure and the flow itself play a vital role. In this work, however, the analysed process can be thought of as a flow or traffic with no associated constraints unless they are imposed explicitly.
Starting with the description of TEN processes in Section <ref>, we explain our monitoring framework in Section <ref>. Afterwards, we perform a simulation study, testing the proposed methodology under different anomalous scenarios in Section <ref>. To illustrate the importance of monitoring real-world TEN processes and how it can be done, we monitor cross-border physical electricity flows in Europe in Section <ref>. In the last section, we discuss further research directions and summarise the perspectives and limitations of the proposed approach.
§ TEN PROCESS
Consider a network G = (V, E), where the elements of V represent vertices (or nodes) and the elements of E edges (or links). Further, we assume a fixed structure of G over time, described by an adjacency matrix Y = (Y_ij)_i,j = 1, …, |V|, with |V| being the number of nodes. When two vertices i, j ∈ V are connected by an edge e ∈ E, they are called adjacent. Usually, the entries of Y are binary, i.e. Y_ij = 1 if (i,j) ∈ E, i ≠ j, and 0 otherwise. The graph can be directed or undirected; in the latter case, Y is symmetric. To each existing edge e ∈ E we relate a time series {x_e, t}, t = 1, …, T, which constitutes the attributed process of G. The complete representation X = (x_e, t)_e = 1, …, |E|, t = 1, …, T, where |E| denotes the number of edges, together with the graph G forms a dynamic Temporal Edge Network (TEN) process, illustrated in Figure <ref> (top) for two time stamps.
The following section proposes one approach to modelling and monitoring TEN processes.
§ NETWORK MONITORING FRAMEWORK
Usually, statistical network monitoring is subdivided into two parts: 1) network modelling and 2) process monitoring <cit.>. Instead of formal network modelling, it is also possible to collect network features and monitor those; for example, <cit.> comprehensively discuss which network characteristics are suitable for which types of change. In our work, prior to the monitoring procedure in Section <ref>, we first define a model suitable for TEN processes.
§.§ Modelling of TEN Processes
The majority of research on network models considers the network (or graph) structure to be a random variable. Available models include Markov process models (cf. ), (temporal) exponential random graph models (cf. ) and latent process-based models, e.g. (dynamic) stochastic block models (cf. ). In contrast, TEN processes assume a static network structure over time. One could adapt the aforementioned random network models to this context, but they involve computationally intensive estimation that relies on numeric approximation. Hence, we consider alternative model forms designed for fixed network structures. The modelling of multivariate time series with an underlying network dependency structure is gaining popularity.
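Before turning to concrete models, the notation of Section <ref> can be fixed in code; the following minimal sketch uses an assumed toy graph with a simulated edge process, not data from the later application.

import numpy as np

# Fixed graph structure: nodes V and edges E (undirected toy example).
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")]

# Adjacency matrix Y with Y_ij = 1 if (i, j) is an edge, i != j.
idx = {v: i for i, v in enumerate(V)}
Y = np.zeros((len(V), len(V)), dtype=int)
for i, j in E:
    Y[idx[i], idx[j]] = Y[idx[j], idx[i]] = 1  # undirected: Y symmetric

# Edge process X = (x_{e,t}): one time series per edge, here white noise.
T = 100
rng = np.random.default_rng(0)
X = rng.normal(size=(len(E), T))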
Among such models, <cit.> present the Generalized Network Autoregressive (GNAR) framework, in which a network is associated with multivariate time series and modelled as one. The GNAR model describes time series that occur at the nodes of the network, e.g. growth rates of gross domestic product, and was extended to incorporate time-dependent exogenous variables (GNARX) in <cit.>. After introducing the original GNARX model in Section <ref>, we present its extension from nodal time series to time series on network edges in Section <ref>.
§.§.§ GNARX Model
In this section, consider {x_i, t} to be a time series related to each node i. The GNARX model (p, s, q) with H exogenous regressors {z_h, i, t: h = 1, …, H, i ∈ V, t = 1, …, T} and autoregressive order p, where (p, s, q) ∈ ℕ × ℕ^p_0 × ℕ^H_0 holds for all vertices i ∈ V, is specified as
x_i,t = ∑_l=1^p (α_i,l x_i,t-l + ∑_r=1^s_l β_l,r ∑_j ∈ 𝒩^(r)(i) ω_i,j x_j,t-l) + ∑_h=1^H ∑_q=0^q_h γ_h,q z_h,i,t-q + ϵ_i,t.
The order p also determines the maximum order of neighbour time lags, i.e. s = (s_1, …, s_p) with s_l being the maximum stage of neighbour dependence for time lag l. For example, s_1 = 2 means that nodes depend on their first- and second-stage neighbours in G at the first time lag. Similarly, the maximum time lag of the h-th exogenous regressor {z_h, i, t} is defined as q_h, and collectively these are q = (q_1, …, q_H). In case q_h = 0, only the current value of the exogenous variable at time point t is considered. The noise is denoted by ϵ_i,t and is assumed to be independent and identically distributed at each vertex i with mean zero and variance σ^2_i.
The parameters α_i,l, β_l,r, γ_h,q ∈ ℝ define the autoregressive influence, the neighbouring influence and the external influence of the regressors, respectively. It is possible to estimate a global-α model, where α_i,l = α_l, assuming the same autoregressive process for all nodes. The set 𝒩^(r)(i) denotes the r-th stage neighbourhood set of node i ∈ G. For instance, the stage-1 neighbours of a node i ∈ V are the adjacent nodes j ∈ V connected to i by an edge, and the stage-2 neighbourhood set of node i consists of the stage-1 neighbours of those adjacent nodes, as can be seen in Figure <ref>. Moreover, there are weights ω ∈ [0, 1] associated with every pair of nodes that, in our case, depend on the size of the neighbour set, as explained in <cit.>.
By fitting the GNARX model, we obtain estimates of the parameters α_i,l, β_l,r, γ_h,q that can be used for predicting x̂_i, t + 1 from {x_i, t}. The one-step-ahead forecast errors are then determined as
u_i, t + 1 = x_i, t + 1 - x̂_i, t + 1,
which can be utilised in our monitoring framework. However, the GNARX model is defined for time series related to nodes. Thus, we need to extend the representation of TEN processes to enable a correct application of the GNARX model for monitoring.
§.§.§ Extension to TEN Processes
There are different ways of seeing why we require an alternative representation. First, our main focus lies in discovering changes happening between and within the streams, i.e. in a process captured on the edges; thus, we indirectly consider a network not of single nodes anymore but of pairs of nodes. Second, the influence from node to node and the influence from one stream on a neighbouring stream portray two distinct standpoints: when two flows depend on each other, more than two nodes are involved, and other underlying mechanisms determine the exchange and its strength.
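Before turning to the new adjacency structure, the one-step-ahead GNARX forecast and its residuals can be illustrated with a minimal sketch. It assumes a global-α model with p = 1, stage-1 neighbours, one exogenous regressor with q = (0), equal neighbourhood weights, and already-estimated parameter values; it is not an estimation routine.

import numpy as np

def gnarx_forecast(x_prev, z_now, A, alpha, beta, gamma):
    """One-step-ahead GNARX forecast: global-alpha, p = 1, stage-1 neighbours.

    x_prev : (n,) node values at time t-1
    z_now  : (n,) exogenous regressor at time t (q = (0))
    A      : (n, n) binary adjacency matrix of the fixed graph
    """
    deg = A.sum(axis=1)
    # Equal weights omega_ij = 1 / |N^(1)(i)| over the stage-1 neighbour set.
    W = A / np.where(deg[:, None] > 0, deg[:, None], 1.0)
    return alpha * x_prev + beta * (W @ x_prev) + gamma * z_now

rng = np.random.default_rng(1)
n = 5
U = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = U + U.T                                    # undirected, no self-loops

x_prev = rng.normal(size=n)
z_now = rng.normal(size=n)
x_hat = gnarx_forecast(x_prev, z_now, A, alpha=0.3, beta=0.2, gamma=0.1)
x_obs = x_hat + rng.normal(scale=0.5, size=n)  # pretend observation at time t
u = x_obs - x_hat                              # forecast errors u_{i,t}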
Returning to the representation: to distinguish the original graph G with adjacency matrix Y from the novel representation of the TEN process, denoted G', we write the new adjacency matrix as Y'. Now, as displayed in Figure <ref> (bottom), the edges e ∈ E of the original graph G become vertices ι ∈ V', with |V'| = |E|, so that the time series {x_e, t} placed on the edges e ∈ E are now placed on the nodes, becoming {x_ι, t}. Consequently, we also obtain new edges ξ ∈ E'. Considering the connectivity structure captured by the adjacency matrix, without any expert knowledge the natural choice for creating Y' would be a model for generating random graphs. A potential candidate is the Erdős–Rényi model, where all graphs on a fixed vertex set with a fixed number of edges are equally likely. Another possibility is sampling from a stochastic block model, which produces graphs with a community structure; for example, links within a community may be more probable than between communities. Both of those options are tested in Section <ref>. In addition, the connectivity structure can be chosen deterministically by referring to other types of graphs from graph theory. One relevant representation technique is the line graph (also known as the edge-to-vertex dual) L(G) of G. Here, L(G) is constructed on E, where e ∈ E are adjacent as nodes ι ∈ V' if and only if they share a common node as edges in G <cit.>; this is a one-line construction with standard graph libraries, as shown in the sketch below. In Figure <ref> (bottom), this connectivity structure is applied to the original TEN process shown above it. Such a formation can suit real-world applications well, as it offers a more logical structure when knowledge about existing communities is limited or when a connectivity structure generated from a random graph model is insufficient. After redefining the representation of TEN processes, we can proceed with their modelling by applying the GNARX model and their monitoring using a residual-based control chart, introduced in the next section.
§.§ Monitoring of TEN Processes
Recall that for network monitoring we need a model and a monitoring procedure. For monitoring, it is vital to determine the framework in which to perform it. One possibility is to monitor the estimates of the model parameters, although this is computationally intensive (cf. ). A more efficient alternative is to perform monitoring based on residuals. For instance, <cit.> compute graph residuals as the difference between the observed graph and its expected value. Having modelled the TEN with the GNARX model (see Equation <ref>), it is uncomplicated to calculate the residuals as the deviations of the current observations from the GNARX predictions. As soon as a change occurs, we would expect the residuals to increase, indicating a lack of fit. According to <cit.>, who offers a detailed introduction to Statistical Process Monitoring (SPM) and especially to forecast-based monitoring methods, when the model or prediction is accurate, the prediction errors are uncorrelated. Thus, we can apply traditional SPM techniques, such as control charts, to these forecast errors. A technique that is particularly suitable for testing whether the current residual-based statistic is anomalous is a control chart. Belonging to the methods of SPM (cf. ), control charts are both straightforward to implement and interpret.
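As referenced in the representation step above, the line-graph construction G' = L(G) is available directly in standard tooling; the following sketch assumes the networkx library and a toy graph.

import networkx as nx

# Original fixed-structure graph G; the edges carry the TEN process.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")])

# Edge-to-vertex dual: the edges of G become the nodes of G', and two nodes
# of G' are adjacent iff the corresponding edges of G share an endpoint.
G_prime = nx.line_graph(G)

print(sorted(G_prime.nodes()))                            # edges of G as nodes
print(G_prime.number_of_nodes() == G.number_of_edges())   # True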
<cit.> proposes a framework for monitoring forecast models, showing that changes in complex data are reflected in the forecast errors. They utilise the Cumulative Sum (CUSUM) control chart, and we extend their approach to monitoring residuals of the GNARX model for TENs, as shown subsequently. A comprehensive overview of residual control charts and their comparison to other types of control charts is provided in <cit.> and <cit.>. Applying control charts in an online manner, i.e. performing sequential change point detection, we test at each time point t whether the process remains in control. Hence, the specific hypothesis as well as the definition of a change point are clarified in Section <ref> before the description of the implemented control chart in Section <ref>.
§.§.§ Hypothesis Formulation
As our monitoring is based on the residuals obtained as the difference between the actual process and the predictor for each flow (Equation <ref>), it is vital to obtain an accurate model before the monitoring starts. Our objective is then to test
H_0, t: The observed TEN process coincides with the fitted GNARX model
against the alternative
H_1, t: The observed TEN process does not coincide with the fitted GNARX model
for each t. Now the question arises of which variable to use for the test statistic that indicates when H_0 is violated. The change point τ is defined through
x_t ∼ F(μ, Σ) for t < τ and x_t ∼ F_τ(μ_τ, Σ_τ) for t ≥ τ,
where x_t = (x_1, t, …, x_|V'|, t)', and μ and Σ define the mean vector and variance-covariance matrix of, in our case, the network flow distribution F. A change in the mean and/or the variance of the raw data leads to corresponding changes in the forecast errors <cit.>. Hence, by constructing a test statistic based on u_t = (u_1, t, …, u_|V'|, t)', we can determine the time point τ at which the change has occurred. In the following, we introduce the change point detection framework specified by <cit.> and discuss the respective test statistic for applying residual-based CUSUM control charts.
§.§.§ Residual-Based CUSUM Control Chart
Before describing the technicalities of the considered control chart, it is worth explaining its key elements and the framework for its application in practice. In general, a control chart, being a graphical tool for detecting unusual variations of a process, consists of three components: a central line (CL) for plotting the average of the process, an upper control line for the upper control limit, and a lower control line for the lower control limit. When the process considerably deviates from its expected state, the control statistic starts crossing the limits, indicating a substantial change beyond random variation. Usually, there are two phases involved in the implementation of control charts. Phase I corresponds to the exploration and calibration period, where we estimate the target process and calibrate the control chart, e.g. computing the control limits. It is based on the assumption that the process during this phase is in control, i.e. stable, predictable and repeatable <cit.>. In Phase II, the actual application of the control chart begins, which can be viewed as a sequential implementation of the hypothesis test introduced in Section <ref>. The more precise the estimation of the parameters in Phase I, the more reliable the performance of the control chart in Phase II.
The desired behaviour of a control chart during Phase II is the quick detection of a change point when the process is out of control, i.e. when unusual variation is present. At the same time, we strive for a long-running scheme without false alarms, i.e. without signals while the process actually remains in control. This translates into a balance between tight control limits for fast detection and wide control limits for small false alarm rates. In real-world applications, it is usually unknown which properties of a process, e.g. mean or variance, may change. Thus, an important criterion for selecting a suitable control chart is its ability to track many types of change simultaneously. Another point concerns the decision about what exactly to monitor: the data itself or some process related to it. As discussed by <cit.>, monitoring the data directly might deteriorate in the presence of temporal dependency, complex trends or seasonality effects, whereas monitoring the forecast residuals of the process avoids these issues and is capable of reflecting changes in either the process mean or the process variance.
As it is usually unknown in practice whether the change occurs in the mean and/or the variance of the process, we utilise a more general type of Page's CUSUM detector, which is based on the (centred) squared data and is able to detect a combination of mean and/or variance changes in the original data. For monitoring the forecast errors u_ι,t obtained for each flow ι, <cit.> adapts Page's CUSUM test statistic (cf. ) to the centred squared forecast errors as follows:
Q_ι(m, k) = ∑_t = m + 1^m + k (u_ι,t - b̂)^2 - k/m ∑_t=1^m (u_ι,t - b̂)^2,
where m corresponds to the length of Phase I, k is the current time point in Phase II and b̂ is the mean estimate of the forecast errors computed from Phase I. From that, we compute the control chart statistic as
D_ι(m, k) = max_0 ≤ a ≤ k |Q_ι(m, k) - Q_ι(m, a)|,
and the corresponding upper control limit is given by
UCL = σ̂_ι ζ_α g(m, k, ν),
with ζ_α being the critical value of the corresponding limiting distribution at significance level α and σ̂_ι the estimate of the standard deviation of the centred squared forecast errors belonging to flow ι. The term g(m, k, ν) defines the weight function with tuning parameter ν; both are described in detail in <cit.> and <cit.>. In this work, we consider ν = 0. The control chart detects a change point at the first time point k = τ for which D_ι(m, k) > UCL. It is worth noting that no lower control limit is defined, as the control statistic takes non-negative values only and, therefore, has zero as its natural lower limit.
§.§.§ Monitoring Scope and Performance Function
The considered CUSUM chart belongs to the univariate control charts, meaning that only one process variable is monitored by the scheme. In our case, this means that we simultaneously implement n = |V'| control charts to monitor the TEN process completely. Consequently, it is also possible to perform local monitoring and control the behaviour of only one essential flow of a TEN process. In this work, however, we are interested in proposing a monitoring procedure suitable for tracking the complete TEN process.
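Before turning to the network-wide performance metric, a minimal sketch of the per-flow detector defined by Q_ι and D_ι is given below; for illustration, the weight function g and the critical value ζ_α are collapsed into a single constant control limit, which is an assumption rather than the calibrated limit of Equation <ref>.

import numpy as np

def cusum_change_point(u, m, ucl):
    """Page-type CUSUM on centred squared forecast errors of a single flow.

    u   : 1-D array of forecast errors, Phase I first (u[:m]), then Phase II
    m   : length of Phase I
    ucl : constant upper control limit (in place of sigma * zeta * g(m, k, nu))
    Returns the index of the first signal, or None if no change is detected.
    """
    b_hat = u[:m].mean()                        # Phase I mean of the errors
    s2 = np.sum((u[:m] - b_hat) ** 2)           # in-control squared-error sum
    sq = (u[m:] - b_hat) ** 2                   # Phase II centred squared errors

    k_max = len(sq)
    # Q[k] = sum_{t=m+1}^{m+k} (u_t - b_hat)^2 - (k / m) * s2, with Q[0] = 0.
    Q = np.concatenate(([0.0], np.cumsum(sq))) - np.arange(k_max + 1) / m * s2
    for k in range(1, k_max + 1):
        D = np.max(np.abs(Q[k] - Q[: k + 1]))   # max over 0 <= a <= k
        if D > ucl:
            return m + k                        # first out-of-control time point
    return None

# In-control example: Gaussian forecast errors should rarely trigger a signal.
rng = np.random.default_rng(0)
print(cusum_change_point(rng.normal(size=300), m=200, ucl=50.0))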
As a performance metric for the complete TEN process, we introduce a cumulative change intensity function, i.e. the cumulative proportion of flows that have triggered a signal up to time point t,
I_X(T) = ∑_t = 1^T ∑_ι = 1^n 1_[UCL, ∞)(D_ι(m, t)) / n.
If a change is detected in a flow ι, the monitoring of this flow stops, so that max(I_X) = 1. To produce a signal based on I_X(T), we need to define a suitable threshold W based on expert knowledge or system requirements.
In the subsequent section, we illustrate a simulation study that handles different types of changes and compares the monitoring results with respect to the model selected for constructing the adjacency matrix Y'.
§ SIMULATION STUDY
The reason to conduct a simulation study is twofold: first, we aim to quantify the detection speed for changes in the effects captured by the model parameters α_i,l, β_l,r or γ_h, q. Second, as mentioned in Section <ref>, there are different possibilities to construct the adjacency matrix Y' introduced in Section <ref>. Here, we examine two distinct random graph models that cover both the availability and the absence of expert knowledge about suitable sampling models. If a TEN process is well understood, applying a Stochastic Block Model (SBM), for which the flows are first subdivided into clusters, is a good choice; alternatively, a clustering algorithm can be run. If no cluster structure is suitable or no specific knowledge about the TEN process is available, the structure of G' can be sampled from the Erdős–Rényi model. An example of both structure types is presented in Figure <ref>. In the next part, we explain the design of the conducted simulation study and present the results, comparing them with regard to the generated graph structure.
§.§ Experimental Setting
To perform a simulation study, we have to decide on the design of the TEN process, the initial parameters and the test cases. To keep the study simple to reproduce but comprehensive enough to cover different anomaly types and to test two distinct Y' structures, we create a TEN process with 10 flows and medium connectivity. That means, for sampling Y' from the Erdős–Rényi model, we set the number of nodes to 10 and the number of edges to 30, with everything else random. For sampling from an SBM, we separate the flows into two clusters c_1 and c_2 of equal size; additionally, we define the probabilities P that flows are connected within a cluster or between the clusters. Regarding the settings for sampling from a GNARX model, we design a process with a global autoregressive effect of order p = 1, the stage-1 neighbourhood, and two exogenous regressors with time lags q = (0, 0). We generate separate data for estimating the GNARX model (600 observations) before designing Phase I with 200 observations and Phase II with 100 observations, the latter subdivided into 50 in-control and 50 out-of-control samples. A detailed description of each simulation step is presented in Procedure <ref>. In total, we conduct 500 iterations for each test case and present the outcomes subsequently.
§.§ Results and Comparison of Connectivity Structures
As explained in Section <ref>, we aim to detect changes in the whole TEN process, i.e. anomalous network states. Therefore, we compute the cumulative change intensity given in Equation <ref>.
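For reference, the two sampling schemes for Y' described in the experimental setting can be reproduced with standard tools; the sketch below assumes the networkx library and uses illustrative SBM connection probabilities, since the concrete values of P are a design choice.

import networkx as nx

# Erdos-Renyi G(n, m): 10 flow-nodes and 30 edges, everything else random.
Y_er = nx.gnm_random_graph(n=10, m=30, seed=42)

# Stochastic block model: two equal communities c_1, c_2 of 5 flows each;
# the within- and between-community probabilities here are illustrative only.
sizes = [5, 5]
P = [[0.8, 0.2],
     [0.2, 0.8]]
Y_sbm = nx.stochastic_block_model(sizes, P, seed=42)

print(Y_er.number_of_edges(), Y_sbm.number_of_edges())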
To summarise the performance in the simulation study, we average the values of the cumulative change intensity function at each time point in Phase II, obtaining a mean cumulative change intensity. It is worth noting that we do not define a threshold W here, focusing instead on the general detection capability of the approach and the fluctuation in its performance. Note also that as soon as the control chart signals for a particular flow, the monitoring of that flow stops; hence, no multiple change points are possible for the same flow.
For each parameter α, β and γ_1, we have three test cases with gradually increasing change magnitudes. To compare the two ways of sampling the adjacency matrix Y', we replace the initially estimated parameters by anomalous ones either in all flows or only in the flows belonging to c_1. In Figures <ref>, <ref> and <ref>, we display the monitoring results in Phase II (PII); the blue area marks the in-control and the red area the out-of-control part. First, all plots show the same pattern: the curve for the simulation with Y' sampled from the SBM stays below the curve corresponding to the setting with an adjacency matrix generated from the Erdős–Rényi model. This corresponds to the way the change in the effects was implemented: while in the SBM setting only one community (half of the flows) was affected by anomalous parameter sizes, the simulation involving the Erdős–Rényi structure experiences the change in the complete network, i.e. all flows are affected. Thus, the cumulative change intensity reaches a value of only 0.5, which is the normalised size of a community of 5 flows exhibiting anomalous behaviour when the SBM is used. Second, we clearly recognise that the most difficult type of anomaly for the proposed approach is a change in the neighbourhood effects captured by the parameter β, as can be seen from the wider uncertainty span and the longer run length until a change inserted in β is detected. Overall, the simulation study reveals that the monitoring approach is highly effective in the sense that no signals occur when no actual change has been introduced; however, it detects the change quickly only if we are aware of the underlying structure of G' or if the process innovations are of medium or large size.
In case one specific flow reflects the state of the whole system or is particularly relevant for the flawless functioning of a network, we could focus solely on its monitoring. For that, we would apply the CUSUM control chart and directly determine the change point; a possible outcome is displayed in Figure <ref>.
In the following section, as a real-world illustration, we present the monitoring of cross-border physical electricity flows across Europe.
§ MONITORING OF CROSS-BORDER PHYSICAL ELECTRICITY FLOWS
Cross-country analysis of electricity trade, especially in the context of renewable energy, is a relevant field of study that contributes, among other things, to the evaluation of transmission infrastructure policies (cf. ). It is worth noting that the cross-border physical flow network reflects the actual flow of electricity, which obeys the laws of physics and can differ from the scheduled commercial exchanges that represent the economic relations between the market parties; it is therefore of particular technical importance, allowing for flawless electricity trade across Europe.
To the best of the authors' knowledge, there are no studies related to change point detection in European Cross-Border Physical Flows (CBPFs), so we illustrate the monitoring of these processes below.
§.§ Data Description
The European Network of Transmission System Operators for Electricity (ENTSO-E) provides data about the cross-border physical flows and scheduled commercial electricity exchanges of more than 40 countries[Central collection and publication of electricity generation, transportation and consumption data and information for the pan-European market. ENTSO-E Transparency platform (2023). <https://transparency.entsoe.eu>]. It offers hourly data from 2014 onwards. However, after careful investigation of the amount of missing data, we decided to concentrate on the period from January 1, 2018, to November 27, 2022, and to use a weekly aggregation of the electricity values given in megawatts.
The graph in Figure <ref> (a) represents the aggregated quantity of electricity transmitted in 2019, with the width of an edge reflecting the total amount of power passed between the two countries: the higher this value, the wider the edge between two nodes. We also notice the existence of parallel edges, i.e. one flow f_1 going from France (FR) to Spain (ES) and another flow f_2 in the opposite direction. If we directly applied the new representation introduced in Section <ref>, we would obtain a network considerably bigger than the original graph. Hence, we need to aggregate the flows f_1 and f_2 and let one vertex represent a country pair. There are different possibilities for doing so; we decide on three of them and discuss each subsequently.
§.§ Phase I Modelling
To avoid a considerable expansion in the new representation of the CBPFs (see Section <ref>), we introduce three different aggregation strategies for the flows f_1 and f_2, fitting and comparing three models, respectively. The first statistic is the Box-Cox transformation of the sum of both flows, ℳ_1 = ln(f_1 + f_2 + 1), with λ_1 = 0 and λ_2 = 1 (cf. ), which reflects the overall strength of the exchange. The second statistic measures the asymmetry of the flows as the difference of the Box-Cox transformed variables f_1 and f_2, i.e. ℳ_2 = ln(f_1 + 1) - ln(f_2 + 1); it indicates whether it is more common for one country to import or to export electricity. The third statistic captures the proportional difference between the flows, ℳ_3 = (f_1 - f_2)/(f_1 + f_2). No additional transformation is required in this case, as ℳ_3 ∈ [-1, 1]\{0}. Figure <ref> (b) illustrates the new representation of the CBPFs, where the new adjacency structure is constructed using the line graph rules described in Section <ref>. As no exceptional events that could considerably disturb the CBPF network are known for the years 2018 and 2019, we choose these two years as Phase I. Testing different model settings, the final set-up involves a global autoregressive parameter α with p = 1, the stage-1 neighbourhood effect β, and two covariates corresponding to the Box-Cox transformed total amount of electricity generated by renewable energy sources in each of the two countries, with effects γ_1 and γ_2 and q = (0, 0). Table <ref> presents the results of estimating the parameters for each choice of the aggregation statistic ℳ using 7828 observations (76 × (52 × 2 - p)), where 76 is the number of bilateral exchanges.
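For concreteness, the three aggregation statistics can be computed from the paired directed flows as in the following sketch; the array values are illustrative, not ENTSO-E data.

import numpy as np

# f1, f2: weekly aggregated physical flows (MW) in the two directions of a
# bilateral exchange, e.g. FR -> ES and ES -> FR.
f1 = np.array([1200.0, 950.0, 0.0, 450.0])
f2 = np.array([300.0, 500.0, 800.0, 400.0])

M1 = np.log(f1 + f2 + 1)               # overall exchange strength (lambda_1 = 0, lambda_2 = 1)
M2 = np.log(f1 + 1) - np.log(f2 + 1)   # import/export asymmetry
M3 = (f1 - f2) / (f1 + f2)             # proportional difference, within [-1, 1]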
As we can observe, the GNARX model fits the data in Phase I well and confirms the relevance of accounting for the network structure as well as for external effects when modelling TEN processes. Using the displayed coefficients, we proceed with the monitoring of Phase II, which consists of the years 2020–2022.

§.§ Phase II Monitoring Figures <ref>, <ref> and <ref> display the monitoring results during Phase II. Out of 76 country pairs, between 27 and 39 pairs have had a change point according to the implemented CUSUM control charts based on the ℳ_1, ℳ_2 or ℳ_3 statistics. We highlight two periods during Phase II: the period related to the severe disruptions due to the COVID-19 pandemic, and the period starting in February 2022, in which several crises occurred or were expected due to the Russian-Ukrainian war. Regarding the threshold W, we define it to be W = 0.2, i.e. the case when a change point is detected in at least 16 bilateral exchanges.

Comparing the three different aggregation methods, we notice similarities in detecting changes during the pandemic. However, starting from June 2021, the behaviour of the control charts differs significantly. In the case of the results with ℳ_1, the time between the two highlighted periods remains relatively stable compared to the other two charts with ℳ_2 and ℳ_3. Then, the first detection on March 6, 2022 (aggregating the days February 28 – March 6) captures the beginning of the war between Russia and Ukraine. It may be surprising that another country pair, namely Belarus and Ukraine, experiences a change in the same week. Later, another change detection occurs at the end of May in the pair Russia and Estonia, reflecting the political agreement of the Baltic states to stop any energy trade with Russia. The same date and reason are also given in the control chart with ℳ_3 in the pair Russia and Lithuania[https://enmin.lrv.lt/en/news/no-more-russian-oil-gas-and-electricity-imports-in-lithuania-from-sunday]. In terms of the signal in the pair Ukraine and Moldova (control chart with ℳ_2), it could be related to the launch of the planned commercial exchanges announced at the end of June 2022 and further increased at the beginning of autumn [https://www.entsoe.eu/news/2022/09/04/transmission-system-operators-of-continental-europe-decide-to-further-increase-trade-capacity-with-the-ukraine-moldova-power-system/]. Overall, we notice that without expert knowledge of the CBPF network it is challenging to attribute a particular reason to each change point; however, we also see a reliable performance of the proposed framework in detecting major events such as the pandemic and the energy crisis related to the beginning of the Russian-Ukrainian war. The caveat to be aware of is the interpretation of the chosen aggregation statistics ℳ_1, ℳ_2, ℳ_3, as in some of the cases the signalled anomalies differ considerably.

§ CONCLUSION A network with a given structure but a random process on its edges, which we define in this work as a Temporal Edge Network (TEN), can be of particular interest for guaranteeing the safety of infrastructure but also for foreseeing possible accidents. In this manuscript, we present a monitoring framework to detect anomalies in TENs by combining the GNARX model and the CUSUM control chart based on residuals. There might still remain the question of why monitoring networks with a fixed structure needs special treatment. For example, why not try to “randomise” the connections?
The explanation is straightforward: It would be possible to select a flow threshold for creating dynamics in the graph by deleting or adding links; however, the anomalies detected in this case would be similar to those which are the focus of random network monitoring. In our case, we are rather interested in anomalies occurring in the edge process, e.g. changes in the flow's strength or some other temporal deviations. Hence, we introduce a new way of representing TEN processes where edges become the main focus of both the modelling and the monitoring parts.

The proposed change point detection framework is applied only to TENs observed at discrete times. How and when to extend the monitoring to continuous time remains an open research question. Equally important is to investigate when different adjacency matrices for different time points should be introduced and whether this would benefit the overall monitoring procedure. Also, for monitoring a TEN process at once, we have introduced a cumulative change intensity function. However, an appealing way to perform such monitoring would be a suitable multivariate control chart or another statistical monitoring tool that allows for medium-sized multivariate processes.

As an empirical illustration, we have monitored a network of bilateral electricity flows across Europe. Beyond that, our framework could potentially be applied to monitoring traffic flows on roads or rail systems to optimise transportation infrastructure, as well as within the computer communication domain, aiding in cybersecurity. In an environmental context, TENs may be useful in evaluating water transfers between areas or companies, contributing to sustainable water resource management. This work is devoted to networks with static structures, which fit well the considered edge process, namely cross-border physical electricity flows. However, thinking of potential processes with an underlying structure that changes over time, it is important to determine whether the monitoring framework would experience substantial changes in terms of re-estimating the parameters in Phase I, or challenges in performing multivariate monitoring when some of the nodes or edges disappear over time. Moreover, the effectiveness of the proposed monitoring procedure strongly relies on the assumption that the GNARX model is suitable for the considered data. Thus, in case the data cannot be well represented by the GNARX model, e.g. when count data are given, an alternative modelling approach or an extension of the currently existing model is required.

§ REFERENCES

Abrell, J. and Rausch, S. (2016). Cross-country electricity trade, renewable energy and European transmission infrastructure policy, Journal of Environmental Economics and Management 79: 87–113.

Ahuja, R. K., Magnanti, T. L. and Orlin, J. B. (1993). Network Flows: Theory, Algorithms, and Applications, Prentice Hall.

Alexopoulos, C., Goldsman, D., Tsui, K.-L. and Jiang, W. (2004). SPC monitoring and variance estimation, Frontiers in Statistical Quality Control 7: 194–209.

Azarnoush, B., Paynabar, K., Bekki, J. and Runger, G. (2016).
Monitoring temporal homogeneity in attributed network streams, Journal of Quality Technology 48(1): 28–43.

Baddeley, A., Nair, G., Rakshit, S., McSwiggan, G. and Davies, T. M. (2021). Analysing point patterns on networks—a review, Spatial Statistics 42: 100435.

Box, G. E. and Cox, D. R. (1964). An analysis of transformations, Journal of the Royal Statistical Society Series B: Statistical Methodology 26(2): 211–243.

De Masi, G. and Gallegati, M. (2007). Debt-credit economic networks of banks and firms: the Italian case, Econophysics of Markets and Business Networks: Proceedings of the Econophys-Kolkata III, Springer, pp. 159–171.

Diestel, R. (2017). Graph Theory, Springer.

Flossdorf, J. and Jentsch, C. (2021). Change detection in dynamic networks using network characteristics, IEEE Transactions on Signal and Information Processing over Networks 7: 451–464.

Grundy, T. D. (2021). On Aspects of Changepoint Analysis Motivated by Industrial Applications, Lancaster University (United Kingdom).

Hanneke, S., Fu, W., Xing, E. P. et al. (2010). Discrete temporal models of social networks, Electronic Journal of Statistics 4: 585–605.

Horváth, L., Hušková, M., Kokoszka, P. and Steinebach, J. (2004). Monitoring changes in linear models, Journal of Statistical Planning and Inference 126(1): 225–251.

Jensen, W. A., Jones-Farmer, L. A., Champ, C. W. and Woodall, W. H. (2006). Effects of parameter estimation on control chart properties: a literature review, Journal of Quality Technology 38(4): 349–364.

Jeske, D. R., Stevens, N. T., Tartakovsky, A. G. and Wilson, J. D. (2018). Statistical methods for network surveillance, Applied Stochastic Models in Business and Industry 34(4): 425–445.

Khrabrov, A. and Cybenko, G. (2010). Discovering influence in communication networks using dynamic graph analysis, 2010 IEEE Second International Conference on Social Computing, IEEE, pp. 288–294.

Knight, M., Leeming, K., Nason, G. and Nunes, M. (2020). Generalized network autoregressive processes and the GNAR package, Journal of Statistical Software 96(5): 1–36.

Knoth, S. and Schmid, W. (2004). Control charts for time series: A review, Frontiers in Statistical Quality Control 7: 210–236.

Kolaczyk, E. D. and Csárdi, G. (2020). Analysis of Network Flow Data, Springer International Publishing, Cham, pp. 169–186.

Malinovskaya, A. and Otto, P. (2021). Online network monitoring, Statistical Methods & Applications 30(5): 1337–1364.

Matias, C. and Miele, V. (2017). Statistical clustering of temporal networks through a dynamic stochastic block model, Journal of the Royal Statistical Society Series B: Statistical Methodology 79(4): 1119–1141.

McDermott, M. J., Dwaraknath, S. S. and Persson, K. A. (2021).
A graph-based network for predicting chemical reaction pathways in solid-state materials synthesis, Nature Communications 12(1): 3097.

Miller, B. A., Arcolano, N. and Bliss, N. T. (2013). Efficient anomaly detection in dynamic, attributed graphs: Emerging phenomena and big data, 2013 IEEE International Conference on Intelligence and Security Informatics, IEEE, pp. 179–184.

Montgomery, D. C. (2012). Statistical Quality Control, Wiley Global Education.

Nason, G. P. and Wei, J. L. (2022). Quantifying the economic response to COVID-19 mitigations and death rates via forecasting purchasing managers' indices using generalised network autoregressive models with exogenous variables, Journal of the Royal Statistical Society Series A: Statistics in Society 185(4): 1778–1792.

Page, E. S. (1954). Continuous inspection schemes, Biometrika 41(1/2): 100–115.

Perry, M. B. (2020). An EWMA control chart for categorical processes with applications to social network monitoring, Journal of Quality Technology 52(2): 182–197.

Shaghaghi, M. and Saghaei, A. (2020). PCA likelihood ratio test approach for attributed social networks monitoring, Communications in Statistics—Theory and Methods 49(12): 2869–2886.

Snijders, T. A. (2005). Models for longitudinal network data, Models and Methods in Social Network Analysis 1: 215–247.

Stevens, N. T., Wilson, J. D., Driscoll, A. R., McCulloh, I., Michailidis, G., Paris, C., Parker, P., Paynabar, K., Perry, M. B., Reisi-Gahrooei, M. and Sparks, R. (2021a). Research in network monitoring: Connections with SPM and new directions, Quality Engineering 33(4): 736–748.

Stevens, N. T., Wilson, J. D., Driscoll, A. R., McCulloh, I., Michailidis, G., Paris, C., Paynabar, K., Perry, M. B., Reisi-Gahrooei, M., Sengupta, S. and Sparks, R. (2021b). Foundations of network monitoring: Definitions and applications, Quality Engineering 33(4): 719–730.

Vining, G. (2009). Technical advice: Phase I and phase II control charts, Quality Engineering 21(4): 478–479.
http://arxiv.org/abs/2312.16357v1
{ "authors": [ "Anna Malinovskaya", "Rebecca Killick", "Kathryn Leeming", "Philipp Otto" ], "categories": [ "stat.AP", "stat.ME" ], "primary_category": "stat.AP", "published": "20231226235052", "title": "Statistical monitoring of European cross-border physical electricity flows using novel temporal edge network processes" }
Even grade generic skew-symmetric matrix polynomials with bounded rank
[ Fernando De Terán (Departamento de Matemáticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30, 28911, Leganés, Spain; [email protected]), Andrii Dmytryshyn (School of Science and Technology, Örebro University, 701 82, Örebro, Sweden; [email protected]), Froilán M. Dopico (Departamento de Matemáticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30, 28911, Leganés, Spain; [email protected])
======================================================================

We show that the set of m × m complex skew-symmetric matrix polynomials of even grade d, i.e., of degree at most d, and (normal) rank at most 2r is the closure of a single set of matrix polynomials with a certain, explicitly described, complete eigenstructure. This complete eigenstructure corresponds to the most generic m × m complex skew-symmetric matrix polynomials of even grade d and rank at most 2r. The analogous problem for the case of skew-symmetric matrix polynomials of odd grade is solved in <cit.>.

Keywords: complete eigenstructure; genericity; matrix polynomials; skew-symmetry; normal rank; orbits; pencils. MSC: 15A18, 15A21.

§ INTRODUCTION During recent years, the problem of determining the generic eigenstructures for sets of matrix pencils and matrix polynomials has been a subject of research interest, in particular for the sets of matrix pencils and matrix polynomials with fixed grade and bounded rank. The description of such sets as a union of closures of certain “generic sets” of pencils and matrix polynomials is given in terms of their eigenstructures for general matrix pencils <cit.> and polynomials <cit.>, as well as for matrix pencils that are skew-symmetric <cit.>, symmetric <cit.>, T-palindromic and T-alternating <cit.>, and Hermitian <cit.>. Moreover, in the case of odd grade the generic sets are also derived for skew-symmetric <cit.> and symmetric <cit.> matrix polynomials. In this paper, we tackle the even-grade case for skew-symmetric matrix polynomials. In Table <ref> we present a summary of the contributions mentioned above.

The reason why the even-grade case is frequently omitted when structured matrix polynomials are studied, e.g., <cit.>, is the lack of structured linearization templates <cit.>. In this paper, we resolve this issue by adding an extra term to a given matrix polynomial of even grade (thus making it of odd grade) and tracking how this addition affects the eigenstructure of the polynomial. The main result is that the set of skew-symmetric matrix polynomials of even grade and bounded rank is the closure of a single set with a certain complete eigenstructure, which is completely analogous to the one for the odd-grade case described in <cit.>. One reason to study generic sets is the possible applications in the investigation of the effect of low-rank perturbations on the spectral information of pencils and matrix polynomials <cit.>.
We also refer the reader to the introductions of the papers cited in Table <ref> for more references and background information. Determining the generic eigenstructures of particular sets of structured matrix pencils and matrix polynomials (in particular, those with bounded rank) is also useful for obtaining the stratification of structured matrix pencils and polynomials. The stratification of matrix pencils and polynomials has been considered by different authors for several sets of matrix pencils and polynomials, like <cit.> for general matrix pencils, <cit.> for controllability and observability pairs, <cit.> for system pencils associated with state-space systems, <cit.> for full rank matrix polynomials, and <cit.> for general matrix polynomials, as well as for skew-symmetric matrix pencils <cit.> and odd-grade polynomials <cit.>.

To facilitate the reading of this paper and its comparison with previous results, we keep its structure and style as close as possible to the paper <cit.> that covers the odd-grade case. In this respect, we also warn the reader that several repetitions of background concepts and auxiliary results appearing in <cit.> are unavoidable to keep this paper self-contained. Thus, Section <ref> includes some basic results on general and skew-symmetric pencils and also a new description of the set of skew-symmetric matrix pencils with rank at most 2w and a fixed number of canonical blocks associated with the infinite eigenvalue. Section <ref> presents a number of preliminary known results on skew-symmetric matrix polynomials and some new auxiliary results on orbits of skew-symmetric matrix polynomials. In Section <ref> we prove our main result for generic skew-symmetric matrix polynomials of fixed even grade and fixed rank, i.e., Theorem <ref>. Moreover, Theorem <ref> is combined with the corresponding result for odd-grade skew-symmetric matrix polynomials obtained in <cit.> for presenting in Theorem <ref> the general result, which is independent of the parity of the grade. Finally, Section <ref> includes the computation of the codimension of the orbit of the generic skew-symmetric matrix polynomials of fixed even grade and bounded rank, which requires a different approach to the one used to obtain the corresponding result for the odd-grade case considered in <cit.>. The conclusions and future research on generic eigenstructures of matrix polynomials are discussed in Section <ref>.

§ SKEW-SYMMETRIC MATRIX PENCILS In this paper we consider matrix pencils and matrix polynomials whose matrix coefficients have complex entries; namely, a matrix polynomial is of the form P(λ)=∑_i=0^d λ^i A_i, with A_i ∈ ℂ^n× n, and a matrix pencil is a matrix polynomial with d=1. For matrix pencils we use calligraphic letters. For brevity, the dependence on λ in P(λ) will often be omitted, especially in the proofs, and we will use just P for denoting a matrix polynomial, and similarly for matrix pencils. The rank of a matrix polynomial P, denoted by rank P, is the largest size of a non-identically zero minor (see <cit.> for matrix pencils).

§.§ Preliminaries In this section we recall the Kronecker canonical form (KCF) of general matrix pencils and the canonical form of skew-symmetric matrix pencils under congruence. Define ℂ̄ := ℂ ∪ {∞}.
For each k=1,2,…, and for each μ∈ℂ define the k× k matrices

J_k(μ):=[ μ 1; μ ⋱; ⋱ 1; μ ], I_k:=[ 1; 1; ⋱; 1 ],

and for each k=0,1,…, define the k× (k+1) matrices

F_k := [ 0 1; ⋱ ⋱; 0 1 ], G_k := [ 1 0; ⋱ ⋱; 1 0 ].

All non-specified entries of J_k(μ), I_k, F_k, and G_k are zeros. Two m × n matrix pencils λ A - B and λ C - D are called strictly equivalent if and only if there are non-singular matrices Q and R such that Q^-1AR = C and Q^-1BR = D. We also define the orbit of λ A - B under the action of the group GL_m(ℂ) × GL_n(ℂ) on the space of all matrix pencils by strict equivalence as follows: ^e (λ A - B) = { Q^-1 (λ A - B) R : Q ∈ GL_m(ℂ), R ∈ GL_n(ℂ) }. Theorem <ref> introduces the Kronecker Canonical Form (KCF) of matrix pencils.

<cit.> Each m × n matrix pencil λ A - B is strictly equivalent to a direct sum, uniquely determined up to permutation of summands, of pencils of the form E_k(μ) := λ I_k - J_k(μ), with μ∈ℂ, E_k(∞) := λ J_k(0) - I_k, L_k := λ G_k - F_k, and L_k^T := λ G_k^T - F_k^T. This direct sum is called the KCF of λ A - B.

The blocks E_k(μ) and E_k(∞) are associated to the finite and infinite eigenvalues, respectively, and all together form the regular part of λ A - B. The blocks L_k and L_k^T are associated to the right and left minimal indices, respectively (see page <ref>), and all together form the singular part of λ A - B. The number of blocks L_k (respectively, L_k^T) in the KCF of λ A - B is equal to the dimension of the right (respectively, left) rational null-space of λ A - B. A pencil λ A - B is skew-symmetric if and only if (λ A - B)^T = -(λ A - B). An n × n matrix pencil λ A - B is called congruent to λ C - D if there is a non-singular matrix S such that S^TAS = C and S^TBS = D. We also define the congruence orbit of λ A - B under the action of the group GL_n(ℂ) on the space of all skew-symmetric matrix pencils by congruence as follows: ^c (λ A - B) = { S^T (λ A - B) S : S ∈ GL_n(ℂ) }. In Theorem <ref> we recall the canonical form under congruence of skew-symmetric matrix pencils, the so-called skew-symmetric KCF.

<cit.> Each skew-symmetric n × n matrix pencil λ A - B is congruent to a direct sum, uniquely determined up to permutation of summands, of pencils of the form

H_h(μ) := λ[ 0 I_h; -I_h 0 ] - [ 0 J_h(μ); -J_h(μ)^T 0 ], μ∈ℂ,

K_k := λ[ 0 J_k(0); -J_k(0)^T 0 ] - [ 0 I_k; -I_k 0 ],

M_m := λ[ 0 G_m; -G_m^T 0 ] - [ 0 F_m; -F_m^T 0 ].

Notably, the block M_0 is the 1 × 1 zero matrix pencil. Similarly to the KCF, the blocks H_h(μ) and K_k correspond to the finite and infinite eigenvalues, respectively, and all together form the regular part of λ A - B. The blocks M_m correspond to the right (column) and left (row) minimal indices, which are equal in the case of skew-symmetric matrix pencils, and form the singular part of λ A - B. Theorem <ref> also shows that skew-symmetric matrix pencils always have even rank. For brevity, we will refer to the number of K-blocks or M-blocks of a skew-symmetric matrix pencil to mean the number of blocks of the form K_k or M_k, respectively, in the skew-symmetric KCF of the pencil.

§.§ Generic skew-symmetric matrix pencils with bounded rank and fixed number of blocks corresponding to the infinite eigenvalue In this section we find the most generic skew-symmetric matrix pencils with rank bounded by a fixed value and with exactly r K-blocks in the skew-symmetric KCF. First, we recall some definitions and results. By PEN_m × n we denote the space of all m× n matrix pencils.
A distance in PEN_m × n can be defined with the Frobenius norm of complex matrices <cit.> as d(λ A - B, λ C - D) := √(‖A-C‖_F^2 + ‖B-D‖_F^2), which makes PEN_m × n a metric space. This metric allows us to consider closures of subsets of PEN_m × n, in particular, closures of orbits by strict equivalence, denoted by ^e(λ A - B). Using these concepts, Theorem <ref>, see <cit.>, describes all the possible changes in the KCF of a general unstructured matrix pencil under arbitrarily small perturbations. If in the KCF of a matrix pencil the blocks X are changed to the blocks Y (X and Y of the same size) we write X ⇝ Y.

<cit.> Let P_1 and P_2 be two matrix pencils in KCF. Then, ^e(P_1) ⊃ ^e(P_2) if and only if P_1 can be obtained from P_2 by changing canonical blocks of P_2 after applying a sequence of rules, that can be of the following six types:

* L_j-1 ⊕ L_k+1 ⇝ L_j ⊕ L_k, 1≤ j ≤ k;
* L_j-1^T ⊕ L_k+1^T ⇝ L_j^T ⊕ L_k^T, 1≤ j ≤ k;
* L_j ⊕ E_k+1(μ) ⇝ L_j+1 ⊕ E_k(μ), j,k=0,1,2,… and μ∈ℂ̄;
* L_j^T ⊕ E_k+1(μ) ⇝ L_j+1^T ⊕ E_k(μ), j,k=0,1,2,… and μ∈ℂ̄;
* E_j(μ) ⊕ E_k(μ) ⇝ E_j-1(μ) ⊕ E_k+1(μ), 1≤ j ≤ k and μ∈ℂ̄;
* L_p ⊕ L_q^T ⇝ ⊕_i=1^t E_k_i(μ_i), if p+q+1 = ∑_i=1^t k_i and μ_i ≠ μ_i' for i ≠ i', μ_i ∈ℂ̄.

Observe that in the rules above any block E_0(μ) should be understood as the empty matrix. The vector space of skew-symmetric matrix pencils of size n× n is denoted by PEN^ss_n× n. A distance in PEN^ss_n× n is defined as in PEN_n× n and, thus, the topology considered in PEN^ss_n× n is the induced topology from PEN_n× n. With this topology, we also consider closures of subsets in PEN^ss_n× n and, in particular, closures of orbits by congruence of skew-symmetric matrix pencils, which are denoted by ^c(λ A - B). We often use expressions such as “the pencil P_1 is more generic than the pencil P_2”, whose meaning is that ^e(P_1) ⊃ ^e(P_2) or ^c(P_1) ⊃ ^c(P_2), depending on the context. In this language, the most generic skew-symmetric pencil with rank at most 2w and with exactly r K-blocks associated to the infinite eigenvalue in the skew-symmetric KCF, is a pencil such that the closure of its congruence orbit includes the congruence orbit of any other skew-symmetric pencil with rank at most 2w and with exactly r K-blocks. Define 𝒦_r ⊂ PEN^ss_n× n as the set of all n × n skew-symmetric matrix pencils having exactly r K-blocks.

Let n, w, and r be integers such that n ≥ 2, 2 ≤ 2w ≤ n-1 and r ≤ w. The set of n × n complex skew-symmetric matrix pencils with rank at most 2w and with exactly r K-blocks corresponding to the infinite eigenvalue in its skew-symmetric KCF, is a closed subset of 𝒦_r equal to ^c(W) ∩ 𝒦_r, where

W = diag(M_α+1, …, M_α+1, M_α, …, M_α, K_1, …, K_1),

with s blocks M_α+1, (n-2w-s) blocks M_α and r blocks K_1, where α = ⌊(w-r)/(n-2w)⌋ and s ≡ (w-r) mod (n-2w).

Let us define, in the set of n_1× n_1 complex skew-symmetric matrix pencils with rank 2r_1, the following skew-symmetric canonical form:

M(λ) = diag(M_α+1, …, M_α+1, M_α, …, M_α),

with s blocks M_α+1 and (n_1-2r_1-s) blocks M_α, where α = ⌊ r_1/(n_1-2r_1) ⌋ and s = r_1 mod (n_1-2r_1). Then, (i) ^c(M) ⊇ ^c(X) for every n_1× n_1 skew-symmetric matrix pencil X(λ) with rank at most 2r_1. (ii) The set of n_1 × n_1 complex matrix pencils with rank at most 2r_1 is a closed subset of PEN_n_1 × n_1 equal to ^e(M).

We follow a strategy similar to the one of the proof of <cit.>, but taking into account the presence of r K-blocks, which were not present in <cit.>.
More precisely, taking into account <cit.> (or the stronger result <cit.>), in this proof we work with the KCF rather than with the skew-symmetric KCF under congruence of Theorem <ref>, but we always apply rules from Theorem <ref> in pairs, such that the corresponding change of the skew-symmetric KCF preserves transparently the skew-symmetry. Note that the most generic skew-symmetric pencil with rank at most 2w and exactly r K-blocks has rank exactly 2w, because otherwise there would be more than n-2w pairs of equal left and right minimal indices (equivalently, in the skew-symmetric KCF there would be more than n-2w M-blocks, since the number of left/right singular blocks is equal to the dimension of the rational left/right null-space of the pencil). Moreover, due to the even parity of the rank, there would be an excess of at least two pairs of equal left and right minimal indices. In that case, rule 6 in Theorem <ref> can be applied twice to obtain a more generic skew-symmetric matrix pencil with rank larger by two units and with exactly r K-blocks. Therefore, we focus, in the rest of the proof, on n × n skew-symmetric matrix pencils with rank exactly 2w and with exactly r K-blocks. Each such pencil has the following KCF:

(L_γ_1, …, L_γ_n-2w, L^T_γ_1, …, L^T_γ_n-2w, J, J, E_k_1(∞), …, E_k_r(∞), E_k_1(∞), …, E_k_r(∞)),

since the left and right singular blocks are paired up according to Theorem <ref>, the blocks J of the regular part associated with the finite eigenvalues are paired up, and so are the blocks that correspond to the infinite eigenvalue. Note that the sizes of the 2r blocks that correspond to the infinite eigenvalue, i.e. E_k_i(∞), in the KCF of a generic skew-symmetric matrix pencil of rank equal to 2w and with exactly r K-blocks cannot be larger than 1 (otherwise rules 3 and 4 from Theorem <ref> can be applied to obtain a more generic skew-symmetric pencil and to decrease the sizes of the 2r blocks E_k_i(∞) to 1). The remaining part of the proof repeats the proof of <cit.>, with the difference that <cit.> is now used for determining the most generic (w-r) × (n-r-w) matrix pencil and <cit.> for determining the most generic (n-r-w) × (w-r) matrix pencil. The details can be found in <cit.>, but the proof reduces to applying rules 3 and 4 from Theorem <ref> for every couple of identical single Jordan blocks associated with a finite eigenvalue (using a left and a right singular block L_j and L_j^T, which are coupled) to eliminate these Jordan blocks by shrinking their size to 0, and then applying rules 1 and 2 from Theorem <ref> to each couple of blocks L_j-1 ⊕ L_k+1 and L_j-1^T ⊕ L_k+1^T.

§ AUXILIARY RESULTS ON SKEW-SYMMETRIC MATRIX POLYNOMIALS

§.§ Complete eigenstructure of skew-symmetric matrix polynomials We consider skew-symmetric m× m matrix polynomials P(λ) of grade d, i.e., of degree less than or equal to d, over ℂ: P(λ) = λ^d A_d + … + λ A_1 + A_0, with A_i^T = -A_i and A_i ∈ ℂ^m × m for i=0, …, d. Note that we do not require that the leading coefficient, A_d, of P(λ) is nonzero. As usual, the degree of P(λ), denoted as deg(P), is the largest index k such that A_k ≠ 0. We denote the vector space of m × m skew-symmetric matrix polynomials of grade d by POL^ss_d, m× m. We write POL^ss instead of POL^ss_d, m× m if there is no risk of confusion. Note that POL^ss_1, n× n = PEN^ss_n× n.
As in the case of pencils, see Section <ref>, by using the Frobenius matrix norm of complex matrices <cit.>, a distance in POL^ss_d, m× m is defined as d(P,P') = ( ∑_i=0^d ‖A_i - A'_i‖_F^2 )^1/2, where P(λ) = ∑_i=0^d λ^i A_i and P'(λ) = ∑_i=0^d λ^i A'_i, making POL^ss_d, m× m a metric space with the Euclidean topology induced by this distance. For convenience, we define the Frobenius norm of the matrix polynomial P as ‖P(λ)‖_F = ( ∑_i=0^d ‖A_i‖_F^2 )^1/2. An arbitrarily small pencil or matrix polynomial is a pencil or matrix polynomial with arbitrarily small Frobenius norm. Two matrix polynomials P(λ) and Q(λ) are called unimodularly congruent if F(λ)^T P(λ) F(λ) = Q(λ) for some unimodular matrix polynomial F(λ) (i.e., det F(λ) ∈ ℂ∖{0}), see also <cit.>.

<cit.> Let P(λ) be a skew-symmetric m× m matrix polynomial. Then there exist r ∈ ℕ with 2r ≤ m and a unimodular matrix polynomial F(λ) such that

F(λ)^T P(λ) F(λ) = [ 0 g_1(λ); -g_1(λ) 0 ] ⊕ … ⊕ [ 0 g_r(λ); -g_r(λ) 0 ] ⊕ 0_m-2r =: S(λ),

where g_j is a monic polynomial, for j=1, …, r, and g_j(λ) divides g_j+1(λ), for j=1, …, r-1. Moreover, the canonical form S(λ) is unique.

Similarly to the unimodular congruence, two matrix polynomials P(λ) and Q(λ) are called unimodularly equivalent if U(λ) P(λ) V(λ) = Q(λ) for some unimodular matrix polynomials U(λ) and V(λ) (i.e., det U(λ), det V(λ) ∈ ℂ∖{0}). Notably, the canonical form in Theorem <ref> is the skew-symmetric version of the well-known Smith form for matrix polynomials under unimodular equivalence <cit.>. Note that the (normal) rank of the skew-symmetric matrix polynomial P(λ) in Theorem <ref> is equal to the nonnegative integer 2r, i.e., it is always an even number. The monic scalar polynomials g_1(λ), …, g_r(λ) in Theorem <ref> are called the invariant polynomials of P(λ), and, for any α∈ℂ, each of them can be uniquely factored as g_j(λ) = (λ - α)^σ_j p_j(λ), with p_j(α) ≠ 0 and σ_j ≥ 0 an integer. The sequence 0 ≤ σ_1 = σ_1 ≤ σ_2 = σ_2 ≤ ⋯ ≤ σ_r = σ_r, in which each exponent is listed twice, is called the sequence of partial multiplicities of P(λ) at α. The number α is a finite eigenvalue of P(λ) if the partial multiplicity sequence at α contains at least two nonzero terms, or, equivalently, if α is a root of at least one invariant polynomial g_j(λ). The elementary divisors of P(λ) associated with a finite eigenvalue α are the collection of factors (λ - α)^σ_1, (λ - α)^σ_1, (λ - α)^σ_2, (λ - α)^σ_2, …, (λ - α)^σ_r, (λ - α)^σ_r for which σ_j > 0.

Let P(λ) ∈ POL^ss_d, m× m. Then, the sequence of partial multiplicities of P(λ) at infinity is defined to be the sequence of partial multiplicities of the matrix polynomial rev_d P(λ) := λ^d P(1/λ) at zero. Moreover, if zero is an eigenvalue of rev_d P(λ), then we say that λ = ∞ is an eigenvalue of the matrix polynomial P(λ) ∈ POL^ss_d, m× m. The elementary divisors for the zero eigenvalue of rev_d P(λ) are the elementary divisors associated with the infinite eigenvalue of P(λ). We emphasize that the sequence of partial multiplicities of P(λ) at infinity, as well as the fact that λ = ∞ is an eigenvalue of P(λ), depend on the grade d chosen for P(λ). This observation plays a key role in the results of this paper. Theorem <ref> connects the smallest partial multiplicity at ∞ with the grade and the degree of the matrix polynomial, and with the fact of the leading coefficient being equal to zero. It is a consequence of <cit.>. The complete proof can be found in <cit.>. Since Theorem <ref> is important in this paper, we sketch the simple proof for the sake of completeness.
Let P(λ) = ∑_i=0^d λ^i P_i ∈ POL^ss_d, m × m with rank P = 2r, r>0, and sequence of partial multiplicities at ∞ equal to 0 ≤ γ_1 = γ_1 ≤ γ_2 = γ_2 ≤ … ≤ γ_r = γ_r. Then, (a) γ_1 = d - deg(P), and (b) P_d = 0 if and only if 1 ≤ γ_1 = γ_1 ≤ γ_2 = γ_2 ≤ … ≤ γ_r = γ_r.

Observe that P_d = 0 if and only if d > deg(P). Therefore part (b) follows from part (a), so we only need to prove part (a). Note that rev_d P(λ) = λ^d - deg(P) rev_deg(P) P(λ). Therefore, the (skew-symmetric) Smith form of rev_d P(λ) is the one of rev_deg(P) P(λ) multiplied by λ^d - deg(P), which implies that the smallest partial multiplicity of P at ∞ is γ_1 = d - deg(P) + γ̃_1, where γ̃_1 is the smallest partial multiplicity at 0 of rev_deg(P) P(λ). The fact that (rev_deg(P) P)(0) = P_deg(P) ≠ 0 implies that γ̃_1 = 0, and the result is proved.

For an m× n matrix polynomial P(λ), the left and right null-spaces, over the field of rational functions ℂ(λ), are defined as follows:

N_left(P) := { y(λ)^T ∈ ℂ(λ)^1 × m : y(λ)^T P(λ) = 0_1× n }, N_right(P) := { x(λ) ∈ ℂ(λ)^n× 1 : P(λ) x(λ) = 0_m× 1 }.

Each subspace V of ℂ(λ)^n has bases consisting entirely of vector polynomials. A basis of V consisting of vector polynomials whose sum of degrees is minimal among all bases of V consisting of vector polynomials is called a minimal basis of V. The ordered list of degrees of the vector polynomials in any minimal basis of V is always the same. These degrees are called the minimal indices of V <cit.>. This allows us to define the left and right minimal indices of a matrix polynomial P(λ) as those of N_left(P) and N_right(P), respectively. Note that for a skew-symmetric matrix polynomial the left minimal indices are equal to the right ones.

We define the complete eigenstructure of a skew-symmetric matrix polynomial P(λ) ∈ POL^ss_d, m× m of grade d to be the collection of all finite and infinite eigenvalues, the corresponding elementary divisors (or, equivalently, the corresponding sequences of partial multiplicities), and the left and right minimal indices of P(λ). To refer concisely to the subset of POL^ss_d, m× m of skew-symmetric matrix polynomials with the same size m× m, with the same grade d, and with the same complete eigenstructure, we define the notion of orbit of a skew-symmetric matrix polynomial. Notably, an analogous definition for general polynomials is given in <cit.>. Let P(λ) ∈ POL^ss_d, m× m. The subset of matrix polynomials in POL^ss_d, m× m with the same complete eigenstructure as P is called the orbit of P, denoted (P). Observe that Theorem <ref>-(a) implies that all the polynomials in (P) have the same degree. Moreover, they also have the same rank, since the rank is determined by the size m× m and the number of left (or right) minimal indices. Note that, by contrast to congruence orbits of skew-symmetric matrix pencils, Definition <ref> is not associated with an action of any group.

§.§ Orbits of the same skew-symmetric matrix polynomial viewed with different grades The proofs of the main results in this paper rely on comparing the properties of a skew-symmetric matrix polynomial P(λ) ∈ POL^ss_d, m × m of grade d with the entry-wise identical polynomial P(λ) = λ^d+1·0 + P(λ) ∈ POL^ss_d+1, m × m of grade d+1, as well as on comparing the orbits, and their closures, of these two matrix polynomials. Lemma <ref> establishes these comparisons. Recall in the statement of this lemma that (P) ⊆ (P) ⊆ POL^ss_d, m × m and that (P) ⊆ (P) ⊆ POL^ss_d+1, m × m. Let P(λ) ∈ POL^ss_d, m × m have rank 2r, r>0, and let P(λ) := λ^d+1·0 + P(λ) ∈ POL^ss_d+1, m × m.
Then: (a) P = P = 2r.(b) P(λ) and P (λ) have the same finite eigenvalues, with the same sequences of partial multiplicities, or, equivalently, with the same elementary divisors.(c) P(λ) and P (λ) have the same left and right minimal indices.(d)γ_1=γ_1 ≤γ_2 = γ_2 ≤…≤γ_r = γ_r is the sequence of partial multiplicities at ∞ of P(λ) if and only if γ_1 +1=γ_1 +1≤γ_2 +1= γ_2 +1 ≤…≤γ_r +1 = γ_r +1 is the sequence of partial multiplicities at ∞ of P(λ).(e) If Q (λ) = ∑_i=0^d+1λ^i Q_i ∈ (P), then Q_d+1 = 0. (f) If Q (λ) = ∑_i=0^d+1λ^i Q_i ∈ (P), then Q_d+1 = 0. (g) Let Q(λ) = λ^d+10 + Q(λ) ∈^ss_d+1, m × m, where Q(λ) ∈^ss_d, m × m. Then, * Q∈ (P) if and only if Q ∈ (P),* Q∈ (P) if and only if Q ∈ (P), and* (Q) ⊆ (P) if and only if (Q) ⊆ (P).Parts (a), (b) and (c) are trivial because the rank, the (skew-symmetric) Smith form and the left and right rational null spaces of a matrix polynomial do not depend on the grade chosen for that polynomial.Part (d) follows from the fact that _d+1P (λ) = λ_d P (λ). Therefore, the (skew-symmetric) Smith form of _d+1P (λ) is equal to λ times the Smith form of _d P (λ), which implies the result. Part (e). Since all the matrix polynomials in (P) ⊆^ss_d+1, m × m have the same complete eigenstructure, all of them have the same sequence of partial multiplicities at ∞. Then, Theorem <ref>-(a) implies that all of them have the same degree. So, Q_d+1 = 0 because the coefficient of λ^d+1 in P is P_d+1 =0.Part (f).Q (λ) = ∑_i=0^d+1λ^i Q_i ∈ (P) if and only ifQ is the limit of a sequence of matrix polynomials in (P). But, by (e), all the terms in this sequence have the coefficient of λ^d+1 equal to zero. So, the limit of the sequence have also the coefficient of λ^d+1 equal to zero.Part (g)-1 follows from the definition of orbit and from applying (b), (c), and (d) to P and P and to Q and Q.Part (g)-2. If Q∈ (P), then Q = lim_k→∞Q^(k), for a sequence {Q^(k)}_k ∈ℕ whose terms belong to (P) ⊂^ss_d+1, m × m. Taking into account (e), we have thatQ^(k)(λ) = λ^d+10 + Q^(k)(λ), where {Q^(k)}_k ∈ℕ⊂^ss_d, m × m, and, taking into account (g)-1, {Q^(k)}_k ∈ℕ⊂ (P). Thus, Q = lim_k→∞ Q^(k), which implies that Q ∈ (P).Conversely, if Q ∈ (P), thenQ = lim_k→∞ Q^(k), for a sequence {Q^(k)}_k ∈ℕ whose terms belong to (P) ⊂^ss_d, m × m. By part (g)-1, the sequence {λ^d+10 + Q^(k)}_k ∈ℕ is included in (P) ⊂^ss_d+1, m × m. Moreover, lim_k →∞λ^d+10 + Q^(k) = Q, which implies that Q∈ (P). Finally, for part (g)-3, let us first assume that (Q)⊆(P). Let Q'∈(Q). We want to prove that Q'∈(P). Since Q'∈(Q), by part (g)-1 we conclude that Q':=λ^d+10+Q'∈(Q), and then Q'∈(P), by hypothesis. But now part (g)-2 implies Q'∈(P).Conversely, let us assume that (Q)⊆(P), and let Q'∈(Q). We want to prove that Q'∈(P). By part (e) we know that Q'=λ^d+10+Q', for some Q'∈ POL^ss_d,m× m, so part (g)-1 together with the hypothesis Q'∈(Q) imply that Q'∈(Q). But this in turn implies Q'∈(P), by hypothesis, sopart (g)-2 gives Q'∈(P), as wanted. §.§ Linearization of skew-symmetric matrix polynomials and their perturbations A matrix pencil F_P is called a linearization of a matrix polynomial P(λ)of grade d if F_P has the same finite eigenvalues and associated elementary divisors, the same number of left minimal indices, and the same number of right minimal indices as P <cit.>. 
If in addition, rev_1 F_P is a linearization of rev_d P, then F_P is called a strong linearization of P and, then, F_P and P have also the same infinite elementary divisors. The following pencil-template is known to be a skew-symmetric strong linearization of skew-symmetric matrix polynomials P(λ) = ∑_i=0^d λ^i A_i ∈ POL^ss_d, m× m of odd grade d <cit.>, see also <cit.>:

F_P(i,i) = λ A_d-i+1 + A_d-i if i is odd, 0 if i is even,
F_P(i,i+1) = -I_m if i is odd, -λ I_m if i is even,
F_P(i+1,i) = I_m if i is odd, λ I_m if i is even,

where F_P(j,k) denotes the m × m matrix pencil at position (j,k) of the block pencil F_P and j,k = 1, …, d. The blocks of F_P in positions which are not specified above are zero. We rewrite this strong linearization template in matrix form as the block tridiagonal pencil

F_P(λ) = [ λ A_d + A_{d-1}  -I_m ; I_m  0  -λ I_m ; λ I_m  λ A_{d-2} + A_{d-3}  -I_m ; ⋱ ⋱ ⋱ ; I_m  0  -λ I_m ; λ I_m  λ A_1 + A_0 ],

where each row lists the blocks on the subdiagonal, diagonal and superdiagonal, and all other blocks are zero. Note that the linearization F_P is defined only for skew-symmetric matrix polynomials of odd grade. The reason is that there is no skew-symmetric linearization-template (i.e., a skew-symmetric companion form in the language of <cit.>) for skew-symmetric matrix polynomials of even grade <cit.>. For any skew-symmetric matrix polynomial P(λ) ∈ POL^ss_d, m× m, the strong linearization (<ref>) preserves the finite and infinite elementary divisors of P(λ) but does not preserve the left and right minimal indices of P(λ). Nevertheless, the relations between the minimal indices of a skew-symmetric matrix polynomial P(λ) and its linearization (<ref>) are derived in <cit.>, see also <cit.>. We recall them in Theorem <ref>.

<cit.> Let P(λ) be a skew-symmetric m× m matrix polynomial of odd grade d ≥ 3, and let F_P be its strong linearization (<ref>) given above. If 0 ≤ ε_1 ≤ ε_2 ≤ … ≤ ε_t are the right (left) minimal indices of P then

0 ≤ ε_1 + 1/2(d-1) ≤ ε_2 + 1/2(d-1) ≤ ⋯ ≤ ε_t + 1/2(d-1)

are the right (left) minimal indices of F_P.

The linearization F_P in (<ref>) is crucial for obtaining the results in Section <ref>. Thus we define the generalized Sylvester space consisting of the linearizations F_P of all m × m skew-symmetric matrix polynomials of odd grade d, namely:

GSYL^ss_d,m× m = { F_P : P(λ) ∈ POL^ss_d, m× m with odd d }.

If there is no risk of confusion we will write GSYL^ss instead of GSYL^ss_d, m× m. Given F_P = λ A - B and F_P' = λ A' - B', the function d(F_P, F_P') := ( ‖A-A'‖_F^2 + ‖B-B'‖_F^2 )^1/2 mentioned in Sections <ref> and <ref> is a distance on GSYL^ss and it makes GSYL^ss a metric space. Since d(F_P, F_P') = d(P,P'), there is a bijective isometry (and therefore a homeomorphism):

f: POL^ss_d, m× m → GSYL^ss_d, m× m such that f(P) = F_P.

We also define the orbit of the skew-symmetric linearizations of type (<ref>) of a fixed skew-symmetric matrix polynomial P ∈ POL^ss_d, m× m of odd grade d as

(F_P) = { (S^T F_P S) ∈ GSYL^ss_d, m× m : S ∈ GL_md(ℂ) }.

We emphasize that all the elements of (F_P) have the block structure of the elements of GSYL^ss_d, m× m. Thus, in particular, (P) = f^-1((F_P)), as a consequence of the properties of strong linearizations and Theorem <ref>, and (P) = f^-1((F_P)), as a consequence of f being a homeomorphism.
Moreover, we also have that, for any m × m skew-symmetric matrix polynomials P, Q of odd grade d, (P) ⊇(Q) if and only if ( F_P) ⊇( F_Q), where it is essential to note that the closures are taken in the metric spacesand , respectively, defined above.Similarly to the unstructured matrix pencil case, ( F_P) is open in its closure in the relative Euclidean topology, and so is (P) since f is a homeomorphism.§ GENERIC SKEW-SYMMETRIC MATRIX POLYNOMIALS WITH BOUNDED RANK AND FIXED GRADEIn this section we present the complete eigenstructure of the generic m× m skew-symmetric matrix polynomials of rank at most 2r and even grade d in Theorem <ref>. Since the case of odd grade is known, we also provide a result that does not depend on the parity of din Theorem <ref>. We start by proving the following auxiliary lemma.Let d, m, and r be integers such that d≥ 1, m≥ 2, and 2 ≤ 2 r ≤ (m-1). For any skew-symmetric matrix polynomial Q (λ) ∈^ss_d, m × m, with Q=2r_1 and r_1<r, there exists a sequence of skew-symmetric matrix polynomials { P^(k) (λ)}_k ∈ℕ⊂^ss_d, m × m with P^(k) = 2r, for all k ∈ℕ, such that lim_k →∞ P^(k)= Q. Let μ∈ℂ be such that Q(μ) = 2 r_1, that is, μ is not an eigenvalue of Q(λ). Since the constant matrix Q(μ) ∈ℂ^m× m is skew symmetric, there exist a unitary matrix U ∈ℂ^m× m and positive real numbers s_1, … , s_r_1 such that (see <cit.>)Q(μ)= U ( [0s_1; -s_10 ]⊕…⊕[0s_r_1; -s_r_10 ]⊕ 0_m-2r_1) U^T .Then, we define the following constant m× m skew-symmetric matrixE := U ( 0_2r_1⊕[01; -10 ]⊕…⊕[01; -10 ]⊕ 0_m-2r) U^T,where there are r-r_1 blocks equal to [[01; -10 ]]. Then, the desired sequence of skew-symmetric matrix polynomials is defined as P^(k)(λ):=Q(λ)+1/k E. Note that P^(k) =2r, because P^(k)≤ Q +(1/k E) = 2 r_1 + 2 r - 2 r_1 = 2 r and P^(k)≥ P^(k)(μ) =(Q(μ) + 1/k E) = 2 r.We recall the following lemma that reveals,forskew-symmetric matrix polynomials P withoddgrade d, a relation between ( F_P), where the closure is taken in ^ss_d,m× m, and ^c( F_P), where the closure is taken in ^ss_n × n, with n = m d. <cit.> Let P be an m× m skew-symmetric matrix polynomial with odd grade d and F_P be its linearization(<ref>). Then ( F_P)= ^c( F_P) ∩^ss_d,m× m.We are finally in the position of stating and proving the main result of this paper. Let d,m, and r be integers such that d ≥ 1 is even, m ≥ 2, and 2 ≤ 2r ≤ (m-1). The set of m× m complex skew-symmetric matrix polynomials of grade d with rank at most 2r is a closed subset of _d,m× m^ss equal to (W), whereW ∈^ss_d, m× m is an m × m complex skew-symmetric matrix polynomial of degree exactly d and rank exactly 2r with no elementary divisors at all, with t left minimal indices equal to (β +1) and with (m-2r-t) left minimal indices equal to β, where β = ⌊ rd / (m-2r) ⌋ and t ≡ rd(m-2r), and with the right minimal indices equal to the left minimal indices.By Lemma <ref>, each skew-symmetric matrix polynomial in ^ss_d, m× m of rank 2r_1, with r_1 < r, is in the closure of the subset of ^ss_d, m× m formed by the matrix polynomials of rank exactly 2r. 
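As a concrete illustration of the template (<ref>) (a worked example we add here; it follows directly from the block formulas above), for grade d = 3 and P(λ) = λ^3 A_3 + λ^2 A_2 + λ A_1 + A_0 the linearization is the 3m × 3m pencil

```latex
F_P(\lambda)=
\begin{bmatrix}
\lambda A_3 + A_2 & -I_m & 0\\
I_m & 0 & -\lambda I_m\\
0 & \lambda I_m & \lambda A_1 + A_0
\end{bmatrix},
```

which is skew-symmetric whenever A_0, …, A_3 are, since (λ A_3 + A_2)^T = -(λ A_3 + A_2) and each identity block appears with opposite signs in symmetric positions.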
Therefore it remains to show that any skew-symmetric matrix polynomial of rank exactly 2r in ^ss_d, m× m is in the closure of the orbit of Was in the statement.Denote the complete eigenstructure described in the statement (consisting only of the left and right minimal indices) byW: {β+1, … , β+1_t,β, … , β_m-2r-t^left minimal indices, β+1, … , β+1_t, β, … , β_m-2r-t^right minimal indices}.First we show that there exists an m× m skew-symmetric matrix polynomialW ∈^ss_d, m× m of degree exactly d and rank exactly 2r that has the complete eigenstructure Win (<ref>). By <cit.> it is enough to show that the sum of the left (or right) minimal indices of W is equal to rd:∑_1^t (β +1) + ∑_1^m-2r-tβ = ∑_1^m-2rβ + t = (m-2r) ⌊ rd/(m-2r) ⌋ + t = rd.Next, as in Subsection <ref>, we add the term λ^d+10 to W and denote the (entry-wise identical) result by W, i.e., W = λ^d+10 + W ∈^ss_d+1, m× m, in order to view the polynomial as a polynomial of odd grade d+1. Parts (a)–(d) in Lemma <ref> imply that the complete eigenstructure of W is given by the minimal indices in (<ref>) together with 2r infinite elementary divisors with degree equal to one, i.e., the sequence of partial multiplicities at ∞ of W is γ_1=γ_1 = γ_2=γ_2 = … = γ_r = γ_r =1.For everyskew-symmetric m × m matrix polynomial P of odd grade d+1 and rank at most 2r, the linearization F_P in (<ref>) has rankequal to (P) + md, i.e., rank at most 2r+md, because F_P is unimodularly equivalent to P ⊕ I_md. The linearization F_W of the matrix polynomial W is an m(d+1) × m(d+1) skew-symmetric matrix pencil with rank 2r+md and, by Theorem <ref>, the KCF of F_W is the direct sum of the following blocks:{ L_β+η+1, … ,L_β+η+1_t, L_β+η, … ,L_β+η_m-2r-t,L_β+η+1^T, … ,L_β+η+1^T_t,L_β+η^T, … ,L_β+η^T_m-2r-t, E_1(∞),…, E_1(∞)_2r},where η=1/2d.Next, we show that the KCF of F_W as in (<ref>) coincides with the KCF of the most generic skew-symmetric matrix pencil W of rank 2w=2r+md and size n × n, where n = m (d+1), having exactly rK-blocks associated to the infinite eigenvalue in its skew-symmetric KCF. This generic KCF is given in Theorem <ref>, namely:{ L_α+1, … ,L_α+1_s, L_α, … ,L_α_n-2w-s,L_α+1^T, … ,L_α+1^T_s,L_α^T, … ,L_α^T_n-2w-s,E_1(∞),…, E_1(∞)_2r}.In other words, weare going to show that the numbers and the sizes of the L and L^T blocks in (<ref>) and (<ref>) coincide, namely β+ d/2 = α, s=t, and m-2r-t = n - 2w - s. For the sizes of the blocks we haveβ+d/2=⌊rd/m-2r⌋ + d/2 =⌊(m-2r)d + 2rd/2(m-2r)⌋=⌊ md/2(m-2r)⌋ = ⌊md+2r - 2r/2(m(d+1) - (md + 2r))⌋ = ⌊2w - 2r/2(n - 2w)⌋ = α. For the numbers of the blocks, we havet≡ rd≡(m-2r)d/2 + rd(m-2r) ≡ (md+2r - 2r)/2(m(d+1) - (md + 2r))≡ w-r ≡ s (n - 2w)and, since s,t < n-2w, it must be t=s. Moreover,m-2r-t = m(d+1) - md - 2r -t= n - 2w - s. Thus F_W is congruent to the most generic skew-symmetric matrix pencil 𝒲, with exactly rK-blocks corresponding to the infinite eigenvalue in its skew-symmetric KCF and of rank 2w = 2r + md, obtained in Theorem <ref>, since they both are skew-symmetric and have the same KCF. Therefore ^c( F_W)=^c(𝒲). Given any P ∈^ss_d,m× m of rank exactly 2r, the entry-wise identical matrix polynomial P = λ^d+10 + P ∈^ss_d+1,m× m has rank exactly 2r and exactly 2r infinite elementary divisors, by Lemma <ref>-(a)-(d). So, by Theorem <ref>, we have that ^c(𝒲)⊇^c ( F_P), thus ^c( F_W) =^c( W)⊇^c ( F_P). Therefore^c( F_W) ∩^ss_d+1,m× m⊇^c( F_P) ∩^ss_d+1,m× m, and Lemma <ref> implies ( F_W)⊇( F_P). 
According to the discussion after (<ref>), ( F_W) ⊇( F_P) is equivalent to (W) ⊇(P) and byLemma <ref>-(g)-3 the latter is equivalent to (W) ⊇(P). Therefore any m× m skew-symmetric matrix polynomial of grade d, with rank exactly 2rbelongs to (W). This completes the proof (we remind that the skew-symmetric matrix polynomials of grade d with rank smaller than 2r are treated at the beginning of this proof).In Table <ref> we provide examples of the eigenstructures (namely, the left minimal indices) of generic skew-symmetric matrix polynomials of grade 2 with various sizes (from 3 × 3 to 7× 7) and ranks (2,4, and 6) obtained from Theorem <ref>. Note that Theorem <ref> is an even-grade-version of the corresponding theorem for the odd gradeskew-symmetric matrix polynomials <cit.>. This allows us to state the following theorem that does not depend on the parity of d.Let m,r, and d be integers such that m ≥ 2, d ≥ 1, and 2 ≤ 2r ≤ (m-1). The set of m× m complex skew-symmetric matrix polynomials of grade d with rank at most 2r is a closed subset of _d,m× m^ss equal to (W), whereW ∈^ss_d, m× m is an m × m complex skew-symmetric matrix polynomial of degree exactly d and rank exactly 2r with no elementary divisors at all, with t left minimal indices equal to (β +1) and with (m-2r-t) left minimal indices equal to β, where β = ⌊ rd / (m-2r) ⌋ and t = rd(m-2r), and with the right minimal indices equal to the left minimal indices. If d is odd then the statement coincides with <cit.>, and if d is even then it coincides with Theorem <ref>. § CODIMENSION COMPUTATIONSWe start by recalling that the congruence orbit of an n× n skew-symmetric pencil λ A - B is adifferential manifold in the complex n^2-n dimensional space of skew-symmetric pencils^ss_n × n. Forλ A - B, define the dimension of ^c (λ A - B) to be the dimension of the tangent space to this orbit at the point λ A - B, namely:_λ A - B^c:={λ (X^TA+AX) - (X^TB + BX): X∈ℂ^n× n}The orthogonal complement (with respect to the Frobenius inner product) to _λ A - B^c, is called the normal space to ^c (λ A - B) at the point λ A - B. The dimension of the normal space is the codimension of the congruence orbit of λ A - B and is equal to n(n-1) minus the dimension of the congruence orbit of λ A - B. Explicit expressions for the codimensions of congruence orbits of skew-symmetric pencils in _n× n^ss are presented in <cit.>, see also <cit.>, and implemented in the MCS (Matrix Canonical Structure) Toolbox <cit.>.Recall that, for Q ∈_d, m× m^ss of odd grade, we define the codimension of (Q) ⊂_d, m× m^ss to be _ (Q) := _( F_Q), where by _X () we mean the codimension of the orbitin the space X, see <cit.>. Moreover, _( F_P)= _^c( F_P), by <cit.>.Now let P∈_d, m× m^ss be a matrix polynomial of even grade d andP := 0λ^d+1 + P ∈_d+1, m× m^ss be a matrix polynomial of odd grade d+1. Denote by_d+1, m× m^ss,0 the space of the linearizations (<ref>) ofskew-symmetric polynomials P of odd grade d+1 withzero coefficient for λ^d+1. For a polynomial P of even grade d, we define the codimension of the orbit, _ (P), as _^0 ( F_P), i.e., the codimension of ( F_P) in the space_d +1, m× m^ss,0. Recall also that the dimension of ( F_P) in the space_d+1, m× m^ss is defined as the dimension of_ F_P^c ∩_d+1, m× m^ss and, similarly, the dimension of_ F_P^c ∩_d+1, m× m^ss,0 is equal to the dimension of ( F_P) in_d+1, m× m^ss,0. In Lemma <ref> we show that these two dimensions are equal to each other.Let P= 0λ^d+1 + P_d λ^d + … + P_1 λ + P_0∈^ss_d+1, m× m be askew-symmetric matrix polynomial. 
The dimension of (F_P) in _d+1, m× m^ss is equal to the dimension of (F_P) in _d+1, m× m^ss,0. The tangent space _F_P^c = { X^T F_P + F_P X : X ∈ ℂ^n× n } to ^c(F_P) at the point F_P consists of the skew-symmetric pencils of the following shape:

X^T F_P + F_P X = λ [ 0_m× m  *  …  * ; *  *  …  * ; ⋮  ⋮  ⋱  ⋮ ; *  *  …  * ] - [ *  *  …  * ; *  *  …  * ; ⋮  ⋮  ⋱  ⋮ ; *  *  …  * ],

where * denotes possibly non-zero entries. The zero block in the (1,1) entry of the λ-matrix of (<ref>) ensures that _F_P ∩ _d+1, m× m^ss is equal to _F_P ∩ _d+1, m× m^ss,0, and so their dimensions are equal to each other.

Recall that the dimension and codimension of any given orbit sum up to the dimension of the whole space. Note also that dim _d+1, m× m^ss = dim _d+1, m× m^ss,0 + m(m-1)/2. Thus, taking into account Lemma <ref>, the codimension of (F_P) in _d+1, m× m^ss has to be m(m-1)/2 larger than the codimension of (F_P) in _d+1, m× m^ss,0. Therefore, for the generic m× m skew-symmetric matrix polynomial P with even grade d identified in Theorem <ref>, we have

cod (P) = cod^0(F_P) = cod(F_P) - m(m-1)/2 = cod ^c(F_P) - m(m-1)/2 = cod ^c(W) - m(m-1)/2,

where W is the pencil in Theorem <ref> with the identifications n = m(d+1) and w = (md + 2r)/2. Using <cit.>, see also <cit.>, applied to the skew-symmetric pencil W in (<ref>), the codimension of ^c(W) in _n× n^ss is a sum of the part corresponding to the K-blocks, namely r(2r-1), the part corresponding to the interaction between the K-blocks and M-blocks, namely 2r(n-2w), and the part corresponding to the M-blocks, namely ∑_i<j (2 max{m_i, m_j} + ε_ij), where m_i and m_j are the indices of the M-blocks (either α or α+1), ε_ij = 2 if m_i = m_j and ε_ij = 1 otherwise, resulting in:

cod ^c(W) = (n-2w-s)(n-2w-s-1)(2α+2)/2 + s(s-1)(2(α+1)+2)/2 + s(n-2w-s)(2(α+1)+1) + 2r(n-2w) + r(2r-1)
= (n-2w-s)(n-2w-s-1)(α+1) + s(s-1)(α+1) + s(s-1) + 2s(n-2w-s)(α+1) + s(n-2w-s) + 2r(n-2w) + r(2r-1)
= (α+1)((n-2w-s)(n-2w-1) + s(n-2w-1)) + s(n-2w-1) + 2r(n-2w) + r(2r-1)
= (n-2w)(n-2w-1)(α+1) + s(n-2w-1) + 2r(n-2w) + r(2r-1)
= (n-2w-1)((n-2w)α + s + n-2w) + 2r(n-2w) + r(2r-1)
= (n-2w-1)(n-w-r) + 2r(n-2w) + r(2r-1).

Substituting n = m(d+1) and w = (md + 2r)/2 in the previous identity we obtain:

cod(F_P) = (n-2w-1)(n-w-r) + 2r(n-2w) + r(2r-1)
= (m(d+1) - 2r - md - 1)(m(d+1) - (2r+md)/2 - r) + 2r(m(d+1) - 2r - md) + r(2r-1)
= (m-2r-1)(md+2m-4r)/2 + 2mr - 2r^2 - r
= (m-2r-1)(md+m-2r)/2 + m(m-1)/2.

Taking into account equation (<ref>), we obtain:

cod(P) = cod^0(F_P) = cod(F_P) - m(m-1)/2 = (m-2r-1)(md+m-2r)/2,

which coincides with the codimension found in <cit.> for the odd-grade case.

§ CONCLUSIONS AND FUTURE WORK This paper closes the knowledge gap for skew-symmetric matrix polynomials with bounded rank and fixed grade. Namely, it provides the generic complete eigenstructure for skew-symmetric matrix polynomials of rank at most 2r and even grade d. It is also the first paper that tackles such a problem for structured matrix polynomials of even grade. Moreover, the obtained formulas (for the generic eigenstructures and codimensions) are exactly the same as in the case of odd d obtained in <cit.>, showing that the restriction to the odd case was motivated by the need of using linearization techniques. The extension of this result to other classes of structured matrix polynomials of even grade, such as symmetric, symmetric/skew-symmetric, or palindromic, is part of our future work.

§ ACKNOWLEDGEMENTS The work of A. Dmytryshyn was supported by the Swedish Research Council (VR) grant 2021-05393. The work of F. De Terán and F. M.
Dopico has been partially funded by the Agencia Estatal de Investigación of Spain through grants PID2019-106362GB-I00 MCIN/AEI/10.13039/501100011033/ and RED2022-134176-T, and by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M23), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
http://arxiv.org/abs/2312.16672v1
{ "authors": [ "Fernando De Terán", "Andrii Dmytryshyn", "Froilán M. Dopico" ], "categories": [ "math.RT", "cs.NA", "math.NA", "15A18, 15A21" ], "primary_category": "math.RT", "published": "20231227183445", "title": "Even grade generic skew-symmetric matrix polynomials with bounded rank" }
An improved spectral inequality for sums of eigenfunctions

Axel Osses (Departamento de Ingeniería Matemática, Center for Mathematical Modeling, Universidad de Chile, Chile; [email protected])
Faouzi Triki (Laboratoire Jean Kuntzmann, Université de Grenoble-Alpes, France; [email protected])

January 14, 2024
================================

We establish a new spectral inequality for the quantified estimation of the H^s-norm, s ≥ 0, of a finite linear combination of eigenfunctions in a domain in terms of its H^s-norm in a strictly open subset of the whole domain. The corresponding upper bound depends exponentially on the square root of the frequency number associated to the linear combination.

Keywords: Spectral inequalities; Unique continuation; Frequency number.

§ INTRODUCTION

Let Ω⊂ℝ^d, d ≥ 2, be a C^2 bounded domain with boundary ∂Ω. Let us consider the eigenvalues and eigenfunctions {(λ_k, φ_k)}_k≥1 of the elliptic operator:

-÷(γ∇φ_k) = λ_k φ_k in Ω,  φ_k = 0 on ∂Ω,

for γ∈ C^2(Ω) with γ(x) ≥ γ_0 > 0 in Ω, and where the φ_k∈ H^1_0(Ω) form an orthonormal basis of L^2(Ω) with a non-decreasing sequence of eigenvalues. We will consider only a finite number of eigenvalues with maximal value λ (hence up to an index K, counting repeated multiplicities):

0 < λ_1 ≤ λ_2 ≤ … ≤ λ_k ≤ λ_K = λ.

We recall the following spectral inequality for a finite linear combination of eigenfunctions of Lebeau and Jerison: Given a nonempty open set ω⊂Ω, ω≠Ω, there exist constants C_1 > 0 and C_2 ≥ 0 such that, if we define

ψ(x) = ∑_λ_k≤λ a_k φ_k(x), x∈Ω,

for some coefficients a_k∈ℝ, then

‖ψ‖_L^2(Ω) ≤ C_1 e^C_2√(λ) ‖ψ‖_L^2(ω).

The original proof of <cit.> follows an approach similar to <cit.>, and is based on the fact that the function u(x,t) = ∑_λ_k≤λ a_k (1/√(λ_k)) sinh(√(λ_k) t) φ_k(x) is the unique solution of the elliptic equation u_tt + ÷(γ∇ u) = 0 in Ω×(-T,T) for T > 0 with homogeneous Dirichlet boundary conditions on ∂Ω×(-T,T) and Cauchy data u(x,0) = 0 and u_t(x,0) = ψ on Ω. So the inequality (<ref>) is in fact a quantified unique continuation from a neighborhood of ω×{0}. The spectral inequality (<ref>) is thus a quantification of the unique continuation of a finite sum of eigenfunctions from ω to the whole domain. The exponential dependency on the highest eigenvalue through √(λ) shows that a finite sum of eigenfunctions surprisingly behaves as a solution of an elliptic PDE. Indeed, in an earlier result, Donnelly <cit.> had used the fact that ψ satisfies:

(÷(γ∇·) + λ_1 ·)(÷(γ∇·) + λ_2 ·) ⋯ (÷(γ∇·) + λ_K ·) ψ = 0.

Even if the inequality (<ref>) is asymptotically optimal (see Theorem <ref>), some interesting open problems remain. An important one is how the constants depend on ω <cit.>. For instance, in the very simple case of a single frequency ψ = φ_k, for some fixed k, it is well known that we can take the constant C_2 = 0 in (<ref>) provided that ω satisfies the geometric control condition (GCC) (see <cit.> and references therein).
Indeed, if we define u(x,t) = (1/√(λ_k)) sin(√(λ_k) t) φ_k(x), we see that u is the unique solution of the wave equation:

u_tt - ÷(γ∇ u) = 0 in Ω×(0,T),
u(x,0) = 0 in Ω,
u_t(x,0) = φ_k in Ω,
u = 0 on ∂Ω×(0,T).

From the internal observability inequality for the wave equation with observation region ω, there exist an observability time T_0 = T_0(ω,Ω,γ,d) > 0 and a constant C_0 = C_0(T,ω,Ω,γ,d) > 0 such that for all T ≥ T_0:

‖(u(·,0), u_t(·,0))‖_H^1(Ω)× L^2(Ω) ≤ C_0 ‖u_t‖_L^2(ω×(0,T)),

and since u_t = cos(√(λ_k) t) φ_k we easily obtain, with C̃_0 = C_0 √(T_0), that

‖φ_k‖_L^2(Ω) ≤ C̃_0 ‖φ_k‖_L^2(ω).

This shows that, in contrast with sums of two or more eigenfunctions, a single eigenfunction behaves as a solution of a PDE of hyperbolic type. Moreover, when the geometric control condition (GCC) is not fulfilled and only a weak observability/controllability property holds, the optimal spectral inequality for one eigenfunction can be derived following the approach developed recently in <cit.>.

The other open problem is that the bound in √(λ) in (<ref>) is uniform over all possible finite linear combinations. The objective of this paper is to establish an improved, non-uniform bound that depends on the frequency of the linear combination <cit.>. In fact, the inequality (<ref>) seems ill-suited, for example, for the sum φ_1 + 2^{-λ_K} φ_K, whose frequency tends to λ_1 for large λ = λ_K. This is not in contradiction with the spectral inequality (<ref>), which is in fact asymptotically optimal for large λ. Indeed, there is the following counterexample due to Lebeau and Jerison <cit.>. For the sake of completeness, we recall here the proof.

Given a nonempty open set ω⊂Ω, ω≠Ω, there exists C > 0 such that for all λ ≥ λ_1 there exist a_k∈ℝ such that

ψ(x) = ∑_λ_k≤λ a_k φ_k(x), x∈Ω,

satisfies

‖ψ‖_L^2(Ω) ≥ C e^C√(λ) ‖ψ‖_L^2(ω).

Let y_0∈Ω∖ω be fixed, and let p_y_0(x,t) = ∑_k=1^∞ e^{-λ_k t} φ_k(x) φ_k(y_0) be the kernel in Ω of the heat equation p_t - ÷(γ∇ p) = 0 with homogeneous Dirichlet boundary conditions and with initial condition the Dirac mass at the point y_0. By the maximum principle, p_y_0 ≤ P_y_0, where P_y_0(x,t) = 1/(4π t)^{d/2} exp(-|x-y_0|^2/4t) is the corresponding kernel of the heat equation in the whole space ℝ^d, and therefore

p_y_0(x,t) ≤ c_1 exp(-c_2/t) for all x∈ω, t > 0,

where c_1 = c/d_0^d, c_2 = d_0^2/8, c = ‖ r^{d/2} exp(-r/8) ‖_L^∞(ℝ_+) > 0, and d_0 > 0 is the distance from y_0 to ω.

If we choose

ψ = ∑_λ_k≤λ a_k φ_k, with a_k = exp(-λ_k/√(λ)) φ_k(y_0)

(here and below, a_k = exp(-λ_k/√(λ)) φ_k(y_0) also for λ_k > λ), then

‖ψ‖_L^2(ω)^2 ≤ 2‖p_y_0(1/√(λ),·) - ψ‖_L^2(ω)^2 + 2‖p_y_0(1/√(λ),·)‖_L^2(ω)^2 ≤ 2∑_λ_k>λ a_k^2 + 2c_1 e^{-2c_2√(λ)}.

On the other hand, we have <cit.>

|φ_k(x)| ≤ c_3 (√(λ_k))^{d/2+1} for all x∈Ω,

for some positive constant c_3 depending only on Ω. For λ_k > λ, from (<ref>) and (<ref>) we have that

∑_λ_k>λ a_k^2 ≤ c_3^2 e^{-√(λ)} ∑_λ_k>λ e^{-√(λ_k)} λ_k^{d/2+1} ≤ c_4 e^{-c_5√(λ)},

for some positive constants c_4 and c_5, and then from (<ref>)

‖ψ‖_L^2(ω) ≤ c_6 e^{-c_7√(λ)},

for some constants c_6 > 0 and c_7 > 0.
Finally,

‖ψ‖_L^2(Ω)^2 = ∑_λ_k≤λ a_k^2 ≥ e^{-2√(λ_1)} |φ_1(y_0)|^2 > 0,

and hence

‖ψ‖_L^2(ω) ≤ (c_6/|φ_1(y_0)|) e^{√(λ_1)} e^{-c_7√(λ)} ‖ψ‖_L^2(Ω).

§ MAIN RESULTS

We now introduce the frequency numbers for a linear combination of eigenfunctions given by its coefficients a_k. This is the main definition needed to obtain a spectral inequality with a non-uniform exponential bound that depends on the coefficients of the linear combination.

Given τ > 0 and n∈ℕ, n ≥ 1, we define the frequency number of order n associated to the coefficients a_k∈ℝ, for the indices k such that λ_k ≤ λ, by

Λ_n(τ) = ∑_λ_k≤λ λ_k^n a_k^2 exp(2τλ_k) / ∑_λ_k≤λ λ_k^{n-1} a_k^2 exp(2τλ_k).

Given coefficients a_k∈ℝ for λ_k ≤ λ, the associated frequency numbers Λ_n(τ) form a non-decreasing sequence in n of non-decreasing functions of τ > 0, such that

0 < λ_1 ≤ Λ_1(τ) ≤ Λ_2(τ) ≤ … ≤ λ.

Given a fixed collection of coefficients a_k∈ℝ for λ_k ≤ λ, and n∈ℕ with n ≥ 1, on one hand, by (<ref>), it is direct to see that 0 < λ_1 ≤ Λ_n(τ) ≤ λ. On the other hand, taking the derivative with respect to τ > 0, it is easy to check that (all the sums are over λ_k ≤ λ)

Λ_n'(τ) = 2 [ (∑ λ_k^{n-1} a_k^2 e^{2τλ_k})(∑ λ_k^{n+1} a_k^2 e^{2τλ_k}) - (∑ λ_k^n a_k^2 e^{2τλ_k})^2 ] / (∑ λ_k^{n-1} a_k^2 e^{2τλ_k})^2 = 2(Λ_n(τ)Λ_{n+1}(τ) - Λ_n^2(τ)),

and then we have the relationship

Λ_n'(τ) = 2Λ_n(τ)(Λ_{n+1}(τ) - Λ_n(τ)).

Notice that

Λ_n'(τ) = 2( ∑ α_k λ_k^2 - (∑ α_k λ_k)^2 ), with α_k = λ_k^{n-1} a_k^2 e^{2τλ_k} / ∑_j λ_j^{n-1} a_j^2 e^{2τλ_j},

and from Jensen's inequality, since the function x ↦ x^2 is convex and ∑ α_k = 1, α_k ≥ 0, we deduce that

Λ_n'(τ) ≥ 0,

from which we conclude that Λ_n is a non-decreasing function of τ and, thanks to (<ref>), for all τ > 0,

Λ_{n+1}(τ) ≥ Λ_n(τ).

There is an alternative way to prove that Λ_1 is a non-decreasing function of τ; see <cit.>. Indeed, using that u(x,t) = ∑ a_k e^{(τ-t)λ_k} φ_k(x) is the solution of the heat equation u_t - ÷(γ∇ u) = 0 with homogeneous Dirichlet boundary conditions in Ω×(0,τ) and initial condition u_0(x) = ∑ a_k e^{τλ_k} φ_k(x), it is possible to show that the function

N(t) = ‖∇ u(·,t)‖_L^2(Ω)^2 / ‖u(·,t)‖_L^2(Ω)^2

is non-increasing in t, which is equivalent to saying that Λ_1(τ) = N(0) is non-decreasing in τ. This property also characterizes the log-convexity of the heat equation.

The following is the main result of the paper.

For every non-empty open subset ω⊂Ω, there exist constants C_i = C_i(Ω,ω,γ,d) > 0, i = 1,2,3, such that for all linear combinations

ψ(x) = ∑_λ_k≤λ a_k φ_k(x), x∈Ω,

there exists τ_0 ∈ (0, C_3/√(λ_1)) such that

‖ψ‖_L^2(Ω) ≤ C_1 e^{C_2√(Λ_1(τ_0))} ‖ψ‖_L^2(ω).

Moreover, for all s ≥ 0 there exists τ_s ∈ (0, C_3/√(λ_1)) such that

|ψ|_H^s(Ω) ≤ C_1 e^{C_2√(Λ_{1+s}(τ_s))} |ψ|_H^s(ω),

where |ψ|_H^s(A) = ‖∑_λ_k≤λ λ_k^{s/2} a_k φ_k‖_L^2(A) for A ⊆ Ω.

First notice that the previous calculations for n∈ℕ can be easily extended by replacing n with a real number η ≥ 1. Consider the heat equation, for any τ > 0,

u_t - ÷(γ∇ u) = 0 in Ω×(0,τ),
u = 0 on ∂Ω×(0,τ),

and, given a_k∈ℝ and η ≥ 1, let us take as initial condition

u(x,0) = ∑_λ_k≤λ λ_k^{(η-1)/2} a_k e^{τλ_k} φ_k(x),

so that the unique solution is given by

u(x,t) = ∑_λ_k≤λ λ_k^{(η-1)/2} a_k e^{(τ-t)λ_k} φ_k(x).

We know from Theorem 4.1 in <cit.> that given ω⊂Ω there exist C = C(Ω,ω,γ,d) > 0 and θ = θ(Ω,ω,γ,d) ∈ (0,1) such that for all τ > 0

‖u(·,τ)‖_L^2(Ω) ≤ e^{C(1+1/τ)} ‖u(·,τ)‖_L^2(ω)^θ ‖u(·,0)‖_L^2(Ω)^{1-θ}.

Therefore

‖∑ λ_k^{(η-1)/2} a_k φ_k‖_L^2(Ω) ≤ e^{C(1+1/τ)} ‖∑ λ_k^{(η-1)/2} a_k φ_k‖_L^2(ω)^θ (∑ λ_k^{η-1} a_k^2 e^{2τλ_k})^{(1-θ)/2},

that is, for s = η-1,

|ψ|_H^s(Ω) ≤ e^{C(1+1/τ)} |ψ|_H^s(ω)^θ (ϕ(τ))^{(1-θ)/2},

where

ϕ(τ) = ∑ λ_k^{η-1} a_k^2 e^{2τλ_k}.

Now

ϕ'(τ) = 2∑ λ_k^η a_k^2 e^{2τλ_k} = 2 (∑ λ_k^η a_k^2 e^{2τλ_k} / ϕ(τ)) ϕ(τ) = 2Λ_η(τ)ϕ(τ),

where Λ_η(τ) was introduced in Definition <ref>.
Therefore

ϕ(τ) = ϕ(0) e^{2∫_0^τ Λ_η(r)dr}, with ϕ(0) = |ψ|_H^s(Ω)^2.

Replacing this in (<ref>) gives

|ψ|_H^s(Ω) ≤ e^{c_1} |ψ|_H^s(ω) e^{c_1/τ + c_2∫_0^τ Λ_η(r)dr},

where c_1 = C/θ and c_2 = (1-θ)/θ. The constants c_1 and c_2 depend only on Ω, ω, γ, d, and are independent of τ. Let

f(τ) = c_1/τ + c_2∫_0^τ Λ_η(r)dr, so that f'(τ) = -c_1/τ^2 + c_2Λ_η(τ).

Optimizing in τ, since 1/τ^2 is strictly decreasing to zero and Λ_η(τ) is non-decreasing in τ (see Proposition <ref>) and not identically zero, we obtain that there exists a τ_s > 0 solving the equation

τ_s = √(c_1/c_2)/√(Λ_η(τ_s)), c_1/c_2 = C/(1-θ),

and since Λ_η is non-decreasing in τ we have

f(τ_s) = √(c_1 c_2 Λ_η(τ_s)) + c_2∫_0^{τ_s} Λ_η(r)dr ≤ √(c_1 c_2 Λ_η(τ_s)) + c_2 τ_s Λ_η(τ_s) = 2√(c_1 c_2 Λ_η(τ_s)) = c_3√(Λ_η(τ_s)),

where c_3 = 2√(c_1 c_2) = (2/θ)√(C(1-θ)). Replacing this in (<ref>) we obtain

|ψ|_H^s(Ω) ≤ e^{c_1} e^{c_3√(Λ_η(τ_s))} |ψ|_H^s(ω),

and we conclude the proof with C_1 = e^{c_1} and C_2 = c_3 after replacing η = 1+s. Again, since τ ↦ Λ_η(τ) is non-decreasing, we deduce from (<ref>) that

τ_s ≤ √(c_1/c_2)/√(Λ_η(0)) ≤ √(c_1/c_2)/√(λ_1).

Taking finally C_3 = √(c_1/c_2) finishes the proof of the theorem.

Notice that Λ_{1+s}(τ_s) ≤ λ, so we recover from the previous result the original inequality of Lebeau–Jerison <cit.>, but the bound in terms of the frequency number is more precise, since it depends on the coefficients of the linear combination of the eigenfunctions.

We see in the proof of Theorem <ref> that the constants C_1 and C_2 appearing in (<ref>) and (<ref>) are the same, and they are in fact the same constants as the ones appearing in Theorem <ref> in (<ref>). Indeed, if we bound λ_k ≤ λ uniformly, then from (<ref>) we obtain:

|ψ|_H^s(Ω) ≤ e^{(C/θ)(1+1/τ)} e^{τλ(1-θ)/θ} |ψ|_H^s(ω),

so after optimizing in τ we obtain τ = √(C/(1-θ))/√(λ), from which we obtain that

|ψ|_H^s(Ω) ≤ e^{c_1} e^{c_3√(λ)} |ψ|_H^s(ω),

with the same constants c_1 = C/θ and c_3 = (2/θ)√(C(1-θ)) as in (<ref>).

Since Λ_1(τ) ≤ λ for all τ∈ℝ_+, we deduce from Theorem <ref> that for any given nonempty open set ω⊂Ω, ω≠Ω, there exists C > 0 such that for all λ ≥ λ_1 there exist a_k∈ℝ such that

ψ(x) = ∑_λ_k≤λ a_k φ_k, x∈Ω,

satisfies

‖ψ‖_L^2(Ω) ≥ C e^{C√(Λ_1(τ))} ‖ψ‖_L^2(ω), for all τ∈ℝ_+.

This shows that the estimate (<ref>) is also asymptotically optimal. Notice that the estimate (<ref>) in Theorem <ref> is independent of K, the largest index of the eigenvalues whose eigenfunctions appear in the finite sums. Therefore it is straightforward to verify that it remains valid for infinite sums if, in addition, Λ_1(τ) < ∞ holds for all τ.

§ NUMERICAL TESTS

We consider an example where we compute the eigenvalues and eigenfunctions for Ω = B(0,1), the unit disk, with C = 1 and different values of the constant θ. We compute the first 60 eigenvalues and their corresponding eigenfunctions using Bessel functions. The examples we consider are the following:

a_k = e^{-(λ_k/√(λ))^p}, p = 1.0; 1.2; 1.3.

The case p = 1 corresponds to coefficients that decay as in the counterexample of the proof of Theorem <ref>, which is to some extent the critical case, for which the differences between λ and Λ_1 begin to appear; the other cases, p = 1.2 and p = 1.3, correspond to faster decays, where the difference becomes stronger.

In Figure <ref> we show the intersection for the case of the example with p = 1.2, where we plot the curve C/((1-θ)τ^2) together with Λ_1, Λ_2 and Λ_3. The optimal time τ_s for the new inequality (<ref>) corresponds to the intersection of these curves. The value of τ_0 is shown, which corresponds to the intersection of C/((1-θ)τ^2) and Λ_1, and it is compared with the optimal value for the inequality (<ref>), given by τ = √(C/(1-θ))/√(λ).
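Before turning to the comparisons, it may help to see the frequency number at work in the simplest possible case. The following two-mode computation is our own illustration (it is not one of the numerical examples); the choice of coefficients mimics the sum φ_1 + 2^{-λ_K} φ_K discussed in the introduction.

For ψ = φ_1 + ε φ_K with ε = 2^{-λ_K}, Definition <ref> gives

Λ_1(τ) = (λ_1 exp(2τλ_1) + λ_K ε^2 exp(2τλ_K)) / (exp(2τλ_1) + ε^2 exp(2τλ_K)), with ε^2 exp(2τλ_K) = exp(2λ_K(τ - ln 2)).

Whenever the optimal time satisfies τ_0 < ln 2, the second term vanishes as λ_K → ∞, so Λ_1(τ_0) → λ_1 while λ = λ_K → ∞: the bound e^{C_2√(Λ_1(τ_0))} of Theorem <ref> stays bounded even though the uniform bound e^{C_2√(λ)} blows up.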
Thanks to Remark <ref>, for given fixed values of C and θ, we know that the constants involved in the inequalities (<ref>) and (<ref>) can be taken to be the same. Therefore, we can directly compare √(λ) and √(Λ_1) in order to compare the estimates. This is exactly what is shown in Figures <ref>, <ref> and <ref>. We observe in these figures that the estimate with √(Λ_1) is better than the estimate with √(λ), notably in the cases where the coefficients of the linear combination decay exponentially (Examples 1 and 2). When the coefficients are uniform (Example 3), there is no big difference between the bounds in √(λ) and √(Λ_1), as expected.
http://arxiv.org/abs/2312.16495v2
{ "authors": [ "Axel Osses", "Faouzi Triki" ], "categories": [ "math.AP", "math.SP", "35P05, 93C20" ], "primary_category": "math.AP", "published": "20231227095858", "title": "An improved spectral inequality for sums of eigenfunctions" }
Compilers for accelerator design languages (ADLs) translate high-level languages into application-specific hardware. ADL compilers rely on a hardware control interface to compose hardware units. There are two choices: static control, which relies on cycle-level timing; or dynamic control, which uses explicit signalling to avoid depending on timing details. Static control is efficient but brittle; dynamic control incurs hardware costs to support compositional reasoning. Our compiler is an ADL compiler that unifies static and dynamic control in a single intermediate language (IL). Its key insight is that the IL's static fragment is a refinement of its dynamic fragment: static code admits a subset of the run-time behaviors of the dynamic equivalent. The compiler can optimize code by combining facts from static and dynamic submodules, and it opportunistically converts code from dynamic to static control styles. We implement it as an extension to an existing dynamic ADL compiler, Calyx. We use it to implement an MLIR frontend, a systolic array generator, and a packet-scheduling hardware generator to demonstrate its optimizations and the static–dynamic interactions it enables.

Unifying Static and Dynamic Intermediate Languages for Accelerator Generators

Corto Mascle ([email protected], University of Bordeaux, France)
Nathanaël Fijalkow ([email protected], LaBRI, Université de Bordeaux, France)
Guillaume Lagarde ([email protected], University of Bordeaux, France)

=====================================================================================

§ INTRODUCTION

Accelerator design languages (ADLs) <cit.> raise the level of abstraction for hardware design. The idea is analogous to traditional software compilation: we want users to work not with gates, wires, and clock cycles, but with high-level or domain-specific concepts such as tensor operations <cit.>, functional programs <cit.>, and recurrence equations <cit.>. Compilers then translate these high-level descriptions into efficient hardware designs. ADLs suffer cross-cutting compilation challenges, and the architecture community has responded with a range of compiler frameworks and intermediate languages <cit.>.

This paper identifies a central challenge for ADL compilers: the control interface for composing units of hardware. The choice of interface has wide-ranging implications on a compiler's expressive power, its ability to optimize programs, and the semantics of its intermediate language. There are two categories. Dynamic or latency-insensitive interfaces abstract away timing details and streamline compositional design, but they incur fundamental overheads <cit.>. Static or latency-sensitive interfaces are efficient, but they depend on the cycle-level timing of each module and therefore leak implementation details across module boundaries.

Intermediate languages (ILs) for ADLs use either dynamic interfaces <cit.>, static interfaces <cit.>, or both <cit.>. Static interfaces alone are insufficient because some computations, such as off-chip memory accesses, have fundamentally variable latencies. Infrastructures that support both interfaces typically stratify the IL into separate dynamic and static sub-languages <cit.>.
While stratified compilers can bring customized lowering and optimization strategies to bear on each sub-language, they entail duplicated implementation effort and miss out on cross-cutting optimizations that span the boundary between static and dynamic code. Stratification also infects the frontends targeting the IL: they must carefully separate code between the two worlds and manage their interaction.

We introduce a new IL and compiler for accelerator designs that freely mix static and dynamic interfaces. The key insight is that static IL constructs are all refinements of their dynamic counterparts: they admit a subset of the run-time behaviors. This unified approach lets transformations and optimizations work across both interface styles. It also enables the incremental adoption of static interfaces: frontends can first establish correctness using compositional but slow dynamic code, and then opportunistically convert to efficient static interfaces. Refinement in the IL guarantees that this transition is correct.

We implement the IL as an extension to the dynamic-first Calyx infrastructure <cit.>. This paper shows how to compile the static extensions into pure Calyx. We lift Calyx's existing optimizations to support the new static abstractions and implement new time-sensitive optimizations. The compiler can also automatically infer when some dynamic Calyx code has fixed latency, and promote it to static code.

We evaluate the new optimizations using a frontend that translates from high-level MLIR <cit.> dialects to the IL. Time-sensitive optimizations improve execution times by 2.5× on average over dynamic Calyx code. We also implement a packet-scheduling engine and study how the optimizations, in concert with domain-specific human insight, are able to improve the performance of the generated hardware. As another domain-specific case study, we extend a systolic array generator to support fused dynamic operations to understand how the IL can support interactions between fundamentally static and dynamic components.

§ HARDWARE INTERFACES

Consider compiling the integer computation (a+b)× c ÷ d into hardware. Generating a hardware datapath entails orchestrating physical units—such as adders, multipliers, and dividers—over time. For this example, we use sequential, i.e., non-pipelined, hardware units. While many hardware units, such as adders and multipliers, have fixed latency, many do not: integer dividers, for instance, typically have data-dependent timing. Control logic for these two categories is fundamentally different. A variable-latency divider may expose 1-bit signal wires to start computation and to signal completion. A fixed-latency multiplier, however, needs no explicit completion signalling: clients can simply provide inputs and wait the requisite number of cycles. For this example, assume we have an adder with latency 1, a multiplier with latency 3, and a divider with variable latency.

Dynamic compilation. <Ref> shows how dynamic-first ILs, such as Calyx <cit.> and Dynamatic <cit.>, might compile our expression. All units expose explicitly signalled dynamic interfaces; each static module requires a wrapper that counts clock cycles up to the unit's latency and then signals completion. A purely dynamic compiler benefits from a uniform interface and compositional reasoning, because no module can depend on the timing of any other. However, these wrappers incur time and space overheads, and optimizations cannot exploit timing information (<ref>).
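To make the wrapper overhead concrete, the sketch below shows roughly what such a counting wrapper could look like in Calyx-style syntax. This is our own illustration rather than code from the paper: the cell names, the 2-bit counter width, and the reset handling are all hypothetical.

cells {
  mult = std_mult(32);  // fixed-latency (3-cycle) multiplier
  cnt = std_reg(2);     // cycle counter for the wrapper
  incr = std_add(2);
  done_cmp = std_eq(2);
}
group mult_wrapper {
  mult.go = 1;                          // keep the multiplier running
  incr.left = cnt.out; incr.right = 1;
  cnt.in = incr.out; cnt.write_en = 1;  // count elapsed cycles (reset elided)
  done_cmp.left = cnt.out; done_cmp.right = 3;
  mult_wrapper[done] = done_cmp.out;    // signal completion after the 3-cycle latency
}

Every fixed-latency unit in a purely dynamic design pays for a counter and comparator like these; the static interfaces introduced later eliminate them.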
Static compilation. Static-first ILs, such as HIR <cit.>, require fixed-latency operations. They can support dynamic operators like dividers by using an upper-bound latency. Upper bounds are pessimistic, however, and some hardware operations have unbounded latency: the latency for an arbiter that manages conflicting memory accesses, for instance, fundamentally depends on the address stream.

Stratified static–dynamic compilation. <Ref> illustrates a hybrid approach, such as DASS <cit.>, that combines static and dynamic compilation. The idea is to compile the two parts of the program separately: first using static interfaces for the fixed-latency fragment, (a + b) × c, and then using dynamic interfaces to combine this fragment with the variable-latency divider. This combination allows latency-sensitive optimizations on the static fragment while still allowing dynamic scheduling where it is beneficial. This stratified approach, however, needs separate ILs for the two styles of computation. The compiler cannot exploit information across the static–dynamic boundary. Furthermore, it complicates the job for frontends that emit these ILs: switching a single subcomputation from static to dynamic requires a global change in the way the program is encoded.

Unified static–dynamic compilation. <Ref> represents our approach: a unified IL that expresses both static and dynamic interfaces in one program. Our IL extends Calyx <cit.>, an existing dynamic-first IL, with static constructs that refine the semantics of its dynamic constructs. By mirroring the dynamic IL abstractions with static counterparts, the IL enables compositional reasoning, incremental adoption, and whole-program optimization across the static–dynamic boundary.

§ THE INTERMEDIATE LANGUAGE

This section introduces the unified IL for compiling hardware accelerators. It extends Calyx <cit.>, an existing dynamic IL. Calyx has a growing family of frontends, such as for Halide <cit.> and MLIR dialects in CIRCT <cit.>, that can adopt the new static interfaces to improve performance. We introduce the IL using the program in <ref> as a running example. Our extensions to Calyx are in red. Deletions when porting from Calyx to the new IL are commented out with red slashes. We describe the existing Calyx IL (<ref>), show that its original hint-based treatment of static interfaces is insufficient (<ref>), and then introduce the extensions.

§.§ The Calyx IL

The Calyx IL intermixes software-like control operators with hardware-like structural resources <cit.>. The former simplifies encoding of high-level language abstractions, while the latter enables optimizations that exploit control information to optimize the physical hardware implementation.

Components. Components define units of hardware with input and output ports. In <Ref>, expr has four 32-bit input ports (a through d) and one output port (out). A component has three sections: |cells|, |wires|, and |control|.

Cells. The |cells| section instantiates subcomponents. Cells can be either other Calyx components or external definitions defined in a standard HDL. The component |expr| instantiates three cells from the standard library: |add|, |mult|, and |div|. Each is parameterized by a bitwidth.

Wires. Calyx uses guarded assignments to connect two ports when a logical condition, called the guard, is true. Consider:

add.left = c0 ? 10;
add.left = c1 ? 20;
add.right = 30;

Here, |add.left| has the value 10 or 20 depending on which guard, |c0| or |c1|, is true.
Meanwhile, |add.right| unconditionally has the value 30. Calyx's well-formedness constraint requires that all guards for a given port be mutually exclusive: it is illegal for |c0| and |c1| to simultaneously be true.

Groups. Assignments can be organized into unordered sets called groups. A group can execute over an arbitrary number of cycles and therefore requires a 1-bit |done| condition to signal completion. In <ref>, the assignments in |do_div| compute mult.out ÷ d by passing in inputs and asserting the divider's "start" signal, |div.go|. The group's |done| signal is connected to the divider's |done| port, which becomes 1 when the divider finishes.

Control. The control section is an imperative program that decides when to execute groups. Calyx supports sequential (seq), parallel (par), conditional (if), and iterative (while) composition. The if and while constructs use one-bit condition ports. An |invoke| operator is analogous to a function call: it executes the control program of a subcomponent fully and then returns control to the caller.

§.§ Latency Sensitivity in Calyx

As a fundamentally dynamic language, Calyx provides no guarantees on inter-group timing in its control programs: programs cannot rely on the relative execution schedule of any two groups. For example, any amount of time may pass between steps in a |seq| block; and different threads in a |par| block may start at different times, so no thread may rely on the timing of another <cit.>. The compiler exploits this semantic flexibility to optimize programs by adjusting this timing. However, latency insensitivity is expensive <cit.>. To help mitigate this cost, Calyx comes with an optional attribute, |@static(n)|, that hints to the compiler that a group or component has a fixed latency of n cycles. These hints do not affect the program's semantics, so the compiler can disregard or erase them. However, this optional nature makes them challenging to support and reason about. Each compiler optimization pass must treat the hint pessimistically; there is no contract to maintain time-sensitive behavior, which has led to several bugs <cit.>. Erasable hints are also unsuitable for integrating with external hardware, such as a module that produces an answer exactly 4 cycles after reading an input.

This paper's thesis is that the distinction between static and dynamic control is too important—and too semantically meaningful—to be encoded as an optional hint. Instead, the IL's static constructs must be a semantic refinement (<ref>) of its dynamic equivalents: converting from dynamic to static restricts a program's timing behavior; the reverse is not allowed because it allows more possible behaviors.

§.§ Static Structural Abstractions

The IL extends Calyx with new, time-sensitive structural abstractions: static components and static groups.

Static components. Static components are like Calyx's dynamic components, but they use a different "calling convention." Where dynamic components, such as |std_div|, use a |go| signal to start computation and a |done| signal to indicate completion, static components only use |go|. Compare the interface of a multiplier to that of a divider:

static<3> primitive std_mult[W](go: 1, left: W, right: W) -> (out: W);
primitive std_div[W](go: 1, left: W, right: W) -> (out: W, done: 1);

The |static<n>| qualifier indicates a latency of n cycles that is guaranteed to be preserved by the compiler.
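User-defined components can carry the same qualifier as primitives. As a hedged sketch of our own (the component name, ports, and latency are hypothetical, and the body is elided), a two-cycle multiply-accumulate unit might be declared as:

static<2> component mac(a: 32, b: 32) -> (out: 32) {
  // cells, wires, and control elided; the component's static
  // control must take exactly 2 cycles to match the signature
}

A caller can read |out| exactly two cycles after asserting the component's |go|; there is no |done| port to wait on, which is precisely the static calling convention described above.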
Static groups and relative timing guards. Static groups use relative timing guards, which allow assignments on specific clock cycles. This group computes ans = 6 × 7:

static<4> group mult_and_store {
  mult.left = %[0:3] ? 6;
  mult.right = %[0:3] ? 7;
  mult.go = %[0:3] ? 1;
  ans.in = %3 ? mult.out;
  ans.write_en = %3 ? 1;
}

Like |do_div| in <ref>, the group sends operands into the |left| and |right| ports of an arithmetic unit. Here, however, relative timing guards encode a cycle-accurate schedule: a guard %[i:j] is true in the half-open interval from cycle i to cycle j of the group's execution. The assignments to ports |mult.left| and |mult.right| are active for the first 3 cycles. The guard |%3| is syntactic sugar for %[3:4], so the write into the |ans| register occurs on cycle 3. The |static<4>| annotation tells us the group is done on cycle 4. Relative timing guards resemble cycle-level schedules in some purely static languages <cit.>. However, they count relative to the start of the group, not that of the component. This distinction is crucial since it lets the IL use static groups in both static and dynamic contexts.

§.§ Static Control Operators

The IL provides a static alternative to each dynamic control operator in Calyx. Unlike the dynamic versions, static operators guarantee specific cycle-level timing behavior. The |static| qualifier marks static control operators. While dynamic commands may contain both static and dynamic children, static commands must only have static children. We write |c| for the latency of a static command c.

Sequential composition. A static seq like this:

static seq { c_1; c_2; ...; c_n; }

has a latency of ∑_1^n |c_i| cycles. c_1 executes in the interval [0, |c_1|) after the |seq|'s start, c_2 in [|c_1|, |c_1|+|c_2|), and so on.

Parallel composition. A static par statement:

static par { c_1; c_2; ...; c_n; }

has latency max_1^n |c_i|. Command c_1 is active between [0, |c_1|), command c_2 between [0, |c_2|), and so on. The parallel threads in a |static par| can depend on the "lockstep" execution of all other threads. Threads can therefore communicate, whereas conflicting parallel state accesses in Calyx are data races and therefore undefined behavior <cit.>.

Conditional. Static conditionals use a 1-bit port p:

static if p { c_1 } else { c_2 }

The latency is the upper bound of the branches, max(|c_1|, |c_2|).

Iteration. There is no static equivalent to Calyx's unbounded |while| loops. The IL instead adds both static and dynamic variants of fixed-bound |repeat| loops:

static repeat n { c }

The body executes n times, so the latency is n × |c|.

Invocation. A |static invoke| corresponds to Calyx's function-call-like operation and requires the target component to be static. The latency is that of the invoked cell.

Group enable. A leaf statement can refer to a |static group| (e.g., |do_add| in <ref>). The latency is that of the group.
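Putting these latency rules together, here is a small schematic example of our own (the group names and latencies are hypothetical: assume load and store are static<1> groups and compute is static<3>):

static par {
  static seq { load; compute; }   // 1 + 3 = 4 cycles
  static repeat 2 { store; }      // 2 × 1 = 2 cycles
}

The |static seq| has latency 1 + 3 = 4, the |static repeat| has latency 2 × 1 = 2, and the enclosing |static par| therefore has latency max(4, 2) = 4.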
§.§ Unification Through Semantic Refinement

The static constructs are all semantic refinements <cit.> of their dynamic counterparts in Calyx. The semantics of dynamic code admit many concrete execution schedules, such as arbitrary delays between group executions. Each static construct instead selects one specific cycle-level schedule from among those possibilities. Refinement enables incremental adoption: a frontend can first generate purely dynamic code, establish correctness using the original Calyx semantics based on partial ordering between group executions, and then add |static| qualifiers. We can establish correctness for the |static| code by the same argument as the original code, since it admits a subset of the original's cycle-level executions. This implication also means that the compiler may automatically infer |static| qualifiers for some code (<ref>).

Semantic refinement also enhances optimization (<ref>). The compiler can enrich existing Calyx passes with timing information to expose more optimization opportunities in static code. New optimizations can also combine information across static and dynamic code. This kind of optimization would be challenging in a stratified compiler like DASS <cit.> with separate ILs and lowering paths for static and dynamic code.

§ COMPILATION

<ref> shows the compilation flow. After optimizations (<ref>), we translate the static constructs to pure Calyx. The compiler relies on control interfaces for static code, dynamic code, and invocations that cross the static–dynamic boundary. For example, in a control statement like |seq { a; b; }|, both the parent (the seq) and the children (a and b) could use either static or dynamic control. <Ref> lists the four possible cases, denoted I_p → I_c, where the parent and child interfaces I are static (S) or dynamic (D). The all-dynamic case, D → D, is the Calyx baseline. The all-static case, S → S, works by counting cycles (<ref>). For D → S, the compiler adds a dynamic wrapper around the static child (<ref>). The compiler disallows the S → D case with a compile-time error: if the child takes an unknown amount of time, it is impossible to give the parent a static latency bound. Given the prohibition against S → D composition, we can think of any program as a dynamic control program with interspersed static islands <cit.>. Compilation starts by collapsing static islands into static groups (<ref>) and then generating FSM logic to implement relative timing guards (<ref>). Finally, it wraps static islands for use in their dynamic context (<ref>).

§.§ Collapsing Control

Figures <ref>a–b illustrate how the compiler collapses static control statements into static groups. The new group contains all the assignments from the old groups used in the statement (|do_add| and |do_mult| in the example), with their timing guards updated to implement the statement's timing. We collapse each static island in a bottom-up order: to compile any statement, we first collapse all its children. Before collapsing, we preprocess assignments to add timing guards where they are missing: for example, the assignment |mul.right = c| in <ref>a is normalized to carry the guard for its group's full latency L: |mul.right = %[0:L] ? c|. We combine these timing guards with any existing guards using the conjunctive operator |&|.

Parallel composition. With all timing guards explicit and the children already collapsed, compiling static par is simple: we merge the assignments from the children into a single static group. The new group's latency is the maximum latency among the children. For example, we compile:

static<1> group A {
  r1.in = 1; r1.write_en = 1;
}
static<2> group B {
  r2.in = 4; r2.write_en = 1;
}
control {
  static par { A; B; }
}

into:

static<2> group comp_par {
  r1.in = %0 ? 1; r1.write_en = %0 ? 1;
  r2.in = %[0:2] ? 4; r2.write_en = %[0:2] ? 1;
}
control {
  comp_par;
}

Sequential composition. To compile static seq, we can merge assignments from child groups. We rewrite each timing guard %[i:j] of a child that starts at cycle t into %[i+t : j+t], where t is the sum of the latencies of the preceding children. The new group's latency is the sum of latencies of the children. For example:

control {
  static seq { A; B; }
}

compiles (where |A| and |B| are as above) into:

static<3> group comp_seq {
  r1.in = %0 ? 1; r1.write_en = %0 ? 1;
  r2.in = %[1:3] ? 4; r2.write_en = %[1:3] ? 1;
}
control {
  comp_seq;
}
Conditional. Semantically, static if only checks its condition port once: it must ignore any changes to the port while either branch executes. To honor this while compiling |static if cond { A } else { B }|, we stash |cond|'s value in a special register on the first cycle, and leave the register's value unchanged thereafter. We generate logic to select between |A| and |B| using |cond| directly during the first cycle, and the special register for the remaining cycles.

Iteration. To implement |static repeat n { g }|, the collapsed body group g must run n times. Activating a static group entails asserting its |go| signal for the group's entire latency. We can therefore compile the loop into a group that asserts g's |go| signal for n × |g| cycles:

static<n × |g|> group repeat_group {
  g[go] = 1;
}

In this case, the body group g remains alongside the new |repeat_group|. The body group's FSM (see <ref>) is responsible for resetting itself every |g| cycles.

§.§ FSM Instantiation

Figures <ref>b–c illustrate the next compilation step: eliminating static timing guards (<ref>). For a static group with latency n, this pass generates a finite state machine (FSM) counter that counts from 0 to n-1; it automatically resets back to 0 immediately after hitting n-1. We translate each timing guard %[j:k] into the guard j ≤ f < k, where f is the counter. Resetting the counter from n-1 to 0 lets static groups re-execute immediately after finishing. Compiled |repeat| and |while| loops, for example, can chain invocations of static bodies without wasting a cycle between each iteration. While FSM instantiation would work the same on the original program, it is more efficient to run it after collapsing control. Generating fewer static groups yields fewer FSM registers and incrementers.

§.§ Wrapper Insertion

Figures <ref>c–d illustrate the final compilation step: converting each collapsed, timing-guard-free static group (<ref>c) into a dynamic group (<ref>d). We generate a dynamic wrapper group for every static group that has a dynamic parent. Like any dynamic group, the wrapper exposes two 1-bit signals, |go| and |done|. When activated with |go|, the wrapper in turn activates the |go| signal of the static group. To generate the |done| signal, the wrapper uses a 1-bit signal sig to detect if a static island's FSM has run once. When the FSM is 0 and sig is high, we know that the FSM has reset back to 0: the wrapper asserts |done|.

Special case: while with static body. The wrapper strategy works in the general case, but when the dynamic parent is a |while| loop, the compiled code "wastes" one cycle per iteration to check the loop condition. This strategy incurs a relative overhead of 1/b when the body takes b cycles, which is bad for short bodies and large trip counts. This special case is common because it lets programs build long-running computations from compact hardware operations, so we handle it differently to eliminate the overhead. To compile |while c { g }| where g is static, we generate a wrapper for the entire |while| loop instead of a wrapper for g alone. Each time the FSM returns to the initial state, the wrapper concurrently checks the condition port and asserts |done| if the condition is false. This is another application of refinement: Calyx's |while| operator admits multiple possible cycle-level timing behaviors, and we generate a specific one to meet our objectives.
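A rough sketch of such a generated wrapper appears below. It is our own illustration: the cell names are hypothetical, and the equality comparisons, which would be separate comparator cells in real Calyx, are written inline for brevity.

group island_wrapper {
  island[go] = 1;                      // keep the static island running
  sig.in = 1;                          // remember that the FSM has left its initial state
  sig.write_en = fsm.out != 0 ? 1;
  island_wrapper[done] = fsm.out == 0 & sig.out ? 1;  // FSM wrapped back to 0
}

The |sig| register distinguishes the initial cycle (FSM at 0 before the island has run) from the completion cycle (FSM reset back to 0 after running); a full implementation would also clear sig so the wrapper can be invoked again.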
§.§ Static Inference and Promotion

Calyx code written as dynamic often does not need to be dynamic: its latency is deterministic. Promoting such code to use static interfaces can save the time and resources spent on dynamic signalling—but it is not always profitable. We therefore split the process into two steps: inference, which detects when dynamic groups and control have a static latency, and promotion, which converts dynamic code to static code when it appears profitable. Inference records information without affecting semantics, while promotion refines the program's semantics. We infer freely but promote cautiously.

Inferring static latencies. We use Calyx's existing |infer-static-timing| pass to infer latencies for both groups and control programs. It infers a group's latency by analyzing the group's uses of its go and done signals. Suppose we have:

group g {
  reg.in = 10;       // reg is a register (latency 1)
  reg.write_en = 1;
  g[done] = reg.done;
}

The pass observes that (1) |reg.write_en| is asserted unconditionally, (2) the group's |done| flag is tied to |reg.done|, and (3) the register component definition declares a latency of 1. Calyx therefore attaches a @static(1) annotation to g: the group will take exactly one cycle to run. For control operators, e.g., |seq|, inference works bottom-up. If all of a |seq|'s children have @static annotations, the |seq| gets a @static(n) annotation, where n is the sum of the latencies of its children. Despite this inference, Calyx's original time-sensitive FSM generation pass cannot compile static control islands; instead, the entire component needs to be static <cit.>. Our compiler lifts this restriction.

Promoting code from dynamic to static. We can promote groups and control based on inferred |@static| annotations. For example, after inferring the |@static(1)| annotation for the group |g|, we can promote it to:

static<1> group g {
  reg.in = 10;
  reg.write_en = 1;
}

While static control has lower control overhead and enables downstream optimizations, it incurs two major costs. We introduce promotion heuristics to balance each of these costs. First, each static island requires one wrapper interface and one counter register. This cost is constant for each island, while the benefit of simpler static control scales with the code size of the island. Therefore, the compiler introduces a threshold parameter that only promotes static islands above a certain code size, in terms of the number of groups and conditional ports. The second cost affects long-running static islands, which can require large FSM registers and associated comparators. Some islands reach hundreds of millions of cycles (see <ref>), so two smaller islands can sometimes be cheaper than one large island. The compiler accepts a parameter that gives an upper bound on the number of cycles that a potential static island can run for, and does not promote any island that would run longer than this upper bound. We empirically calibrate these parameters' default settings using experience with real programs; see <ref>.
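Putting inference and promotion together at the control level, consider this sketch of our own (the group names are hypothetical; both groups are assumed to infer a one-cycle latency, as g does above):

control {
  @static(2) seq {  // inferred bottom-up: 1 + 1 = 2 cycles
    g1;  // @static(1)
    g2;  // @static(1)
  }
}

// After promotion, the erasable hint becomes a guarantee:
control {
  static seq { g1; g2; }  // static<2>: g1 on cycle 0, g2 on cycle 1
}

The first form is still ordinary dynamic Calyx with a hint the compiler may discard; the second commits to a cycle-level schedule that downstream passes can rely on.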
§.§ Schedule Compaction

Our compiler features a new schedule compaction optimization to maximize parallelism while respecting data dependencies. Schedule compaction is only feasible in a unified compiler. In a dynamic IL, the compiler lacks latency information altogether. In a static IL, the compiler has latency information but is barred from rescheduling code, which could violate timing properties that the program relies on. Traditional C-based high-level synthesis (HLS) compilers accomplish similar scheduling optimizations, but by translating between two vastly different representations: from untimed C to a fully static HDL. A unified IL, in contrast, can perform this optimization within a single abstraction by exploiting the interaction between static and dynamic code.

Compaction occurs during the transition from dynamic to static code, after |@static| inference and as a supplement to standard promotion. Consider the following |seq|:

@static(22) seq { A; B; C; D; }

where <Ref> shows the groups' latencies and data dependencies. If we only perform promotion, it will take 1 + 10 + 1 + 10 = 22 cycles. The schedule compaction pass reschedules the group executions to start as soon as their dependencies have finished. Specifically, A and B start at cycle 0 because they have no dependencies; C and D start on cycles 10 and 1, respectively: the first cycle after their dependencies have finished. This compacted schedule takes only 11 cycles. The optimization extracts a data dependency graph for the children of the seq and topologically sorts it to produce an as-soon-as-possible schedule. Next, it reconstructs a control program to implement this schedule. It emits a static par with one group per thread. To delay a group's start, it uses an empty delay group, as shown in <ref>. Since all del_n groups are removed during the collapsing step of compilation (<ref>), they incur no overhead.

§.§ Cell Sharing

Calyx has a register sharing pass <cit.> to reduce resource usage. It uses Calyx's control flow to compute registers' live ranges and remaps registers to the same instance when the ranges do not overlap. Our variant is a generalized cell sharing pass that works with arbitrary components instead of just registers. It first extends the original pass to work with mixed static–dynamic designs and then enhances it to opportunistically exploit static timing information. In addition to working uniformly on both static and dynamic code, the cell sharing optimization can share cells across the static–dynamic boundary: static and dynamic parts of the design can use the same cell. This is not possible in stratified ILs <cit.> that use separate optimization pipelines for the two interface styles. The pass also improves over sharing in Calyx when it can exploit cycle-level timing in static code. The original Calyx optimization must over-approximate live ranges because of Calyx's loose timing semantics. For example, |par| provides no guarantees about the cycle-level timing of its threads (<ref>), so the compiler must conservatively assume that all live ranges in one thread may overlap with the live ranges in a different thread. This prevents Calyx from sharing cells between sibling |par| threads. Our enhanced cell sharing optimization exploits timing guarantees (<ref>) to compute precise, cycle-level live ranges. These live ranges are soundly comparable across |par| threads and enable sharing between them. This enhancement is an example of a latency-sensitive optimization from <ref>.
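As a concrete sketch of the |par|-sharing case (our own example; the thread and cell names are hypothetical), static timing lets the compiler prove that two sibling threads use their adders during disjoint cycle intervals:

static par {
  static seq { use_add0; }          // add0 live during cycle 0
  static seq { delay; use_add1; }   // add1 live during cycle 1
}
// In the lockstep schedule, add0 is live in [0,1) and add1 in [1,2);
// the intervals are disjoint, so the pass rewrites the second thread
// to reuse add0 and deletes add1.

Dynamic Calyx must assume the two threads might touch their adders simultaneously, so only the static schedule makes this merge sound.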
§ EFFECTS OF THE OPTIMIZATIONS

We compare the compiler's performance to Calyx when compiling linear algebra kernels and a packet scheduling engine.

§.§ Linear Algebra Kernels

CIRCT <cit.> is an MLIR <cit.> subproject for designing open-source hardware flows. Calyx is a core dialect within CIRCT and can be generated from C++ or PyTorch programs. We lower the Polybench benchmarks <cit.>, written in C++, to Calyx using the CIRCT flow, automatically promote them to use the new abstractions (<ref>), and report the cycle counts and resource usage on an FPGA. Of the 30 Polybench benchmarks, the CIRCT flow fails to compile 11 due to various limitations in the frontend dialects. The 19 compilable benchmarks are chiefly dense loop nests, so large parts of them can be scheduled statically. However, there is some dynamic behavior, including dynamically timed integer division and "triangular" nested loops (i.e., the inner loop bound depends on the outer loop's index).

Configurations. To generate designs from Calyx, we first perform static promotion (<ref>). Then, we compile each design with different configurations of the schedule compaction (SC, <ref>) and cell sharing (SH, <ref>) passes.

* Baseline: The standard Calyx compiler, without the static abstractions or promotion.
* SH: Static promotion, then cell sharing.
* SC: Static promotion, then schedule compaction.
* SH→SC: Static promotion, sharing, then compaction.
* SC→SH: Static promotion, compaction, then sharing.

Experimental setup. We use Verilator v5.006 <cit.> to obtain cycle counts. Our synthesis flow uses Vivado 2020.2 and targets the Xilinx Alveo U250 board with a period of 7 ns. We report post-place-and-route resource estimates for lookup tables (LUTs, FPGAs' primary logic resource) and registers. The geometric mean of the worst timing slack across the 19 benchmarks varies within 5% between the five configurations. We therefore believe that 7 ns is an appropriate clock period for these designs, so measuring cycle counts suffices to reflect actual running time. We also ran experiments to explore promotion parameters (<ref>). While they unsurprisingly yield nonuniform trade-offs between area and latency, we select default parameters that provide a good balance across benchmarks: 1 for the static island size and 4096 for the cycle-count limit.

§.§.§ Schedule Compaction and Cell Sharing

<Ref> compares the configurations normalized to the baseline (B). We report the geometric means for cycle counts, LUTs, and register usage across the 19 benchmarks.

Cycle counts. Static promotion has a significant impact on cycle count. Running SH, without compaction, isolates the impact of promotion: a 1.67× geomean speedup over B. Schedule compaction (SC) improves the cycle count further, yielding a 2.5× geomean speedup over the baseline designs.

LUT usage. SH saves LUTs (0.96×) while SC increases them (1.02×), although both are quite similar to B. SH benefits from the static control interface, while SC incurs logic overhead to implement its parallelized schedules.

Register usage. Sharing hardware resources (SH) performs essentially the same (1.01×) as B. This is because B already has a sharing optimization, and there were not many opportunities to apply the extensions that exploit time-sensitive sharing (<ref>). SC incurs a register cost (1.47×) to implement its schedules.
§.§.§ Phase Ordering

Schedule compaction and cell sharing are partially in conflict: the former adds parallelism, while the latter exploits non-parallel code to share resources. They embody a fundamental trade-off between performance and area. We measure their interaction in either order.

Cycle counts. SC→SH performs identically to SC alone (2.5× speedup). The opposite ordering, SH→SC, is slightly slower (1.92×), but still faster than SH (1.67×). Sharing impedes some, but not all, opportunities for compaction.

LUT usage. SH→SC saves LUTs slightly (0.94× of B) while SC→SH performs similarly to B (0.99×). However, the effects across benchmarks are nonuniform, and the combinations of optimizations can sometimes outperform SH alone.

Register usage. Running sharing first (SH→SC) achieves similar register reduction to SH alone (1.01× of B). The reverse ordering (SC→SH) is slightly worse (1.07×) but still significantly better than SC alone (1.47×). Running SC first only opportunistically adds parallelism; the designs still have some fundamental sequential behavior that allows sharing.

§.§ Packet Schedulers

We use a second, more domain-specific case study to understand the optimizations in more detail. In software-defined networking (SDN) <cit.>, programmable packet scheduling offers flexible policies for allocating bandwidth and ordering packet delivery. PIFO trees <cit.> are a flexible mechanism for line-rate packet scheduling. The packet buffer of a switch consists of a compositional hierarchy of priority queues (PIFOs), each of which implements a policy for scheduling the data held by its children. We implement a new IL-based generator for PIFO tree packet schedulers, as shown in <ref>. We push incoming packets into the PIFO tree by inserting into a leaf node and adding priority metadata to each parent. To pop the highest-priority packet for forwarding, we query the tree to identify it and update the metadata. The tree also maintains telemetric data—counts of classes of packets—by reporting to a separate statistics component at each push. An SDN controller might exploit these statistics to implement adaptive scheduling policies. Our implementation generates the PIFO tree itself, which is fundamentally dynamic because of the data-dependent behavior of queues, and a simple static statistics unit. While it is not the focus of this case study, we also include a simple dynamic controller to consume the statistics.

Implementation. We implement a flexible PIFO tree generator in 600 lines of Python. The generator can produce binary PIFO trees of varying heights, arrangements, and capacities. It can also implement different scheduling policies by deciding how packets get assigned to leaves and metadata to internal nodes. For our experiment, we generate a tree with 5 PIFOs and overall capacity 10. We set up the scheduling parameters to implement a hierarchical round-robin scheduling policy. We use the generator to synthesize four hardware configurations: plain Calyx, Calyx automatically promoted to static code, explicitly annotated static code, and annotated then promoted static code. The second configuration is the result of automatically promoting the first (see <ref>). The third includes manually inserted |static<>| annotations that encode domain-specific insight into the generated hardware's timing. The fourth configuration is the result of automatically promoting the third. The generated design is 1,100 lines of IL.

Results. We generate a workload of 10,000 packets with randomly interspersed but balanced push and pop events. We measure the LUT count, register count, and cycles per push (C/push) for each design.
The annotated, promoted configuration achieves the best value in every column.

Configuration          LUTs   Registers   C/push
Calyx                  1547   393         143.25
Promoted               1610   391         139.25
Annotated              1556   385         140.25
Annotated, Promoted    1544   381         137.25

The resource usage of the four designs is similar: the PIFO tree (which is always dynamic) is the dominant component in all of them, and the statistics component (which is dynamic in Calyx and static in our IL) is small. The promoted implementation improves on the original Calyx implementation's C/push measure because the promotion pass exploits small opportunities for static promotion in all components, including components that are understood to be dynamic. This comes at the cost of resource usage: promotion also triggers schedule compaction (<ref>), and compaction costs LUTs. The manually annotated implementation also improves on the baseline's C/push measure—domain knowledge lets the human guide the compiler—but without suffering a LUT cost. The annotated, promoted implementation performs the best of all. Its C/push measure is better than the manually annotated implementation's because, as before, the promotion pass exploits opportunities for promotion that are not clear to a human. Critically, its LUT count does not suffer compared to the annotated configuration: compaction only runs on promoted static islands, not user-defined static components.

§ SYSTOLIC ARRAYS

Systolic arrays <cit.> are a class of architecture commonly used in machine learning <cit.>, built from interconnected processing elements (PEs). PEs perform simple computations and communicate with other PEs in a simple, regular manner. We redesign an existing systolic array generator that targets Calyx to use the new abstractions and demonstrate how they enable efficient composition and incremental adoption.

§.§ Systolic Arrays in the IL

Calyx has an existing systolic array generator that produces hardware to multiply fixed-size matrices. The interface of the generated systolic array accepts rows and columns of input matrices A and B in parallel using an output-stationary dataflow. Each PE performs a multiply-accumulate operation and forwards its operands. This case study addresses three main limitations:

* Calyx's dynamic interfaces between PEs make it challenging to pipeline computations, hindering performance. Static interfaces enable efficient pipelined execution.
* A purely static implementation can only efficiently support fixed-sized matrices. Our unified approach makes it possible to support flexible matrix sizes while maintaining efficient pipelined execution.
* Systolic arrays often have fused post operations that apply elementwise functions to the product matrix. We show how mixed interfaces support various post ops, and we optimize the composed design across the static–dynamic boundary.

Pipelining processing elements. Calyx's systolic array generator decouples the logic for the PE from the systolic array itself to modularize code generation. This means that the systolic array must communicate with its PEs through dynamic interfaces. Because there are no timing guarantees, the generator does not pipeline the PEs and instead uses sequential multipliers. Extending the generator to a dynamically pipelined design would add unnecessary overhead; we would need queues to buffer values between PEs. Instead, the static abstractions let the systolic array communicate with its PEs using efficient static interfaces that facilitate pipelining. Besides removing the overhead from dynamic interfaces, this also simplifies the logic of the systolic array fabric, which is in charge of data movement. Because we have a pipeline with an initiation interval of 1, the fabric can unconditionally move data every cycle and guarantee that the right value will be read.
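As a hedged sketch of what a statically interfaced PE might look like (our own illustration: the component name, ports, and the pipelined multiplier's latency are hypothetical, and the MAC wiring is elided):

static<3> component pe(top: 32, left: 32) -> (right: 32, down: 32) {
  cells {
    mul = pipelined_mult(32);  // initiation interval 1, latency 3 (assumed)
    acc = std_reg(32);
    add = std_add(32);
  }
  wires {
    // multiply incoming operands and accumulate into acc (elided)
    right = left;  // forward operands to neighboring PEs
    down = top;
  }
}

Because the fabric knows the PE accepts new operands every cycle and needs no handshake, it can stream values unconditionally, which is exactly the simplification described above.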
Fixed contraction dimension. While static interfaces allow for efficient, pipelined execution, they can limit computational flexibility. For example, an output-stationary matrix-multiply systolic array should be able to multiply matrices of sizes i × k and k × j for any value of k. However, this requires dynamic control flow: the computation needs to repeat k times, where k is a runtime value. The IL's abstractions support this with ease: we use a |while| loop to execute the systolic array's logic k times. Furthermore, because the control program in the loop body is purely static, the compiler's special handling of while loops with static bodies ensures that the body executes every cycle (<ref>).

Supporting fused post operations. A common optimization in machine learning frameworks <cit.> fuses matrix multiplication with elementwise post operations, such as nonlinearities, to avoid writing the intermediate matrix back to memory. These post operations can be either fundamentally static or dynamic. Our goal is to decouple the implementation of post operations from the systolic array: to keep the code generation modular without sacrificing efficient interfaces. We implement two post operators (POs): (1) a static ReLU operation, |x > 0 ? x : 0|, and (2) a dynamic leaky ReLU <cit.> operation, |x > 0 ? x : 0.01*x|. The latter is dynamic because the true branch can directly forward the output while the false branch requires a multiplication.

<Ref> overviews the architecture. We instantiate the systolic array and PO components for the number of rows in the resulting matrix. If the PO is dynamic, the PO controller instantiates buffers to queue the output stream but elides them for static POs. The interface between the systolic array and PO is pipelined: a row's PO starts its computation as soon as an output is available. Most of the code—the systolic array, the controller, the PEs—is reused regardless of the PO's interface; the unified abstractions enable this reuse.

§.§ Evaluation

Our evaluation seeks to answer the following questions:

* Do the pipelined systolic arrays generated with the new IL outperform the existing Calyx-generated designs?
* Can the IL implement a runtime-configurable contraction dimension for systolic arrays with low overhead?
* Do cross-boundary optimizations let the compiler eliminate overheads when the systolic array is coupled with a static post operation?

Effect of pipelining. For the 16 × 16 design, the pipelined implementation achieves a max frequency of 270 MHz and performs the computation in 52 cycles, in comparison to the original design's 250 MHz and 248 cycles. The latency improvement comes from the pipelined execution and the frequency improvement from simplified control logic.

Configurable matrix dimensions. We compare systolic arrays with flexible and fixed matrix-size support. The flexible design takes 1 extra cycle to finish, uses 8% more LUTs (for logic to check the loop iteration bound), and uses the same number of registers.
The flexible design pays some overhead to gain dynamic functionality, while the fixed design is fully static, thereby eliminating dynamic overhead; the unified IL expresses both with minimal code changes.

Overhead of dynamic post operations.
We perform a synthetic experiment to quantify the overhead of a dynamic interface between the systolic array and the PO: we use the simple ReLU post operation in its default, static form and compare it against a version that artificially wraps it in a dynamic interface. Since the computation is the same, the only difference is the interface. <Ref> reports the cycle counts, LUTs, and register usage of the resulting designs. In addition to a higher cycle count, the dynamic implementation also has higher LUT and register usage, stemming from the extra control logic and buffers, respectively. We also implemented a truly dynamic post operator, leaky ReLU. We omit its measurements here because it conflates the costs of the operation and of the static–dynamic interaction.

§ RELATED WORK

The unified IL builds on a rich body of prior work on compilers for accelerator design languages (ADLs): high-level programming models for designing computational hardware. However, these compilers tend to prioritize either static or dynamic interfaces in the hardware they generate, or, when they combine both strategies, to disallow fluid transitions between the two styles. Traditional C-based high-level synthesis (HLS) compilers <cit.> intermix static and dynamic-latency operations, such as dividers. They do so using software ILs like LLVM <cit.>, which ties them to C-like, sequential computational models. Critically, traditional HLS tools are monolithic: they do not expose consistent intermediate representations that support modular pass development, decoupled frontends and backends, and layered correctness arguments. Our work contributes a stable IL that includes both software- and hardware-like abstractions and thus supports modular passes that address both static and dynamic control.

The most closely related compilers seek to combine aspects of static and dynamic control <cit.>. DASS <cit.> is the first HLS compiler we are aware of to specifically balance static and dynamic scheduling within the same program. In DASS, either the user <cit.> or some heuristic <cit.> identifies parts of the high-level design that would benefit from static scheduling. Compilation proceeds in two phases: DASS first compiles all the static islands, and then it uses a second, dynamic approach to schedule the rest of the program while treating the pre-compiled islands as opaque operators. In contrast, our unified IL can treat static portions of the program transparently and optimize them in the same framework as dynamic code. Szafarczyk et al. <cit.> take the opposite approach to DASS: they find sections of a previously statically scheduled program that are amenable to dynamic scheduling. The dynamic sections are decoupled from the static parts and compiled into processing elements that communicate over latency-insensitive channels. Hector <cit.> is a dialect of MLIR <cit.> that supports three scheduling styles: pipelined, static, and dynamic. Each style corresponds to a different Hector component type, uses a different syntax and semantics, and uses a different lowering strategy. In contrast to these, our system provides a unified IL in which either the frontend or a compiler heuristic (<ref>) can easily convert dynamic programs to static ones and vice versa, and lowers them using a single compilation pipeline.
This lets the compiler reuse optimizations between the two modes and even optimize across the boundary between dynamic and static code. Other ILs for ADL compilers also give passes control over scheduling, but focus on either static <cit.> or dynamic interfaces <cit.>. In particular, HIR <cit.> is an MLIR-based IL that describes schedules using time variables specifying the clock cycle at which each value in a design is available. Filament <cit.>, like HIR, explicitly dictates the cycle-level schedule of hardware operations, but it encodes these time intervals into a type system. Our relative timing guards (<ref>) work similarly and describe the cycle-level schedule for assignments. However, our timing guards are relative to the start of each group's execution. This relative timing limits the scope of static schedules and enables flexible composition with dynamic groups, scalable reasoning, and efficient lowering (<ref>). Finally, unlike both systems, our IL supports both static and dynamic interfaces.

§ CONCLUSION

Latency-sensitive hardware refines the semantics of latency-insensitive hardware. Every practical accelerator compiler must combine the two styles, and this correspondence is the foundation for combining them soundly.
http://arxiv.org/abs/2312.16300v1
{ "authors": [ "Caleb Kim", "Pai Li", "Anshuman Mohan", "Andrew Butt", "Adrian Sampson", "Rachit Nigam" ], "categories": [ "cs.PL", "cs.AR" ], "primary_category": "cs.PL", "published": "20231226191225", "title": "Unifying Static and Dynamic Intermediate Languages for Accelerator Generators" }
Federated learning (FL) enables multiple clients to collaboratively learn a shared model without sharing their individual data. Concerns about utility, privacy, and training efficiency in FL have garnered significant research attention. Differential privacy has emerged as a prevalent technique in FL, safeguarding the privacy of individual user data while impacting utility and training efficiency. Within Differential Privacy Federated Learning (DPFL), previous studies have primarily focused on the utility-privacy trade-off, neglecting training efficiency, which is crucial for timely completion. Moreover, differential privacy achieves privacy by introducing controlled randomness (noise) on selected clients in each communication round. Previous work has mainly examined the impact of the noise level (σ) and communication rounds (T) on the privacy-utility dynamic, overlooking other influential factors like the sample ratio (q, the proportion of selected clients). This paper systematically formulates an efficiency-constrained utility-privacy bi-objective optimization problem in DPFL, focusing on σ, T, and q. We provide a comprehensive theoretical analysis, yielding analytical solutions for the Pareto front. Extensive empirical experiments verify the validity and efficacy of our analysis, offering valuable guidance for low-cost parameter design in DPFL.

Index terms: trustworthy federated learning, bi-objective optimization, differential privacy

A Theoretical Analysis of Efficiency Constrained Utility-Privacy Bi-Objective Optimization in Federated Learning
Hanlin Gu^1, Xinyuan Zhao^1, Yuxing Han^2, Yan Kang, Lixin Fan, Member, IEEE, and Qiang Yang, Fellow, IEEE (^1 Hanlin Gu and Xinyuan Zhao contributed equally to this paper; ^2 Yuxing Han is the corresponding author.) This work was supported by the National Natural Science Foundation of China (No. 62206154) and Shenzhen Startup Funding (No. QD2023014C).

§ INTRODUCTION

The escalating stringency of legal and regulatory frameworks, exemplified by initiatives like GDPR[GDPR is applicable as of May 25th, 2018 in all European member states to harmonize data privacy laws across Europe. <https://gdpr.eu/>] and HIPAA[HIPAA is a federal law of the USA created in 1996. It required the creation of national standards to protect sensitive patient health information from being disclosed without the patient's consent or knowledge], has imposed rigorous constraints on the use of private user data. This has led to a situation where amalgamating private data from distinct users or organizations to train machine learning models is no longer allowed. Federated learning <cit.> is a pioneering machine learning paradigm that addresses the challenge of training models on decentralized data sources while respecting stringent privacy and security constraints. In federated learning, multiple participating entities or clients collaboratively train a shared machine learning model without directly sharing their raw data. Recently, beyond the traditional utility issue, privacy and efficiency issues in federated learning have attracted wide research attention.
In order to prevent privacy leakage from the exchanged information in FL, differential privacy has been proposed as an important privacy protection mechanism <cit.>, which is achieved by introducing noise into the exchanged gradients during training. In differential privacy federated learning (DPFL), there is a trade-off between utility and privacy, as demonstrated in <cit.>. For instance, it was shown that attackers may infer training images at pixel-level accuracy even when random noise is added to exchanged gradients <cit.>, while exceedingly large noise jeopardizes learning reliability and leads to significant degradation of utility <cit.>. On the other hand, training efficiency, which describes the global training time, is also an important concern in the federated learning framework <cit.>. Ignoring training efficiency may lead to exceedingly long training times outside an acceptable timeframe.

A series of works attempts to balance utility loss and privacy leakage in DPFL, but neglects training efficiency. Specifically, these works provide different strategies to search for parameters that yield an optimal privacy-utility trade-off in DPFL. Some work <cit.> aimed to minimize the utility loss while accounting for the resulting privacy leakage within the confines of a specified privacy budget (ϵ_0). Another line of work <cit.> first trained the model until convergence with different noise levels. Subsequently, the resulting privacy leakages are compared to identify the parameter settings that yield the least privacy leakage. However, the existing methods ignore training efficiency in DPFL, which refers to the global training time (discussed in detail in Sec. <ref>). Moreover, in the process of searching for optimal parameters, traditional methods primarily concentrate on the influence of the noise level (σ) and communication rounds (T), but ignore the sample ratio (q) as another important factor. Kang et al. <cit.> manipulate the noise level (σ) to simultaneously enhance privacy and reduce utility loss and training time. Other works <cit.> put forth different strategies to search for the noise level (σ) and communication rounds (T) with the aim of achieving an optimal privacy-utility trade-off in DPFL. These methods neglect the sample ratio (the fraction of participating clients in each round among all K clients, denoted q), which has significant influence on both the utility loss <cit.> and privacy leakage <cit.>. Considering the influence of the sample ratio (q), a naive extension of current methods that iterates over q suffers from very high computation cost.

In this work, we formulate a constrained bi-objective optimization problem in Sect. <ref> with the aim of ensuring acceptable training efficiency and reducing the computation cost of searching for optimal parameters. This formulation (Eq. <ref>) focuses on minimizing the privacy leakage and utility loss, and includes an upper constraint on training efficiency to ensure acceptable training time in DPFL. It identifies the Pareto front encompassing the noise level (σ), the communication rounds (T), and the sample ratio (q). Moreover, we conduct theoretical analysis to offer insights into the interplay among the various parameters in DPFL. Detailed theoretical findings can be found in Thm. <ref> of Sect. <ref>.
Notably, the Pareto optimal solutions of the constrained bi-objective optimization problem in DPFL adhere to the relationship kσ^2T = qK (K denotes the total number of clients; k is a constant), which serves as a powerful tool for low-cost parameter design in DPFL, as discussed in Sec. <ref>. Finally, experimental results in Sect. <ref> also verify the theoretical analysis of the Pareto optimal solutions[The definition of Pareto optimal solutions is given in Part C: Bi-Objective Optimization, Sec. II Related Work and Background] for different model architectures (logistic regression and LeNet <cit.>) on the MNIST <cit.> dataset.

Our main contributions are summarized as follows:
* We formulate the utility loss and privacy leakage, with an upper constraint on training efficiency, in differential privacy federated learning (DPFL) as a constrained bi-objective optimization problem with respect to the noise level (σ), communication rounds (T), and sample ratio (q).
* We theoretically elucidate the analytical Pareto optimal solutions of the constrained bi-objective optimization problem in DPFL w.r.t. σ, T, and q under different participant numbers K.
* Experiments with the LeNet <cit.> and logistic regression architectures on the MNIST <cit.> dataset verify the correctness of our theoretical analysis of the constrained bi-objective optimization problem in DPFL. Moreover, they illustrate that the theoretical analysis can guide clients to design effective parameters in DPFL at much lower computation cost than traditional methods.

§ RELATED WORK

§.§ Federated Learning

With the aim of protecting privacy and further improving model performance, federated learning has been proposed, where models are trained on distributed data under different privacy protection mechanisms. Clients update the model locally and periodically communicate with the central server to synchronize the model. Typically, federated learning deals with a single optimization problem in which K clients collaboratively train the model parameters w_fed <cit.>:

min_w_fed ϵ_u(w_fed) ≜ ∑_k=1^K (n_k/n) F_k(w_fed), where F_k(w_fed) = 𝔼_ξ∼ D_k F(w_fed; ξ).

Here n_k represents the size of the dataset D_k kept within client k, and n = ∑_k=1^K n_k is the total dataset size. The local objective F_k of the k-th client is the expectation of the loss function over samples from the local dataset D_k. The most well-studied algorithms are federated averaging (FedAVG) and federated stochastic gradient descent (FedSGD) <cit.>. FedAVG optimizes the local objective in parallel using stochastic gradient descent on each client and aggregates the model parameters at the server by simple averaging. FedSGD calculates the model update using randomly sampled local data and uploads the model update to the central server. FedAVG and FedSGD are equivalent when the number of local epochs E equals one.

§.§ Differential Privacy

Differential privacy serves as a strong, standard privacy guarantee, first introduced to protect single-instance privacy in terms of adjacent databases <cit.>. Treating an image-label pair as an instance in a database, (ϵ, δ)-differential privacy is defined as follows.

A randomized mechanism ℳ: 𝒟→ℛ with domain 𝒟 and range ℛ satisfies (ϵ, δ)-differential privacy if for any two adjacent inputs d, d^'∈𝒟 and for any subset of outputs 𝒮⊆ℛ it holds that

Pr[ℳ(d)∈𝒮] ≤ e^ϵ Pr[ℳ(d^')∈𝒮] + δ,

where two datasets are said to be adjacent if they differ in a single entry.
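As a concrete instance of this definition, the classical Gaussian mechanism adds N(0, σ^2) noise to a query with ℓ2-sensitivity Δ. The sketch below uses the textbook calibration σ ≥ Δ√(2 ln(1.25/δ))/ϵ, valid for ϵ < 1; this is a standard result from the differential privacy literature, not the moments-accountant bound with constants r and s used later in this paper.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng=None):
    """Release `value` with (eps, delta)-DP via the classical Gaussian
    mechanism; this calibration is valid for 0 < eps < 1."""
    assert 0 < eps < 1
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    if rng is None:
        rng = np.random.default_rng()
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Two adjacent "databases" differing in one entry; the noisy means are
# statistically hard to distinguish at the chosen privacy level.
d  = np.array([0.2, 0.9, 0.4])
d2 = np.array([0.2, 0.9, 0.5])
sens = 1.0 / len(d)  # changing one bounded entry moves the mean by <= 1/n
print(gaussian_mechanism(d.mean(), sens, eps=0.5, delta=1e-5))
print(gaussian_mechanism(d2.mean(), sens, eps=0.5, delta=1e-5))
```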
In the deep learning scenario, a series of works under different assumptions has been proposed to tighten the privacy leakage bound <cit.>, and the moments accountant has been introduced to quantitatively measure privacy leakage <cit.>. There exist constants r and s such that, given the sampling probability q = B/N (N is the size of the training set) and the total number of rounds T, for any ϵ < r q^2 T, the differentially private SGD algorithm <cit.> is (ϵ, δ)-differentially private for any δ > 0 if we choose

σ ≥ s q √(T log(1/δ)) / ϵ.

In federated learning, differential privacy also serves as a gold-standard guarantee that various algorithms are designed to achieve <cit.>.

§.§ Multi-objective Optimization

In multi-objective optimization, the aim is to find an x in the decision space ℛ^d that optimizes a set of m objective functions f_1(x), f_2(x), …, f_m(x) <cit.>:

min_x ∈ℛ^d G(x) ≜ (f_1(x), f_2(x), …, f_m(x)).

In non-trivial cases, the objective functions cannot all achieve their global optimum at the same x. Multi-objective optimization methods are used to resolve these conflicts and achieve different optimal trade-offs among the objectives. From the point of view of decision makers, multi-objective optimization provides a set of optimal solutions matching different preferences and requirements. For x_a, x_b ∈ℛ^d, we say x_a dominates x_b if and only if f_i(x_a) ≤ f_i(x_b) for all i ∈ [1, 2, …, m] and f_i(x_a) < f_i(x_b) for at least one i ∈ [1, 2, …, m]. We say x^* is a Pareto optimal solution if no other x^'∈ℛ^d dominates x^*. The Pareto set is the set of all Pareto optimal solutions {x_i}, and the Pareto front is its image {G(x_i)} in the objective space.

Multi-objective optimization can be a challenging job in federated learning. Different multi-objective optimization methods, such as evolutionary algorithms <cit.>, Bayesian optimization <cit.>, and gradient-based methods <cit.>, have been used to deal with multi-objective optimization problems in federated learning. In expensive black-box scenarios such as federated learning, evolutionary algorithms suffer from high computational cost. Bayesian optimization improves the computational efficiency, but its performance highly depends on the surrogate model and acquisition function. Gradient-based methods improve the computational efficiency by finding descent directions for all objective functions simultaneously, but require gradient information about the objectives.

§ MOO IN DPFL

In this section, we formulate the constrained bi-objective optimization problem in Differential Privacy Federated Learning (DPFL) by optimizing the privacy leakage and utility loss simultaneously under a training efficiency constraint.

§.§ Setting and Threat Model

We consider horizontal federated learning (HFL) in this paper, which involves K participating parties, each holding a private dataset D_k, k ∈ [K]. We assume the attacker to be semi-honest, i.e., it may launch privacy attacks on exchanged information to infer participants' private data. For instance, the semi-honest adversary may reconstruct a client's data from the exchanged model gradients <cit.>. To mitigate the privacy leakage, each participant applies a protection mechanism to the model information that will be shared with the server. This paper focuses on differential privacy <cit.>, i.e., each local client adds noise to its model gradients.
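Before detailing the full procedure, here is a minimal sketch of this per-client protection step (clip the model update to a norm bound, then add Gaussian noise) together with the server-side averaging. Scaling the noise standard deviation by the clipping constant follows the usual DP-SGD convention and is our assumption; function names are ours, not the paper's.

```python
import numpy as np

def protect_update(delta_w, c_clip, sigma, rng):
    """Clip a client's model update to L2 norm c_clip, then add
    isotropic Gaussian noise with standard deviation sigma * c_clip."""
    norm = np.linalg.norm(delta_w)
    clipped = delta_w / max(1.0, norm / c_clip)
    noise = rng.normal(0.0, sigma * c_clip, size=delta_w.shape)
    return clipped + noise

def aggregate(updates):
    """Server-side average over the sampled clients P_t in round t."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
# 8 sampled clients this round, each with a 10-dimensional update.
updates = [protect_update(np.random.randn(10), c_clip=1.0, sigma=0.05, rng=rng)
           for _ in range(8)]
global_step = aggregate(updates)   # added to the global model w^t
```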
The training procedure involves the following three steps in each round t (see also Algo. <ref>):
* Each client k trains its local model using its private dataset D_k for E local epochs and obtains the local model w_k^t,E, as in lines 12-16 of Algo. <ref>.
* In order to prevent semi-honest adversaries from inferring other clients' private information D_k, each client k clips its model gradients Δ w_k^t and adds Gaussian noise n_k^t, as shown in lines 17-18 of Algo. <ref>. The client then sends the protected model gradients Δ w_k^t to the server (line 19 of Algo. <ref>).
* The server aggregates the updates of the sampled clients by averaging (line 8 of Algo. <ref>). The global model is updated as

w^t+1 = w^t + (1/|P_t|) ∑_k∈ P_t Δ w_k^t,

where P_t denotes the set of clients sampled in round t. Then the server distributes the global model w^t+1 to all clients.

§.§ Privacy Leakage, Utility Loss, and Training Efficiency in DPFL

In this paper, we consider two objectives in DPFL, i.e., the privacy leakage and the utility loss, which are defined as follows.

Privacy Leakage. We follow Thm. 1 of <cit.> and Thm. 3.2 of <cit.> to define the differential privacy leakage ϵ_p of a local client in federated learning as:

ϵ_p = C c_clip √(qT log(1/δ)) / (√(K) σ),

where C is a constant, c_clip is the clipping constant in differential privacy, K is the total number of participants in federated learning, and q is the sample ratio representing the fraction of participating clients among all clients in each round. According to Eq. (<ref>), the privacy budget is influenced by three factors: q, T, and σ. Specifically, the privacy leakage is positively related to q and T and negatively related to σ.

Utility Loss. The utility loss ϵ_u of a federated learning system is defined as follows:

ϵ_u = U(w_fed^O) - U(w_fed^D),

where U(w_fed^D) and U(w_fed^O) measure the utility of the protected global model w_fed^D and the unprotected global model w_fed^O, respectively. Moreover, the following Lemma <ref> (Cor. 3.2.1 of <cit.>) provides the theoretical upper bound for Algo. <ref>.

For Algo. <ref>, assume F_k(x) satisfies ||∇ F_k(x) - ∇ F_k(y)|| ≤ L ||x-y||, ∀ k, x, y, that min_x F(x) ≥ F^*, that G is the bound on the stochastic gradients, and that C_clip is the clipping constant with C_clip ≥ η EG. Note that η/(qK) ≤ η since qK ≥ 1. By letting η ≤ min{qK/(6EL(P-1)), qK/(96E^2), 1/(√(60)EL)}, we have

(1/T) ∑_t=1^T 𝔼[||∇ F(x^t)||^2] ≤ 𝐎(1/(η E T) + η^2E^2 + η) + 𝐎(σ^2/(η qKE)),

where T and E are the communication rounds and local training epochs, η is the learning rate, K is the total number of clients, q is the sample ratio, and σ is the noise level in differential privacy (the standard deviation of the added noise). Note that the upper bound of the utility loss goes up as σ increases and as q and T decrease. K and E are always pre-defined, which is explored in the experimental ablation study of Sec. <ref>; c_clip is a given constant.

Training Efficiency.
We use the system training time to describe the training efficiency in DPFL as follows:

ϵ_e = c_t T,

where T is the number of communication rounds and c_t is the per-round training time in DPFL, treated as a constant since the training time per round is nearly uniform.

§.§ Constrained Bi-Objective Optimization in DPFL

Conventionally, existing work aims to minimize the utility loss given a privacy budget ϵ_0 <cit.>, which can be formulated as:

min_T, σ ϵ_u(T, σ), where ϵ_u(T, σ) = ∑_k=1^K p_k F_k(T, σ),
subject to ϵ_p(T, σ) ≤ ϵ_0,

where ϵ_p, ϵ_u, and ϵ_0 represent the privacy leakage, the utility loss, and the upper constraint on privacy leakage, respectively. However, the existing work only focuses on utility loss and privacy leakage and ignores training efficiency, which means it cannot ensure acceptable training time in DPFL. Moreover, both the privacy leakage and the utility loss are influenced by the communication rounds (T), noise level (σ), and sample ratio (q), as illustrated in the previous section. Hence Eq. (<ref>) is insufficient to obtain the complete Pareto front, as it only considers the influence of σ and T. In this work, we reformulate the optimization problem in the following definition.

The utility loss and privacy leakage bi-objective optimization problem with a training efficiency constraint w.r.t. the noise level (σ), communication rounds (T), and sample ratio (q) in DPFL is:

min_T, σ, q (ϵ_u(T, σ, q), ϵ_p(T, σ, q)), where
ϵ_u(T, σ, q) = ∑_k=1^K p_k F_k(T, σ, q),
ϵ_p(T, σ, q) = C c_clip √(qT log(1/δ)) / (√(K) σ),
subject to ϵ_e(T, σ, q) ≤ ϵ̄_e,

where ϵ_u represents the utility loss, ϵ_p the privacy leakage of Eq. (<ref>), and ϵ_e the training efficiency of Eq. (<ref>); p_k is the coefficient of F_k satisfying ∑_k=1^K p_k = 1, and ϵ̄_e is the upper constraint on training efficiency.

In the following sections, we analyze the constrained bi-objective optimization problem both theoretically and experimentally. On the one hand, we provide the theoretical analysis of the Pareto optimal solutions of Eq. (<ref>) in Sec. <ref>. On the other hand, we solve Eq. (<ref>) by the non-dominated sorting Algo. <ref> in Sect. <ref> to determine the Pareto optimal solutions. Moreover, the experimental results in Sect. <ref> validate the theoretical conclusions.

§ THEORETICAL ANALYSIS

In this section, we commence by simplifying the constrained bi-objective optimization problem using an upper bound on the utility loss in Sect. <ref>. Subsequently, we conduct a comprehensive theoretical investigation of the constrained bi-objective problem, culminating in analytical expressions for the Pareto solutions involving three distinct parameters: communication rounds (T), noise level (σ), and sample ratio (q) in Sect. <ref>. Our approach first analyzes the Pareto front and Pareto solutions over all three parameters with unconstrained q and σ. Then, with given q and constrained σ, we focus on T and σ and analyze the Pareto solutions in detail across different cases.

§.§ Simplified Formulation

We obtain a simplified version of the constrained bi-objective optimization formulation by using the differential privacy leakage ϵ_p(T, σ, q), the upper bound of the utility loss ϵ_u(T, σ, q), and the training efficiency ϵ_e illustrated in Sect.
<ref>:

min (f_1(T, σ, q), f_2(T, σ, q)), where
f_1(T, σ, q) = 𝐎(1/(η E T) + η^2E^2 + η) + 𝐎(σ^2/(η qKE)),
f_2(T, σ, q) = C c_clip √(qT log(1/δ)) / (√(K) σ),
subject to f_3(T, σ, q) = c_t T ≤ ϵ̄_e.

As the number of clients (K), the local training epochs (E), the learning rate (η), and the clipping constant (c_clip) are usually pre-determined in real scenarios, and the per-round training time c_t is approximately uniform across rounds, we keep K, E, c_clip, and c_t as constants. Focusing on the parameters σ, T, and q, the constrained bi-objective formulation can be further simplified as follows, according to Lemma <ref>:

min (f_1(T, σ, q), f_2(T, σ, q)), where
f_1(T, σ, q) = 1/T + kσ^2/(qK), with k constant,
f_2(T, σ, q) = √(qT)/σ,
subject to f_3(T, σ, q) = c_t T ≤ ϵ̄_e.

With the aim of simplifying the objective functions, we provide the following lemma.

The Pareto optimal solutions of Eq. (<ref>) and Eq. (<ref>) are equivalent, where a_1, a_2 ∈𝐑^+ and m_1, m_2 ∈𝐑 are constants:

f_1(x) = a_1 h_1(x) + m_1, f_2(x) = a_2 h_2(x) + m_2;
f_1^'(x) = h_1(x), f_2^'(x) = h_2(x).

Suppose that x_0 is a Pareto optimal solution of Eq. (<ref>). By the definition of a Pareto optimal solution, we have:

∀ x_i ≠ x_0, ∀ j ∈{1,2}: f_j(x_0) ≤ f_j(x_i), and ∃ j ∈{1,2}: f_j(x_0) < f_j(x_i)
⟺ ∀ x_i ≠ x_0, ∀ j: a_j h_j(x_0) + m_j ≤ a_j h_j(x_i) + m_j, and ∃ j: a_j h_j(x_0) + m_j < a_j h_j(x_i) + m_j
⟺ ∀ x_i ≠ x_0, ∀ j: h_j(x_0) ≤ h_j(x_i), and ∃ j: h_j(x_0) < h_j(x_i)
⟺ ∀ x_i ≠ x_0, ∀ j: f_j^'(x_0) ≤ f_j^'(x_i), and ∃ j: f_j^'(x_0) < f_j^'(x_i).

Thus x_0 is also a Pareto optimal solution of Eq. (<ref>), and the Pareto optimal solutions of the two problems are equivalent.

§.§ Pareto Optimal Solutions

In this section, we derive the analytical Pareto optimal solutions w.r.t. T, σ, and q in Thm. <ref>, and the analytical Pareto optimal solutions with a pre-defined q under different constraint cases in Cor. <ref>.

(Analytical Solutions w.r.t. 𝐪, 𝐓, σ with unconstrained σ and 𝐪) Assuming σ∈ [0,+∞) and q ∈ (0,1], the Pareto optimal solutions of Eq. (<ref>) are:

k σ^2 T = q K,

where k and c_t are constants and T ∈ [1, …, ⌊ϵ̄_e/c_t⌋].

Set X = 1/T + (k/K)(σ^2/q) with constant k. The objective functions f_1 and f_2 in Eq. (<ref>) are converted to the following form:

f_1(X, σ, q) = X, f_2(X, σ, q) = √(q/(σ^2(X - kσ^2/(qK)))),

where X ∈ (0,+∞), q ∈ (0,+∞), σ∈ (0,+∞). For a specific X, f_2 reaches its minimum 2√(k)/(√(K)X) when σ^2/q = KX/(2k), according to the inequality of arithmetic and geometric means. Specifically,

f_2(σ^2, q | X) = √(1/((σ^2/q)(X - (k/K)(σ^2/q)))) ≥ f_2(σ^2/q = KX/(2k) | X) = 2√(k)/(√(K)X).

Therefore, given X, the point (X, 2√(k)/(√(K)X)) Pareto dominates the set {(X, √(q/(σ^2(X-kσ^2/(qK))))) | σ^2/q ≠ KX/(2k), ∀σ, q}. As a result, the Pareto set of Eq. (<ref>) must be a subset of {(X, 2√(k)/(√(K)X)), ∀ X} ≜ S. Since the points in S do not dominate each other, the set S is the Pareto set of Eq. (<ref>). Moreover, by substituting X via σ^2/q = KX/(2k), we have

σ^2/q = KX/(2k) ⟹ σ^2/q = K(1/T + kσ^2/(qK))/(2k) ⟹ k σ^2 T = q K,

where k and c_t are constants and T ∈ [1, …, ⌊ϵ̄_e/c_t⌋]. This completes the proof.

In Proof <ref>, the objective functions of Eq. (<ref>) can be written w.r.t. σ^2/q and T as:

f_1(X, σ^2/q) = X, f_2(X, σ^2/q) = √(1/((σ^2/q)(X - (k/K)(σ^2/q)))),

where X = 1/T + (k/K)(σ^2/q). The Pareto optimal solutions of Eq. (<ref>) can likewise be written w.r.t. σ^2/q and T as:

(σ^2/q) T = K/k.

Therefore, the fraction σ^2/q can be regarded as a single parameter in Proof <ref>; a brute-force numerical check of this relationship is sketched below.
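The closed form can be sanity-checked numerically: sweep a grid over T and the ratio r = σ^2/q, evaluate f_1 = 1/T + kr/K and f_2 = √(T/r), keep the non-dominated points, and verify that the interior survivors satisfy rT ≈ K/k up to the grid step. The constants below are arbitrary placeholders, not values from the paper; survivors pinned at the parameter bounds mirror the constrained cases of the corollary that follows.

```python
import numpy as np

k, K = 2.5, 10.0                        # placeholder constants, K/k = 4.0
Ts = np.arange(1, 31)                   # communication rounds, T_max = 30
rs = np.linspace(0.05, 5.0, 50)         # candidate values of r = sigma^2 / q

# Each point: (T, r, f1, f2) with f1 = 1/T + k*r/K and f2 = sqrt(T/r).
pts = [(int(T), float(r), 1.0 / T + k * r / K, float(np.sqrt(T / r)))
       for T in Ts for r in rs]

def non_dominated(points):
    """Naive O(n^2) non-dominated filter on the (f1, f2) objectives."""
    return [p for p in points
            if not any(q[2] <= p[2] and q[3] <= p[3]
                       and (q[2] < p[2] or q[3] < p[3]) for q in points)]

front = non_dominated(pts)
# Away from the grid bounds, survivors should satisfy r * T = K / k = 4.0
# up to grid resolution.
interior = [(T, r) for T, r, _, _ in front
            if 1 < T < Ts.max() and rs.min() < r < rs.max()]
print(sorted({round(T * r, 1) for T, r in interior}))
```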
Based on that, keeping one of σ and q constant and iterating over the other can still trace out the whole Pareto front with unconstrained σ and q. In real scenarios, the sample ratio (q) is usually decided by the server and distributed to the clients. Based on this common setting, we keep the sample ratio (q) constant and further provide the analytical solutions w.r.t. σ and T in Corollary <ref> under different cases. We also illustrate the analytical solutions for the different cases in Fig. <ref>.

With a pre-defined sample ratio (q), the Pareto solutions of Eq. (<ref>) are as follows in three cases:
* Case-I (Unconstrained σ). Assuming σ∈ [0,+∞), the Pareto optimal solutions are k σ^2 T = qK, where k, c_t, and q are constants and T ∈ [1, …, ⌊ϵ̄_e/c_t⌋].
* Case-II (Constrained σ with k σ_max^2 ⌊ϵ̄_e/c_t⌋ > qK). Assuming σ∈ [0,σ_max] and k σ_max^2 ⌊ϵ̄_e/c_t⌋ > qK, the Pareto optimal solutions are:
σ = σ_max when T ∈{1,…,⌊qK/(kσ_max^2)⌋};
σ = √(qK/(kT)) when T ∈{⌈qK/(kσ_max^2)⌉, …, ⌊ϵ̄_e/c_t⌋-1};
σ∈ [0, √(qK/(k⌊ϵ̄_e/c_t⌋))] when T = ⌊ϵ̄_e/c_t⌋.
* Case-III (Constrained σ with k σ_max^2 ⌊ϵ̄_e/c_t⌋ ≤ qK). Assuming σ∈ [0,σ_max] and k σ_max^2 ⌊ϵ̄_e/c_t⌋ ≤ qK, the Pareto optimal solutions are:
σ = σ_max when T ∈{1,2,…,⌊ϵ̄_e/c_t⌋-1};
σ∈ [0,σ_max] when T = ⌊ϵ̄_e/c_t⌋.

Set X = 1/T + (k/K)(σ^2/q) with constants k and q. The objective functions f_1 and f_2 in Eq. (<ref>) are converted to the following two forms:

f_1(X, σ) = X, f_2(X, σ) = √(q/(σ^2(X - kσ^2/(qK))));
f_1(X, T) = X, f_2(X, T) = √(kT^2/(K(XT-1))).

Set T_max = ⌊ϵ̄_e/c_t⌋. The constraint f_3(T,σ,q) = c_t T ≤ ϵ̄_e can be converted to:

c_t T ≤ ϵ̄_e, T ∈𝒵^+ ⟹ T ≤ ϵ̄_e/c_t, T ∈𝒵^+ ⟹ T ∈ [1, 2, …, T_max].

Case-I (Unconstrained σ). In Case-I, we have σ∈ [0,+∞), given q, and T ∈ [1,2,…,T_max]. For a specific X, f_2 reaches its minimum value 2√(k)/(√(K)X) when σ^2 = qKX/(2k), by the inequality of arithmetic and geometric means. Specifically,

f_2(σ^2 | X) = √(q/(σ^2(X - kσ^2/(qK)))) ≥ f_2(σ^2 = qKX/(2k) | X) = 2√(k)/(√(K)X).

Therefore, given X, the point (X, 2√(k)/(√(K)X)) Pareto dominates the set {(X, √(q/(σ^2(X-kσ^2/(qK))))) | σ^2 ≠ qKX/(2k), ∀σ}. As a result, the Pareto set of Eq. (<ref>) must be a subset of {(X, 2√(k)/(√(K)X)), ∀ X} ≜ S. Since the points in S do not dominate each other, the set S is the Pareto set of Eq. (<ref>). Moreover, by substituting X via σ^2 = qKX/(2k), the Pareto solutions follow:

σ^2 = qKX/(2k) ⟹ σ^2 = qK(1/T + kσ^2/(qK))/(2k) ⟹ k σ^2 T = q K.

Case-II (Constrained σ with 𝐤σ_𝐦𝐚𝐱^2 ⌊ϵ̄_𝐞/𝐜_𝐭⌋ > 𝐪𝐊). As T_max = ⌊ϵ̄_e/c_t⌋, we have T ∈ [1,…,T_max] and k σ_max^2 T_max > qK, with σ∈ [0,σ_max] and given q. We separate the value of X into three parts.
* Given X ∈ [1/T_max, 2/T_max), we have X/2 < 1/T_max ≤ 1/T, ∀ T ∈ [1,2,…,T_max]. f_2 monotonically decreases when T < 2/X:

f_2(T | X) = √(kT^2/(K(XT-1))) ≥ f_2(T = T_max | X) = √(kT_max^2/(K(XT_max-1))).

Therefore, given X ∈ [1/T_max, 2/T_max), the point (X, √(kT_max^2/(K(XT_max-1)))) Pareto dominates the set {(X, √(kT^2/(K(XT-1)))) | T ≠ T_max, ∀ T} ≜ S_1.
* Given X ∈ [2/T_max, 2kσ_max^2/(qK)], f_2 reaches its minimum 2√(k)/(√(K)X) when σ^2 = qKX/(2k), by the AM-GM inequality:

f_2(σ^2 | X) = √(q/(σ^2(X - kσ^2/(qK)))) ≥ f_2(σ^2 = qKX/(2k) | X) = 2√(k)/(√(K)X).

Therefore, given X ∈ [2/T_max, 2kσ_max^2/(qK)], the point (X, 2√(k)/(√(K)X)) Pareto dominates the set {(X, √(q/(σ^2(X-kσ^2/(qK))))) | σ^2 ≠ qKX/(2k), ∀σ} ≜ S_2.
* Given X ∈ (2kσ_max^2/(qK), 1 + kσ_max^2/(qK)], we have qKX/(2k) > σ_max^2 ≥σ^2. f_2 monotonically decreases when σ^2 < qKX/(2k):

f_2(σ^2 | X) = √(q/(σ^2(X - kσ^2/(qK)))) ≥ f_2(σ = σ_max | X) = √(q/(σ_max^2(X - kσ_max^2/(qK)))).

Therefore, given X ∈ (2kσ_max^2/(qK), 1 + kσ_max^2/(qK)], the point (X, √(q/(σ_max^2(X-kσ_max^2/(qK))))) Pareto dominates the set {(X, √(q/(σ^2(X-kσ^2/(qK))))) | σ≠σ_max, ∀σ} ≜ S_3.

As a result, the Pareto set of Eq. (<ref>) must be a subset of S_1 ∪ S_2 ∪ S_3 ≜ S. Since the points in S do not dominate each other, the set S is the Pareto set of Eq. (<ref>). We derive the Pareto solution set from the Pareto front as follows.
* For X ∈ [1/T_max, 2/T_max), the Pareto front is reached when T = T_max, giving the Pareto solutions: ∀σ∈ [0, √(qK/(kT_max))], T = T_max.
* For X ∈ [2/T_max, 2kσ_max^2/(qK)], the Pareto front is reached when σ = √(qKX/(2k)). As σ = √(qKX/(2k)) is equivalent to σ^2 T = qK/k, we have the Pareto solutions: σ = √(qK/(kT)) when T ∈{⌈qK/(kσ_max^2)⌉, …, T_max-1}.
* For X ∈ (2kσ_max^2/(qK), 1 + kσ_max^2/(qK)], the Pareto front is reached when σ = σ_max, giving the Pareto solutions: σ = σ_max when T ∈{1,…,⌊qK/(kσ_max^2)⌋}.
As T_max = ⌊ϵ̄_e/c_t⌋, this completes the proof of Case-II.

Case-III (Constrained σ with 𝐤σ_𝐦𝐚𝐱^2 ⌊ϵ̄_𝐞/𝐜_𝐭⌋ ≤ 𝐪𝐊). As T_max = ⌊ϵ̄_e/c_t⌋, we have T ∈ [1,…,T_max] and k σ_max^2 T_max ≤ qK, with σ∈ [0,σ_max] and given q. We separate the value of X into two parts.
* Given X ∈ [1/T_max, 1/T_max + kσ_max^2/(qK)), we have X < 1/T_max + kσ_max^2/(qK) ≤ 1/T_max + (k/(qK))(qK/(kT_max)) = 2/T_max, which means T ≤ T_max < 2/X. f_2 monotonically decreases when T < 2/X:

f_2(T | X) = √(kT^2/(K(XT-1))) ≥ f_2(T = T_max | X) = √(kT_max^2/(K(XT_max-1))).

Therefore, given X ∈ [1/T_max, 1/T_max + kσ_max^2/(qK)), the point (X, √(kT_max^2/(K(XT_max-1)))) Pareto dominates the set {(X, √(kT^2/(K(XT-1)))) | T ≠ T_max, ∀ T} ≜ S_1.
* Given X ∈ [1/T_max + kσ_max^2/(qK), 1 + kσ_max^2/(qK)], we have X ≥ 1/T_max + kσ_max^2/(qK) ≥ kσ_max^2/(qK) + kσ_max^2/(qK), which means σ^2 ≤σ_max^2 ≤ qKX/(2k). f_2 monotonically decreases when σ^2 < qKX/(2k):

f_2(σ^2 | X) = √(q/(σ^2(X - kσ^2/(qK)))) ≥ f_2(σ = σ_max | X) = √(q/(σ_max^2(X - kσ_max^2/(qK)))).

Therefore, given X ∈ [1/T_max + kσ_max^2/(qK), 1 + kσ_max^2/(qK)], the point (X, √(q/(σ_max^2(X-kσ_max^2/(qK))))) Pareto dominates the set {(X, √(q/(σ^2(X-kσ^2/(qK))))) | σ≠σ_max, ∀σ} ≜ S_2.

As a result, the Pareto set of Eq. (<ref>) must be a subset of S_1 ∪ S_2 ≜ S. Since the points in S do not dominate each other, the set S is the Pareto set of Eq. (<ref>). We derive the Pareto solution set from the Pareto front as follows.
* For X ∈ [1/T_max, 1/T_max + kσ_max^2/(qK)), the Pareto front is reached when T = T_max, giving the Pareto solutions: ∀σ∈ [0,σ_max], T = T_max.
* For X ∈ [1/T_max + kσ_max^2/(qK), 1 + kσ_max^2/(qK)], the Pareto front is reached when σ = σ_max, giving the Pareto solutions: σ = σ_max when T ∈{1, 2, …, T_max-1}.
As T_max = ⌊ϵ̄_e/c_t⌋, this completes the proof of Case-III.

This completes the proof of the relationship among the Pareto optimal σ, T, and q in all three cases. The theoretical analysis provides a theoretical guarantee for guiding optimal parameter design at low cost in DPFL.

§ EXPERIMENT

In this section, we use experiments to verify our theoretical analysis. Firstly, we display the Pareto solutions of the efficiency-constrained utility-privacy bi-objective optimization problem in DPFL with respect to the noise level (σ), communication rounds (T), and sample ratio (q) in Sec. <ref>. Secondly, we further investigate the influence of the total number of participating clients (K) and the local training epochs (E) on the Pareto solutions in Sec. <ref>.
Finally, we demonstrate the process of low-cost parameter design guided by Thm. <ref> and Cor. <ref> in Sec. <ref>.

§.§ Experimental Setup

We use both LR (logistic regression) and LeNet <cit.> on the MNIST <cit.> dataset to verify our theoretical analysis. The MNIST dataset includes 10000 testing samples and 60000 training samples, which are identically divided into K parts and kept locally within each client. The maximum number of communication rounds T_max is set to 200. For the LR model, the range of σ is set to [0.010, 0.150]; for the LeNet model, the range of σ is set to [0.010, 0.050]. With a batch size of B = 64, we use a stochastic gradient descent optimizer with learning rate η = 0.01. To better estimate the test loss, we use multiple random seeds (30 for LR, 50 for LeNet) and average the test loss over the different seeds for each given σ, T, and q.

§.§ Pareto Solutions in DPFL

We present the Pareto solutions from both theoretical and experimental perspectives, as depicted in Fig. <ref>. In our approach, we employ non-dominated sorting, as outlined in Algo. <ref>, to identify the experimental Pareto solutions, represented by the black points. We then compare these experimental solutions with the theoretical solutions derived in Thm. <ref>, illustrated separately in Fig. <ref> and Fig. <ref>. Notably, Fig. <ref> reveals a strong alignment between the experimental and theoretical Pareto solutions, signifying the accuracy of our theoretical model. Furthermore, to provide additional confirmation of the Pareto solutions derived in Cor. <ref>, we present a comparison of the theoretical and experimental Pareto solutions for the different cases with a fixed q, as illustrated in Fig. <ref>. Across all three cases of Cor. <ref>, the Pareto solutions obtained through Algo. <ref> (depicted as black points) align closely with the theoretical solutions (represented by the green surface) established in Cor. <ref>. This consistency serves as strong evidence of the validity and reliability of our analytical model in predicting the Pareto front under various conditions.

§.§ Ablation Study on K and E

In this subsection, we investigate the impact of the local epochs (E) and the number of clients (K) on the Pareto solutions with a fixed sample ratio (q), as discussed in Cor. <ref>. Our analysis yields the following key conclusions:
* Local epochs (E) exhibit minimal influence: we find that the local epochs (E) have a negligible impact on the Pareto solutions. This observation is consistent across cases where E is set to different values, such as E = 5, 10, 20, as demonstrated in Fig. <ref>. This similarity in Pareto solutions indicates that the choice of E does not significantly alter the parameters needed to achieve the optimal trade-off between privacy and utility.
* The number of clients (K) affects the Pareto solutions directly: we observe that changes in the experimental Pareto solutions concerning σ and T (as indicated by the black points in Figure <ref>) exhibit a direct relationship with the number of participating clients, denoted K. Specifically, the relationship between σ and T follows the equation k σ^2 T = qK, which is represented by the green line in Figure <ref>.
This relationship implies that as the number of clients increases, the values of σ and T must adjust accordingly to maintain consistent Pareto solutions. These findings provide valuable insights into the factors that influence the Pareto solutions in federated learning with differential privacy, enabling practitioners to make informed decisions when designing privacy-preserving algorithms.

§.§ Demonstration of Low-Cost Parameter Design

In real scenarios, due to privacy-preservation limitations and the training efficiency constraint, it is not realistic to run a large number of experiments to search for the best parameters, i.e., the communication rounds (T), noise level (σ), and sample ratio (q). In other words, ensuring that the privacy leakage and utility loss reach the Pareto front with acceptable training efficiency in DPFL can be a challenging job. To address this challenge, the theoretical analysis (Thm. <ref>) can serve as important guidance for low-cost parameter design in the DPFL framework. Specifically, the parameter design process is structured into two distinct stages:
* In the first step, using a small portion of a public dataset and keeping the sample ratio fixed at q_0, given the acceptable training efficiency constraint and with the total number of clients set to K_0, clients undertake pre-experiments following the procedure outlined in Algo. <ref>. These experiments identify the Pareto optimal solutions for the noise level (σ) and the communication rounds (T), represented as the black points in Fig. <ref> and <ref>. Subsequently, clients extrapolate the behavior of these black points using the relationship kσ^2T = q_0K_0 (depicted as the red line in Fig. <ref> and <ref>). This extrapolation enables the estimation of the constant k [It is important to note that this constant k remains consistent whether considering two parameters, σ and T, or three parameters, namely q, T, and σ, as indicated in Cor. <ref>.].
* In the second step, the server uniformly distributes the common sample ratio q_r and global training epochs T_r to all participating clients. Each client then calculates its noise level as σ_r = √(q_r K/(kT_r)).

Taking the LeNet and LR models as two examples, let the number of clients (K_r) and sample ratio (q_r) be 40 and 0.5, respectively. For the public dataset, we randomly sample 10% of the original MNIST dataset. With q_0 = 1.0 and K_0 = 10, the constant k is estimated to be 2.5 and 15.0 for the LR and LeNet models, respectively, as illustrated in Fig. <ref> and <ref>. Then, to validate the correctness of the estimated constant k, we compare the theoretical and experimental solutions of the LR and LeNet models in Fig. <ref> and <ref>, respectively. We derive the theoretical solutions using the estimated constant k and represent them by green points. The black points represent the experimental solutions, obtained via Algo. <ref> using all MNIST data. Figures <ref> and <ref> demonstrate that these black points align with the theoretical Pareto front obtained using the estimated values of k.

We compare the computation cost (the time for guiding parameter design and the time for obtaining the Pareto set) of our method with two existing types of methods. The first method, Training with Budget <cit.>, minimizes the utility loss while keeping the privacy leakage below a specified privacy budget.
The second method, Training until Convergence <cit.>, minimizes the utility loss until the model converges under different noise levels and identifies the noise level with the least privacy leakage. Table <ref> shows the time needed to obtain the optimal parameter design with given T and q from the server. For our proposed method, the process of guiding parameter design can be divided into two parts: 1) the pre-experiment time, denoted t_0, is much smaller than the formal training time, since the public dataset used in the pre-experiment is much smaller; 2) the main training time is the global training epochs T_r, since the clients directly calculate the optimal σ with given q_r and T_r according to Theorem <ref>. The methods Training with Budget and Training until Convergence need to search for the optimal σ; thus, their computation complexity is Θ(n_σ T_r), where n_σ is the number of different σ values. Similarly, Table <ref> contrasts the time required to locate the entire Pareto set under the different methods. It reveals that our proposed method reduces the search time by a factor of n_q compared to Training with Budget and Training until Convergence, i.e., it eliminates the need to search over q. This efficiency is attributed to Theorem <ref>, which aids in determining the sample ratio q given σ and T.

§ CONCLUSION AND DISCUSSION

Bearing in mind the aim of achieving an optimal privacy-utility trade-off within an acceptable training efficiency constraint, we formulate the constrained bi-objective optimization problem in Differential Privacy Federated Learning (DPFL). The theoretical analysis of this problem can serve as important guidance for parameter design in DPFL, helping us avoid expensive neural network training and federated system evaluation <cit.>. By using a small proportion of public data, we can obtain an approximate estimate of the exact relationship among T, σ, and q. In the future, similar theoretical analysis can be carried out for different protection mechanisms in the federated learning framework, as long as a sufficiently tight upper bound on the utility loss is available.
http://arxiv.org/abs/2312.16554v1
{ "authors": [ "Hanlin Gu", "Xinyuan Zhao", "Yuxing Han", "Yan Kang", "Lixin Fan", "Qiang Yang" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231227123755", "title": "A Theoretical Analysis of Efficiency Constrained Utility-Privacy Bi-Objective Optimization in Federated Learning" }
^1 Department of Computer Science, Northwestern University, Evanston, IL, USA
^2 Argonne National Laboratory, Lemont, IL, USA
^3 Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
[email protected]
Dec. 2023

Recent advances in event-based shape determination from polarization offer a transformative approach that tackles the trade-off between speed and accuracy in capturing surface geometries. In this paper, we investigate event-based shape from polarization using Spiking Neural Networks (SNNs), introducing the Single-Timestep and Multi-Timestep Spiking UNets for effective and efficient surface normal estimation. Specifically, the Single-Timestep model treats event-based shape from polarization as a non-temporal task, updating the membrane potential of each spiking neuron only once, thereby reducing computational and energy demands. In contrast, the Multi-Timestep model exploits temporal dynamics for enhanced data extraction. Extensive evaluations on synthetic and real-world datasets demonstrate that our models match the performance of state-of-the-art Artificial Neural Networks (ANNs) in estimating surface normals, with the added advantage of superior energy efficiency. Our work not only contributes to the advancement of SNNs in event-based sensing but also sets the stage for future explorations in optimizing SNN architectures, integrating multi-modal data, and scaling for applications on neuromorphic hardware.

§ INTRODUCTION

Precise surface normal estimation can provide valuable information about a scene's geometry and is useful for many computer vision tasks, including 3D Reconstruction <cit.>, Augmented Reality (AR) and Virtual Reality (VR) <cit.>, Material Classification <cit.>, and Robotics Navigation <cit.>. Depending upon the requirements of the application, surface normal estimation can be carried out using a variety of methods <cit.>. In this work, we are interested in estimating surface normals from polarization images, i.e., shape from polarization <cit.>. In particular, shape from polarization leverages the polarization state of light to infer the shape of objects. When light reflects off surfaces, it becomes partially polarized. Shape from polarization uses this property to estimate the surface normals of objects, which are then used to reconstruct their 3D shape. Compared to other 3D sensing methods, shape from polarization has many advantages <cit.>, such as its suitability for capturing fine details on a variety of surface materials, including reflective and transparent ones, and its reliance on passive sensing, which eliminates the need for external light sources or emitters. Additionally, shape from polarization can provide high-precision data with relatively low-cost and low-energy equipment, making it an efficient and versatile option for 3D imaging in various applications.

Typically, a polarizing filter is used in conjunction with a camera to capture the polarization images and infer the polarization information. Generally, there are two ways to capture the polarization images and estimate the surface normals from them: one is Division of Time (DoT) <cit.> and the other is Division of Focal Plane (DoFP) <cit.>. The DoT approaches add a rotatable linear polarizer in front of the lens of an ordinary camera. The filter is rotated to different orientations, and full-resolution polarization images are captured for each orientation at different times. By analyzing the changes in the polarization state of light across these images, the surface normals of objects can be estimated (the underlying sinusoidal measurement model is sketched below).
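For reference, the measurement model behind the DoT scheme is the standard transmitted-radiance sinusoid I(φ_pol) = (I_un/2)(1 + ρ cos(2φ_pol − 2φ)), where ρ is the degree and φ the angle of linear polarization. The sketch below recovers (I_un, ρ, φ) per pixel from samples at several polarizer angles via a linear least-squares fit; this is the textbook model, not code from the works discussed here.

```python
import numpy as np

def fit_polarization(angles, intensities):
    """Least-squares fit of I = a0 + a1*cos(2p) + a2*sin(2p) to recover
    the unpolarized intensity, degree (rho), and angle (phi) of
    linear polarization at one pixel."""
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a0, a1, a2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    i_un = 2 * a0
    rho = np.hypot(a1, a2) / a0
    phi = 0.5 * np.arctan2(a2, a1)
    return i_un, rho, phi

# Synthetic pixel: samples at several polarizer angles over a half turn.
angles = np.linspace(0, np.pi, 8, endpoint=False)
i_un, rho, phi = 1.0, 0.4, 0.7                    # ground truth
I = i_un / 2 * (1 + rho * np.cos(2 * (angles - phi)))
print(fit_polarization(angles, I))                # ~ (1.0, 0.4, 0.7)
```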
The DoT methods use the full resolution of the sensor but trade it off against acquisition time. On the other hand, the DoFP methods place an array of micro-polarizers in front of the camera <cit.>. This allows the camera to capture polarization information at different orientations in a single shot. Despite the reduced latency, this system is limited by the low resolution of the polarization images, as each pixel only captures polarization at a specific orientation. This can result in lower accuracy compared to the DoT methods.

To bridge the accuracy of DoT with the speed of DoFP, researchers have proposed event-based shape from polarization following the DoT design scheme <cit.>. Specifically, a polarizer rotating in front of an event camera <cit.> creates sinusoidal changes in brightness intensity. Unlike traditional DoT methods, which use standard cameras to capture full-resolution polarization images at fixed rates, event-based shape from polarization employs event cameras to asynchronously measure changes in brightness intensity for each pixel within the full-resolution scene, triggering events with microsecond resolution when the difference in brightness exceeds a threshold. The proposed event-based method uses the continuous event stream to reconstruct relative intensities at multiple polarizer angles. These reconstructed polarization images are then utilized to estimate surface normals using physics-based and learning-based methods <cit.>. Due to this DoT-driven design and the low latency event cameras provide, event-based shape from polarization mitigates the accuracy-speed trade-off in the traditional shape from polarization field.

Although event-based shape from polarization brings many advantages, we still need to carefully choose models that process the data from event cameras. With the prevalence of Artificial Neural Networks (ANNs), one recent method <cit.> employs ANNs to process event data and demonstrates better surface normal estimation performance compared to physics-based methods. However, ANNs are not compatible with the working mechanism of event cameras and incur high energy consumption. To be more compatible with event cameras while maintaining high energy efficiency, research on Spiking Neural Networks (SNNs) <cit.> has started to gain momentum. Similar to event cameras, which mimic the human retina's way of responding to changes in light intensity, SNNs are also bio-inspired and designed to emulate the neural dynamics of human brains. Unlike ANNs, which employ artificial neurons <cit.> and conduct real-valued computation, SNNs adopt spiking neurons <cit.> and utilize binary 0-1 spikes to process information. This difference reduces the dot-product operations in ANNs to less computationally expensive summation operations in SNNs <cit.>. Due to this advantage, SNNs are energy-efficient and suitable for power-constrained devices. Although SNNs demonstrate higher energy efficiency and much dedication has been devoted to SNN research, ANNs still present better performance and dominate in a wide range of learning applications <cit.>. Recently, more research effort has been invested in shrinking the performance gap between ANNs and SNNs, and SNNs have achieved comparable performance in various tasks, including image classification <cit.>, object detection <cit.>, graph prediction <cit.>, natural language processing <cit.>, etc.
Nevertheless, we have not yet witnessed the establishment of SNNs for accurate surface normal estimation with advanced performance. This naturally raises a question: could bio-inspired Spiking Neural Networks estimate surface normals from event-based polarization data with advanced quality at low energy consumption?

In this paper, we investigate event-based shape from polarization with a spiking approach to answer the above question. Specifically, inspired by the feed-forward UNet <cit.> for event-based shape from polarization <cit.>, we propose the Single-Timestep Spiking UNet, which treats event-based shape from polarization as a non-temporal task. This model processes event-based inputs in a feed-forward manner, where each spiking neuron in the model updates its membrane potential only once. Although this approach may not maximize the temporal processing capabilities of SNNs, it significantly reduces the computational and energy requirements. To further exploit the rich temporal information from event-based data and enhance model performance on event-based shape from polarization, we propose the Multi-Timestep Spiking UNet. This model processes inputs in a sequential, timestep-by-timestep fashion, allowing each spiking neuron to utilize its temporal recurrent neuronal dynamics to more effectively extract information from event data. We extensively evaluate the proposed models on a synthetic dataset and a real-world dataset for event-based shape from polarization. The results of these experiments, both quantitative and qualitative, indicate that our models are capable of estimating dense surface normals from polarization events with performance comparable to current state-of-the-art ANN models. Additionally, we perform ablation studies to assess the impact of various design components within our models, further validating their effectiveness. Furthermore, our models exhibit superior energy efficiency compared to their ANN counterparts, which highlights their potential for application on neuromorphic hardware and energy-constrained edge devices.

The remainder of this paper is structured as follows: Section II provides a comprehensive review of existing literature on shape from polarization and SNNs. In Section III, we detail our proposed SNN models for event-based shape from polarization, including their structures, training protocols, and implementation details. Section IV showcases the effectiveness and energy efficiency of our proposed models on different benchmark datasets. The paper concludes with Section V, where we summarize our findings and outline potential avenues for future research.

§ RELATED WORK

In the following, we first give an overview of the related work on shape from polarization, covering both traditional and event-based shape from polarization. We then give a comprehensive review of SNNs and their applications in 3D scenes.

§.§ Shape from Polarization

Shurcliff proposed the method of shape recovery from polarization information in 1962 <cit.>. Essentially, when unpolarized light reflects off a surface point, it becomes partially polarized. The observed scene radiance varies as the polarizer angle changes, which encodes a relationship with the surface normal.
Therefore, by analyzing this relationship at each surface point through the Fresnel equations <cit.>, shape from polarization methods can measure the azimuthal and zenithal angles at each pixel and recover the per-pixel surface normal at high resolution. Generally, two schemes are utilized to collect polarization images. One is Division of Time (DoT) <cit.>, which provides full-resolution polarization images but increases the acquisition time significantly, while the other is Division of Focal Plane (DoFP) <cit.>, which trades off spatial resolution for low latency. After collecting the polarization images, various physics-based or learning-based methods <cit.> can be utilized to estimate the surface normals. However, since a linear polarizer cannot distinguish between polarized light that is rotated by π radians, this results in two confounding estimates of the azimuth angle at each pixel <cit.>. To resolve this ambiguity, estimation methods have to be carefully designed by exploring additional constraints from various sources, such as geometric cues <cit.>, spectral cues <cit.>, photometric cues <cit.>, or priors learned via deep learning techniques <cit.>.

Recently, with the prevalence of bio-inspired neuromorphic engineering, researchers have begun to shift their focus to high-speed, energy-efficient event cameras and have proposed solutions that combine polarization information with event cameras. Specifically, inspired by the polarization vision in the mantis shrimp eye <cit.>, <cit.> proposed the PDAVIS polarization event camera. The researchers employed the DoFP scheme to design this camera, fabricating an array of pixelated polarization filters and strategically positioning them atop the sensor of an event camera. While this camera is adept at capturing high dynamic range polarization scenes at high speed, it still faces challenges with low spatial resolution, a common issue inherent in DoFP methods. To bridge the high resolution of DoT with the low latency of DoFP, <cit.> adopted the DoT scheme and collected polarization events by placing a rotating polarizing filter in front of an event camera. Due to the high resolution of DoT and the low latency of event cameras, this method facilitates shape from polarization at high speed and with high spatial resolution. Typically, the captured polarization events are transformed into frame-like event representations <cit.>, which are then processed using ANN models <cit.> to estimate surface normals. While these learning-based methods demonstrate superior performance over traditional physics-based methods, they significantly increase the energy consumption of the overall system, primarily due to the lower energy efficiency of ANNs. By processing event polarization data collected with the promising DoT scheme, this paper aims to address this challenge by conducting event-based shape from polarization using SNNs, presenting a more energy-efficient alternative in this domain.

§.§ Spiking Neural Networks

With the development of ANNs, artificial intelligence models today have demonstrated extraordinary abilities in many tasks, such as computer vision, natural language processing, and robotics. Nevertheless, ANNs only mimic the brain's architecture in a few aspects, including vast connectivity and structural and functional organizational hierarchy <cit.>. The brain has more information processing mechanisms, like neuronal and synaptic functionality <cit.>.
Moreover, ANNs are much more energy-consuming than human brains. To integrate more brain-like characteristics and make artificial intelligence models more energy-efficient, researchers have proposed SNNs, which can be executed on power-efficient neuromorphic processors like TrueNorth <cit.> and Loihi <cit.>. Like ANNs, SNNs are capable of implementing common network architectures, such as convolutional and fully-connected layers, yet they distinguish themselves by utilizing spiking neuron models <cit.>, such as the Leaky Integrate-and-Fire (LIF) model <cit.> and the Spike Response Model (SRM) <cit.>. Due to the non-differentiability of these spiking neuron models, training SNNs can be challenging. However, progress has been made through innovative approaches such as converting pre-trained ANNs to SNNs <cit.> and developing methods that approximate the derivative of the spike function <cit.>. Thanks to the development of these optimization techniques, several models have recently been proposed to tackle complex tasks in 3D scenes. Notably, StereoSpike <cit.> and MSS-DepthNet <cit.> have pioneered the development of deep SNNs for depth estimation, achieving performance on par with state-of-the-art ANN models. Additionally, SpikingNeRF <cit.> has successfully adapted SNNs for radiance field reconstruction, yielding synthesis quality comparable to ANN baselines while maintaining high energy efficiency. In this paper, our emphasis is on employing SNNs to tackle event-based shape from polarization, aiming to establish a method that is not only effective but also more efficient for event-based surface normal estimation.

§ METHODS

In this paper, we focus on building SNNs to estimate surface normals through the use of a polarizer paired with an event camera. In this setup, the polarizer is mounted in front of the event camera and rotates at a constant high speed driven by a motor. This rotation changes the illumination of the incoming light. Event cameras generate an asynchronous event e_i = (x_i, y_i, t_i, p_i) when the illumination variation at a given pixel reaches a given contrast threshold C:

L(x_i, y_i, t_i) - L(x_i, y_i, t_i - Δt_i) = p_i C,

where L ≐ log(I) is the log photocurrent ("brightness"), p_i ∈ {-1, +1} is the sign of the brightness change, and Δt_i is the time since the last event at pixel (x_i, y_i). The surface normal vector can be represented by its azimuth angle α and zenith angle θ in a spherical coordinate system, and the proposed models predict the surface normal 𝐍 as a 3-channel tensor 𝐍 = (sinθ cosα, sinθ sinα, cosθ) from the event stream.

§.§ Input Event Representation

To ensure a fair comparison between our proposed methods and those utilizing ANNs for event-based shape from polarization, we transform the sparse event stream into frame-like event representations, which serve as the input for our methods. Specifically, similar to <cit.>, we adopt the CVGR-I representation due to its superior performance. The CVGR-I representation combines the Cumulative Voxel Grid Representation (CVGR) with a single polarization image (I) taken at a polarizer angle of 0 degrees. The CVGR is a variation of the voxel grid <cit.>. Similar to previous works on learning with events <cit.>, the CVGR first encodes the events in a spatio-temporal voxel grid V. Specifically, the time domain of the event stream is equally discretized into B temporal bins indexed by integers in the range [0, B-1].
Each event e_i = (x_i, y_i, t_i, p_i) distributes its sign value p_i to the two closest spatio-temporal voxels as follows:

V(x, y, t) = ∑_{x_i = x, y_i = y} p_i max(0, 1 - |t - t_i^*|), with t_i^* = (B-1)/ΔT (t_i - t_0),

where (x, y, t) is a specific location of the spatio-temporal voxel grid V, ΔT is the time domain of the event stream, and t_0 is the timestamp of the initial event in the event stream. Then, the CVGR calculates the cumulative sum across the bins and multiplies this total by the contrast threshold:

E(x, y, b) = C ∑_{i=0}^{b} V(x, y, i), b ∈ {0, 1, 2, 3, ..., B-1}.

Finally, to enhance surface normal estimation in areas with insufficient event information, a single polarization image captured at a polarizer angle of 0 degrees is incorporated, resulting in E = I[0] + E, thereby providing additional context. This resulting event representation E serves as the input to our models. Its dimensions are B × H × W, where H and W represent the height and width of the event camera, respectively. We present a concrete input example of "cup" in Fig. <ref>.

§.§ Spiking Neuron Models

Spiking neuron models are mathematical descriptions of specific cells in the nervous system, and they are the basic building blocks of SNNs. In this paper, we primarily concentrate on the Integrate-and-Fire (IF) model <cit.> to develop our proposed SNNs. The IF model is one of the earliest and simplest spiking neuron models. The dynamics of IF neuron i are defined as:

u_i(t) = u_i(t-1) + ∑_j w_ij x_j(t),

where u_i(t) represents the internal membrane potential of neuron i at time t, u_i(t-1) is the membrane potential of neuron i at the previous timestep t-1, and ∑_j w_ij x_j(t) is the weighted summation of the inputs from pre-synaptic neurons at the current time step t. When u_i(t) exceeds a certain threshold u_th, the neuron emits a spike, resets its membrane potential to u_reset, and then accumulates u_i(t) again in subsequent time steps. In addition to the IF model, we also build our proposed models with the Leaky Integrate-and-Fire (LIF) model <cit.>. Compared to the IF model, the LIF model contains a leaky term to mimic the diffusion of ions through the membrane. The dynamics of LIF neuron i can be expressed as:

u_i(t) = α u_i(t-1) + ∑_j w_ij x_j(t),

where α is a leaky factor that decays the membrane potential over time. Drawing inspiration from previous work <cit.>, we also construct models using the Parametric Leaky Integrate-and-Fire (PLIF) model, which enables automatic learning of the leaky factor. In our experiments, we demonstrate that the IF model can offer better performance as it retains more information by not incorporating the leaky factor, thus striking a balance between high performance and biological plausibility.
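For concreteness, the following minimal PyTorch sketch implements one membrane-potential update for the IF and LIF models above. The threshold u_th = 1 and reset value u_reset = 0 match the experimental settings reported later, while the leaky factor α = 0.9 is an illustrative assumption (the PLIF variant would instead learn it during training).

```python
import torch

def if_step(u_prev, weighted_input, u_th=1.0, u_reset=0.0):
    """One IF update: integrate the weighted input, spike where the
    membrane potential crosses u_th, then hard-reset those neurons."""
    u = u_prev + weighted_input
    mask = u >= u_th
    spikes = mask.float()
    u = torch.where(mask, torch.full_like(u, u_reset), u)
    return spikes, u

def lif_step(u_prev, weighted_input, alpha=0.9, u_th=1.0, u_reset=0.0):
    """One LIF update: identical to IF, except the previous potential
    decays by the leaky factor alpha before integration."""
    u = alpha * u_prev + weighted_input
    mask = u >= u_th
    spikes = mask.float()
    u = torch.where(mask, torch.full_like(u, u_reset), u)
    return spikes, u
```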
§.§ SNNs for Event-based Shape from Polarization

In this section, we propose two SNNs that take the CVGR-I event representation as input and estimate the surface normals 𝐍. Both of them process information through the spiking neuron models mentioned above. Given the potential of IF neurons in event-based shape from polarization, we present the proposed models based on the dynamics of IF neurons.

§.§.§ Single-Timestep Spiking UNet

In this work, we have chosen a UNet <cit.>, an architecture commonly utilized in semantic segmentation, as the backbone for surface normal estimation. Specifically, we propose the Single-Timestep Spiking UNet, as shown in Fig. <ref>. This model is composed of several key components: an event encoding module, an encoder, a decoder, and a final layer dedicated to making surface normal predictions. As a single-timestep feed-forward SNN, this model processes the entire B × H × W CVGR-I representation as its input and updates the membrane potential of its spiking neurons once per data sample. The event encoding module utilizes two spiking convolutional layers to transform the real-valued B × H × W CVGR-I representation into a binary spiking representation of size N_c × H × W. Based on Eq. <ref>, the membrane potential u_i and output spiking state o_i of IF neuron i in a spiking convolutional layer are given by:

u_i = Conv(X), with o_i = 1 if u_i ≥ u_th and o_i = 0 otherwise,

where Conv(X) is the weighted convolutional summation of the inputs from previous layers, and the time index t in Eq. <ref> is dropped since the model only updates once. After spiking feature extraction, N_e encoder blocks encode the spiking representation. Each encoder employs a max pooling layer and multiple spiking convolutional layers to capture surface normal features; the neuronal dynamics of the IF neurons in these layers are still governed by Eq. <ref>. The encoded features are subsequently decoded using N_d decoder blocks, where N_d = N_e. Since transposed convolutions are often associated with checkerboard artifacts <cit.>, each decoder consists of an upsampling layer followed by multiple spiking convolutional layers, where the IF neurons are again governed by Eq. <ref>. For the upsampling operations, we have two options: nearest neighbor upsampling and bilinear upsampling. Through our experiments, we show that nearest neighbor upsampling achieves performance comparable to bilinear upsampling in event-based surface normal estimation while preserving the fully spiking nature of our proposed model. As suggested in the UNet architecture, to address the challenge of information loss during down-sampling and up-sampling, skip connections are utilized between corresponding encoder and decoder blocks at the same hierarchical levels. To preserve the spiking nature and avoid introducing non-binary values, the proposed model uses concatenations as skip connections. Lastly, the final prediction layer employs potential-assisted IF neurons <cit.> to estimate the surface normals. Unlike traditional IF neurons, which generate spikes based on Eq. <ref>, potential-assisted IF neurons are non-spiking neurons whose output is their membrane potential:

u_i = Conv(X), o_i = u_i,

where o_i denotes the real-valued output of neuron i. These potential-assisted dynamics can be extended to both LIF and PLIF neurons, facilitating the construction of a Single-Timestep Spiking UNet using these types of neurons. By producing real-valued membrane potential outputs, potential-assisted neurons retain rich information that enhances surface normal estimation and boosts the expressivity of SNNs, especially for large-scale regression tasks.
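The two layer types just described can be sketched in PyTorch as follows. This is a minimal illustration rather than the exact implementation: the kernel size and the BatchNorm placement (normalization after each convolution, as noted in the experimental setup) are assumptions, and only the forward pass is shown — the surrogate gradient used for training is described in the training subsection below.

```python
import torch.nn as nn

class SpikingConv(nn.Module):
    """Spiking convolutional layer: Conv -> norm -> one IF update (single timestep)."""
    def __init__(self, in_ch, out_ch, u_th=1.0):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                  nn.BatchNorm2d(out_ch))
        self.u_th = u_th

    def forward(self, x):
        u = self.conv(x)                  # membrane potential u_i = Conv(X)
        return (u >= self.u_th).float()   # binary spikes o_i

class PotentialAssistedHead(nn.Module):
    """Final prediction layer: non-spiking neurons that output their
    membrane potential directly (o_i = u_i), one channel per normal axis."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 3, kernel_size=1)

    def forward(self, x):
        return self.conv(x)               # real-valued potentials, no thresholding
```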
§.§.§ Multi-Timestep Spiking UNet

To take advantage of the temporal neuronal dynamics of spiking neurons and extract rich temporal information from event-based data, we propose the Multi-Timestep Spiking UNet for event-based shape from polarization. Figure <ref> shows its network structure. Similar to the Single-Timestep Spiking UNet, the Multi-Timestep Spiking UNet consists of an event encoding module, an encoder, a decoder, and a final surface normal prediction layer. However, unlike the Single-Timestep Spiking UNet, which processes the CVGR-I representation as a whole and updates the membrane potential of its spiking neurons only once per data sample, the Multi-Timestep Spiking UNet processes the B × H × W CVGR-I representation of each data sample along its temporal dimension B. At each time step, a 1 × H × W slice of the CVGR-I representation is fed into the event encoding module and transformed to a size of N_c × H × W, followed by N_e encoder blocks, N_d decoder blocks, and a final prediction layer. Based on Eq. <ref>, the membrane potential u_i(t) and output spiking state o_i(t) of IF neuron i in the spiking convolutional layers of the Multi-Timestep Spiking UNet are given by:

u_i(t) = u_i(t-1)(1 - o_i(t-1)) + Conv(X(t)), with o_i(t) = 1 if u_i(t) ≥ u_th and o_i(t) = 0 otherwise,

where Conv(X(t)) is the weighted convolutional summation of the inputs from previous layers at time step t. The final prediction layer continues to use potential-assisted IF neurons, but with temporal dynamics as outlined below:

u_i(t) = u_i(t-1) + Conv(X(t)), o_i(t) = u_i(t),

where the potential-assisted IF neuron i accumulates its membrane potential to maintain rich temporal information, o_i(t) is the output of neuron i at time step t, and we use the outputs at the last time step as the final surface normal predictions.

§.§ Training and Implementation Details

We normalize the outputs of the spiking neurons into unit-length surface normal vectors 𝐍̂ and then apply the cosine similarity loss function:

ℒ = 1/(H × W) ∑_i^H ∑_j^W (1 - <𝐍̂_{i,j}, 𝐍_{i,j}>),

where <·,·> indicates the dot product, 𝐍̂_{i,j} refers to the estimated surface normal at pixel location (i, j), and 𝐍_{i,j} denotes the ground truth surface normal at the same location. The objective is to minimize this loss, which is achieved when the orientations of 𝐍̂_{i,j} and 𝐍_{i,j} align perfectly. To optimize the Single-Timestep Spiking UNet, we utilize backpropagation <cit.> to calculate the weight updates:

Δw^l = ∂ℒ/∂w^l = (∂ℒ/∂o^l)(∂o^l/∂u^l)(∂u^l/∂w^l),

where w^l is the weight of layer l, o^l is the output of the spiking neurons in layer l, and u^l is the membrane potential of the spiking neurons in layer l. Similarly, to optimize the Multi-Timestep Spiking UNet, we utilize BackPropagation Through Time (BPTT) <cit.>. In BPTT, the model is unrolled over all discrete time steps, and the weight update is computed as the sum of gradients from each time step:

Δw^l = ∑_{t=0}^{B-1} (∂ℒ/∂o_t^l)(∂o_t^l/∂u_t^l)(∂u_t^l/∂w^l),

where o_t^l and u_t^l are the output and membrane potential of the spiking neurons in layer l at time step t. Based on the Heaviside step functions in Eq. <ref> and Eq. <ref>, both ∂o^l/∂u^l and ∂o_t^l/∂u_t^l are non-differentiable in the spiking convolutional layers. To overcome this non-differentiability, we use the differentiable ArcTan function g(x) = (1/π)arctan(πx) + 1/2 as the surrogate function for the Heaviside step function <cit.>, as sketched below. For the final prediction layer with potential-assisted spiking neurons, since they output membrane potential instead of spikes, we have ∂o^l/∂u^l = 1 and ∂o_t^l/∂u_t^l = 1 for these layers' weight updates.
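A minimal PyTorch sketch of the ArcTan surrogate and the cosine similarity loss follows. Centering the surrogate at the firing threshold is a common convention and an assumption here; the derivative used in the backward pass is g'(x) = 1/(1 + (πx)²).

```python
import math
import torch

class ArcTanSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; ArcTan surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, u, u_th=1.0):
        ctx.save_for_backward(u)
        ctx.u_th = u_th
        return (u >= u_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # g(x) = arctan(pi * x) / pi + 1/2  =>  g'(x) = 1 / (1 + (pi * x)^2)
        x = u - ctx.u_th
        return grad_out / (1.0 + (math.pi * x) ** 2), None

def cosine_similarity_loss(pred, gt, eps=1e-8):
    """L = mean over pixels of (1 - <N_hat, N>), with predictions unit-normalized.
    pred, gt: tensors of shape (batch, 3, H, W)."""
    pred = pred / (pred.norm(dim=1, keepdim=True) + eps)
    return (1.0 - (pred * gt).sum(dim=1)).mean()
```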
§ EXPERIMENTS AND RESULTS

In this section, we evaluate the effectiveness and efficiency of our proposed SNN models on event-based shape from polarization. We begin by introducing the experimental setup, datasets, baselines, and performance metrics. Extensive experiments on these datasets then showcase the capabilities of our models, both quantitatively and qualitatively, across synthetic and real-world scenarios. Lastly, we analyze the computational costs of our models to highlight their enhanced energy efficiency compared to their ANN counterparts.

§.§ Experimental Setup

Our models are implemented with SpikingJelly <cit.>, an open-source deep learning framework for SNNs based on PyTorch <cit.>. To compare fairly with the counterpart ANN models, we ensure our models use similar settings to the ANN models in <cit.>. Specifically, we set B = 8 for the input event representation. In addition, our models have N_e = 4 encoder blocks and N_d = 4 decoder blocks, and the event encoding module outputs a binary spiking representation with a channel size of N_c = 64. For the spiking-related settings, all spiking neurons in the spiking convolutional layers are set with a reset value (u_reset) of 0 and a threshold value (u_th) of 1. Following <cit.>, normalization techniques are applied after each convolution (Conv) operation for faster convergence. We train our models for 1000 epochs with a batch size of 2 on a Quadro RTX 8000 GPU, using the Adam optimizer <cit.> with a learning rate of 1e-4.

§.§ Datasets

We evaluate our proposed models on the two most recent large-scale datasets for event-based shape from polarization: the ESfP-Synthetic Dataset and the ESfP-Real Dataset. The ESfP-Synthetic Dataset was generated using the Mitsuba renderer <cit.>, which created scenes with textured meshes illuminated by a point light source. For each scene, a polarizer lens positioned in front of the camera was rotated through angles ranging from 0 to 180 degrees at 15-degree intervals, producing a total of 12 polarization images. From these images, events were simulated using ESIM <cit.> with a 5% contrast threshold. Each scene in the dataset is therefore accompanied by rendered polarization images, simulated events, and ground truth surface normals provided by the renderer. The ESfP-Real Dataset is the first large-scale real-world dataset for event-based shape from polarization. It contains various scenes with different objects, textures, shapes, illuminations, and scene depths. The dataset was collected using a Prophesee Gen 4 event camera <cit.>, a Breakthrough Photography X4 CPL linear polarizer <cit.>, a Lucid Polarisens camera <cit.>, and a laser point projector. Specifically, the polarizer rotated in front of the event camera, which captured the events for each scene in the dataset. The Lucid Polarisens camera was used to collect polarization images of the same scene at 4 polarization angles {0, 45, 90, 135}, and the ground truth surface normals were generated using Event-based Structured Light <cit.>, a technique that integrates the laser point projector with the event camera.

§.§ Baselines and Performance Metrics

We evaluate our models against state-of-the-art physics-based and learning-based methods in the field of shape from polarization. Smith et al. <cit.> combine physics-based shape from polarization with a photometric image formation model; their method directly estimates lighting information and calculates the surface height using a single polarization image under unknown illumination. Mahmoud et al.
<cit.> present a physics-based method for shape recovery using both polarization and shading information. Recently, Muglikar et al. <cit.> pioneered event-based shape from polarization, employing both physics-based and learning-based approaches; their models are notable for directly using event data as inputs. In this paper, our focus is on comparing our proposed models with the learning-based model developed by Muglikar et al., aiming to demonstrate that our SNN-based models can match their performance while offering greater energy efficiency. To evaluate the accuracy of the predicted surface normals, we employ four metrics: Mean Angular Error (MAE), % Angular Error under 11.25 degrees (AE<11.25), % Angular Error under 22.5 degrees (AE<22.5), and % Angular Error under 30 degrees (AE<30). MAE is a commonly used metric that quantifies the angular error of the predicted surface normal, where a lower value indicates better performance <cit.>. The latter three metrics, collectively referred to as angular accuracy, assess the proportion of pixels with angular errors below 11.25, 22.5, and 30 degrees, respectively, with higher percentages indicating better accuracy <cit.>.

§.§ Performance on ESfP-Synthetic

We thoroughly evaluate our proposed models on the ESfP-Synthetic Dataset, using both quantitative metrics and qualitative analysis. Table <ref> presents the performance of both the baselines and our methods in surface normal estimation on the ESfP-Synthetic Dataset, and Fig. <ref> showcases the qualitative results of our models and the ANN counterpart. From Table <ref>, we can see that our proposed models significantly outperform the physics-based methods. This improvement stems from our models' ability to learn from the large-scale dataset and to exploit spiking neurons to extract information useful for event-based shape from polarization. Despite this success, our models do not quite match the overall performance of their ANN counterpart on this dataset, likely due to the limited representation capacity of spiking neurons. However, as Fig. <ref> illustrates, our Multi-Timestep Spiking UNets still achieve comparable, and in some cases superior, results in shape recovery across various objects in the test set, compared to the ANN models. Table <ref> clearly demonstrates that the temporal dynamics inherent in spiking neurons enable the Multi-Timestep Spiking UNets to surpass the Single-Timestep versions in surface normal estimation. Additionally, nearest neighbor upsampling shows performance comparable to bilinear upsampling while preserving the binary nature of, and compatibility with, SNNs. Recognizing the effectiveness of the Multi-Timestep Spiking UNets, we undertake an ablation study to identify the spiking neurons that best leverage their temporal dynamics. The results, detailed in Table <ref>, indicate that IF neurons offer superior performance, largely due to their ability to retain more extensive temporal information, as they operate without the influence of a leaky factor.

§.§ Performance on ESfP-Real

We also compare these methods on the ESfP-Real Dataset, showing the quantitative performance in Table <ref> and the qualitative results in Fig. <ref>. Similar to the results on the ESfP-Synthetic Dataset, our models demonstrate superior performance compared to physics-based methods on the real-world dataset.
Moreover, as indicated by Table <ref> and Fig. <ref>, our models not only match the overall performance of the ANN counterpart but also excel in qualitative results across diverse scenes in the test dataset. This enhanced performance on the ESfP-Real Dataset can be attributed to the sparser nature of this real-world dataset <cit.>: compared to the ANN counterpart, our models are more compatible with the sparse events and better maintain this sparsity, which helps prevent overfitting. Mirroring the outcomes observed on the ESfP-Synthetic Dataset, the results in Table <ref> and Fig. <ref> also show that the Multi-Timestep Spiking UNet slightly outperforms the Single-Timestep Spiking UNet, and that nearest neighbor upsampling is on par with bilinear upsampling in terms of surface normal estimation performance.

§.§ Energy Analysis

In earlier sections, we demonstrated that our models employing nearest neighbor upsampling achieve performance comparable to those using bilinear upsampling in event-based shape from polarization. To delve deeper into the advantages of these fully spiking models, we now estimate the computational cost savings they offer compared to their fully ANN counterpart <cit.> on the ESfP-Real Dataset. Commonly, the number of synaptic operations serves as a benchmark for assessing the computational energy of SNN models, as referenced in studies like <cit.> and <cit.>. Moreover, the total energy consumption of a model can be approximated using principles based on CMOS technology, as outlined in <cit.>. Unlike ANNs, which perform real-valued matrix-vector multiplication operations regardless of input sparsity, SNNs execute computations in an event-driven fashion, triggered only upon receiving input spikes. Therefore, we first assess the mean spiking rate of each layer l in our proposed models, calculated as:

F^(l) = (1/T) ∑_{t ∈ T} S^(l)_t / K^(l),

where T is the total time length, S^(l)_t is the number of spikes in layer l at time t, and K^(l) is the number of neurons in layer l. Table <ref> shows the mean spiking rates for all layers in our fully spiking models, including the Single-Timestep Spiking UNet and the Multi-Timestep Spiking UNet. Note that we do not consider components without trainable weights, such as max pooling and nearest neighbor upsampling layers. From the table, we can see that the Multi-Timestep Spiking UNet exhibits a higher average spiking rate across all its layers than the Single-Timestep Spiking UNet. This increased spiking rate helps preserve more information, thereby enhancing the accuracy of surface normal estimation. With the mean spiking rates, we can estimate the number of synaptic operations in the SNNs. Given that M is the number of neurons, C is the number of synaptic connections per neuron, and F is the mean spiking rate, the number of synaptic operations at each time step in layer l is M^(l) × C^(l) × F^(l), and the total number of synaptic operations in an SNN is:

#OP = ∑_l M^(l) × C^(l) × F^(l) × T.

In contrast, the total number of synaptic operations in an ANN is ∑_l M^(l) × C^(l). Due to the binary nature of spikes, SNNs perform only an accumulation (AC) per synaptic operation, while ANNs perform multiply-accumulate (MAC) computations since the operations are real-valued. Based on these, we estimate the number of synaptic operations in our proposed models and their ANN counterpart.
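Before turning to the measured numbers, the following Python sketch illustrates how the mean spiking rates translate into operation counts and an energy estimate, as defined above. The per-operation energies are the 45 nm CMOS figures cited in the text; the layer statistics at the bottom are purely hypothetical placeholders.

```python
E_AC, E_MAC = 0.9e-12, 4.6e-12  # joules per op, 32-bit floats, 45 nm CMOS (see text)

def mean_spiking_rate(spike_counts_per_t, num_neurons):
    """F^(l): spikes per neuron per timestep, averaged over the T timesteps."""
    T = len(spike_counts_per_t)
    return sum(spike_counts_per_t) / (T * num_neurons)

def snn_energy(layers, T):
    """SNN cost: AC operations only, #OP = sum_l M^(l) * C^(l) * F^(l) * T."""
    return sum(M * C * F * T for (M, C, F) in layers) * E_AC

def ann_energy(layers):
    """ANN cost: one MAC per synapse, regardless of input sparsity."""
    return sum(M * C for (M, C, _) in layers) * E_MAC

# Hypothetical two-layer statistics: (M neurons, C synapses/neuron, rate F).
layers = [(64 * 256 * 256, 576, 0.12), (128 * 128 * 128, 1152, 0.08)]
print(ann_energy(layers) / snn_energy(layers, T=8))  # energy-saving ratio
```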
Table <ref> illustrates that, in comparison to ANNs, our models primarily perform AC operations, with only a small number of MAC operations used to transform the real-valued event inputs into binary spiking representations. Furthermore, the Multi-Timestep Spiking UNet executes more AC operations than the Single-Timestep Spiking UNet due to its higher average spiking rate and its use of temporal dynamics across multiple timesteps. In general, an AC operation is considered significantly more energy-efficient than a MAC: for example, an AC is reported to be 5.1× more energy-efficient than a MAC for 32-bit floating-point numbers (0.9 pJ vs. 4.6 pJ, 45 nm CMOS process) <cit.>. Based on this principle, we obtain the computational energy benefits of SNNs over ANNs in Table <ref>, which shows that the SNN models are 3.14× to 28.80× more energy-efficient than ANNs on the ESfP-Real Dataset. These results are consistent with the fact that sparse spike communication and event-driven computation underlie the efficiency advantage of SNNs, and they demonstrate the potential of our models on neuromorphic hardware and energy-constrained devices.

§ CONCLUSION AND FUTURE WORK

In this work, we explore the domain of event-based shape from polarization with SNNs. Drawing inspiration from the feed-forward UNet, we introduce the Single-Timestep Spiking UNet, which processes event-based shape from polarization as a non-temporal task, updating the membrane potential of each spiking neuron only once. This method, while not fully leveraging the temporal capabilities of SNNs, significantly cuts down on computational and energy demands. To better harness the rich temporal data in event-based information, we also propose the Multi-Timestep Spiking UNet, which operates sequentially across multiple timesteps, enabling spiking neurons to employ their temporal recurrent neuronal dynamics for more effective data extraction. Through extensive evaluation on both synthetic and real-world datasets, our models demonstrate their ability to estimate dense surface normals from polarization events, achieving results comparable to those of state-of-the-art ANN models. Moreover, our models present enhanced energy efficiency over their ANN counterparts, underscoring their suitability for neuromorphic hardware and energy-sensitive edge devices. This research not only advances the field of spiking neural networks but also opens up new possibilities for efficient and effective event-based shape recovery in various applications. Building on this foundation, future work could focus on several promising directions. One key area is the further optimization of SNN architectures to enhance their ability to process complex, dynamic scenes, potentially by integrating more sophisticated temporal dynamics or learning algorithms. Additionally, exploring the integration of our models with other sensory data types, like depth information, could lead to more robust and versatile systems. Moreover, adapting these models for real-time applications in various fields, from autonomous vehicles to augmented reality, presents an exciting challenge. Finally, there is significant potential in further reducing the energy consumption of these networks, making them even more suitable for deployment in low-power, edge computing scenarios. Through these explorations, we can continue to push the boundaries of what is possible with SNNs in event-based sensing and beyond.
§ ACKNOWLEDGEMENT

We are grateful to Chenghong Lin for her proofreading and advice on the writing of this paper.
Arash Dehghan (Toronto Metropolitan University, Toronto, ON, Canada), Mucahit Cevik (corresponding author; Toronto Metropolitan University, Toronto, ON, Canada), Merve Bodur (University of Edinburgh, Edinburgh, UK)

This paper explores the integration of Automated Guided Vehicles (AGVs) in warehouse order picking, a crucial and cost-intensive aspect of warehouse operations. The booming AGV industry, accelerated by the COVID-19 pandemic, is witnessing widespread adoption due to its efficiency, reliability, and cost-effectiveness in automating warehouse tasks. This paper focuses on enhancing the picker-to-parts system, prevalent in small to medium-sized warehouses, through the strategic use of AGVs. We discuss the benefits and applications of AGVs in various warehouse tasks, highlighting their transformative potential in improving operational efficiency. We examine the deployment of AGVs by leading companies in the industry, showcasing their varied functionalities in warehouse management. Addressing the gap in research on optimizing operational performance in hybrid environments where humans and AGVs coexist, our study delves into a dynamic picker-to-parts warehouse scenario. We propose a novel Neural Approximate Dynamic Programming approach for coordinating a mixed team of human and AGV workers, aiming to maximize order throughput and operational efficiency. This involves innovative solutions for non-myopic decision making, order batching, and battery management. We also discuss the integration of advanced robotics technology in automating the complete order-picking process. Through a comprehensive numerical study, our work offers valuable insights for managing a heterogeneous workforce in a hybrid warehouse setting, contributing significantly to the field of warehouse automation and logistics.

Keywords: Warehouse management; Dynamic task allocation; Approximate dynamic programming; Deep learning

§ INTRODUCTION

The Automated Guided Vehicle (AGV) industry is flourishing within the warehousing sector, with over 100 manufacturers competing in this growing market <cit.>. Financial investment in AGVs continues to soar, projected to hit US$2.74 billion by 2023, propelled further by the COVID-19 pandemic, which accelerated investments in this domain <cit.>. A significant 53% of third-party logistics providers see warehouse automation as a prime opportunity for 2023, showcasing a strong trend towards embracing these technologies <cit.>. This pivot towards automation is not only a reaction to persistent labor shortages, but also a strategic step to stay competitive against major retail conglomerates <cit.>. Retail behemoth Walmart anticipates that about 65% of its stores will incorporate automation by 2026, emphasizing the critical role of AGV technology in shaping the retail sector's logistical future <cit.>. This collective transition indicates that warehouse automation, especially through AGV adoption, is swiftly becoming an industry norm, addressing modern logistics and supply chain challenges effectively. The adoption of AGVs is driven by their significant advantages, which include improved efficiency that speeds up warehouse operations, as well as consistent and reliable workflow management, which reduces human error and enhances predictability <cit.>.
Financially, they provide notable direct labor cost reductions and indirect savings through improved accuracy and speed, while safety enhancements and reductions in workplace accidents bolster their value proposition further <cit.>. Furthermore, the agility of AGVs in adapting to varied tasks, their scalability for business growth, their contribution to better space utilization, and the relative ease of integration into existing systems fortify their appeal <cit.>. AGVs also enable a more strategic allocation of labor, allowing workers to focus on more complex or customer-centric tasks <cit.>. Consequently, the breadth of AGV applications within warehouses continues to expand, encompassing tasks such as item delivery, stock replenishment, order picking, and loading and unloading, all of which are essential to efficient warehouse operations <cit.>. This wide-ranging functionality of AGVs showcases their transformative potential and their rising status as a staple in warehouse management and logistics. Our work investigates the role of AGVs in the order picking aspects of warehouse operations, in which individual items are selected and collected from storage locations to fulfill customer orders. Addressing this aspect of warehouse operations is crucial, as it is the most expensive and time-consuming: the picking process accounts for up to 55% of total warehouse operating costs <cit.> and as much as 60% of all labor activities within the warehouse <cit.>. Two common methods exist in manual order picking: picker-to-parts systems, where workers travel to item locations, collect them, and move to the next location; and parts-to-picker systems, which bring goods from storage to designated picking areas for worker selection. Our paper primarily focuses on picker-to-parts systems, as they are employed by a vast majority of small to medium-sized retail and logistics firms <cit.>, and up to 90% of warehouses in the grocery sector <cit.>. The integration of robotics into order picking operations presents a compelling opportunity for enhancing operational efficiency. Manual picker-to-parts tasks are among the most laborious in warehouses, often resulting in musculoskeletal disorders, low back pain, and other physical ailments which can diminish the efficiency of the picking system <cit.>. Implementing robots not only addresses these human-centric issues but also offers a substantial reduction in costs related to human labor, such as insurance, salaries, and benefits <cit.>.
Numerous studies have explored the integration of Autonomous Mobile Robots (AMRs) in the picking process to aid human pickers with the transportation of items within a hybrid environment <cit.>. In such setups, humans are responsible for picking the items from shelves, while AMRs are utilized solely for transportation, thereby automating only a portion of the picking process. However, thanks to advancements in object picking technology, there are now companies which automate the entirety of the robot-to-parts process. For instance, Magazino's TORU <cit.> is a sophisticated logistics robot that adeptly navigates to shelves to retrieve items directly or to pull cartons toward itself. When dealing with mixed cartons, its integrated 2D and 3D cameras employ 3D computer vision to scan the shelf contents. After matching the items with its database, it selectively extracts the targeted goods, which are then stored internally in the robot's adaptable compartments. Equipped with numerous sensors, TORU safely operates alongside humans, enabling the flexible automation of tasks that were previously done manually. This technology can reduce picking costs by up to 40% compared to traditional methods. Similarly, Fetch Robotics has introduced its Fetch and Freight robots <cit.>. These machines navigate through ADA-compliant buildings and are equipped with arms capable of reaching down to retrieve items from the floor, ensuring efficient item recovery. Their comprehensive sensor array allows for object perception, navigation, and manipulation in dynamic settings. As the concept of a hybrid order-picking environment — where robots and humans work together in a shared warehouse space — is relatively new, few studies have explored the optimization of operational performance in such settings. That is, the collaboration between humans and robots in warehouses remains under-explored, leaving several unanswered questions about how to most effectively pair orders with this joint workforce to enhance efficiency. Our research specifically focuses on this emerging paradigm. We explore the coordination of a mixed team of humans and AGVs navigating the warehouse space, with the objective of intelligently pairing incoming orders to these operatives, taking into account the battery management of AGVs. Our specific modeling objective is to enhance operational efficiency and maximize order throughput. To that end, we develop a novel Markov Decision Process (MDP) model and devise a Neural Approximate Dynamic Programming (NeurADP) framework <cit.> to handle efficient batching and assignment of incoming orders while concurrently managing the battery life of the robotic workers. While there has been some recent research in this specific area within the AGV literature, it has mostly concentrated on static, predictable scenarios, often employing basic heuristics or rule-based approaches for assigning incoming orders to human and robot teams, and has also neglected the charging aspects of the robots. A summary of our contributions to the existing literature is as follows. * We formulate the order picking problem as an MDP to account for the uncertainty of stochastic order arrivals in a hybrid environment with human and AGV workers. Our model extends beyond previous research for this problem by being the first to incorporate charging stations within the hybrid warehouse setting and introduce battery management decision-making for AGVs.
* We implement a NeurADP solution methodology, advancing beyond the traditional myopic- and heuristic-based solutions used in previous work. Our experimental results demonstrate that NeurADP significantly outperforms myopic and heuristic-based methods, both in the quantity of orders processed and in the efficiency achieved in executing picking tasks. * We provide managerial insights related to the hybrid order picking setting as well as analysis on the impact of various factors such as the number and types of workers, worker speed, delay time allowance, worker capacity, and the availability of orders for both humans and AGVs, along with the incorporation of order deadlines. The remainder of the paper is organized as follows. Section <ref> provides a comprehensive review of the relevant AGV literature, better positioning our research within the existing body of work. Section <ref> provides a formal description of our problem setting. In Section <ref>, we describe the solution methodology. Details regarding the datasets and benchmark policies used in the experiments are provided in Section <ref>. The results of the computational experiments are presented in Section <ref>, followed by a conclusion in Section <ref> that summarizes the research findings and suggests avenues for future research.

§ LITERATURE REVIEW

AGV control problems may be broken down into five core tasks: task allocation, where the goal is to optimally assign a set of tasks to a set of AGVs; localization, where the goal is to determine the vehicle's exact position on a map; path planning, which seeks to generate an obstacle-free path between two locations; motion planning, which requires real-time modification of a planned path according to dynamic obstacles; and vehicle management, which focuses on the management of vehicles' battery, error, and maintenance statuses. Our research particularly relates to the task allocation and vehicle management aspects. Battery management is an important part of AGV operations. While many studies highlight the importance of integrating battery management into AGV decision-making, emphasizing its significant influence on system performance <cit.>, the literature still largely overlooks this area <cit.>. <cit.> investigate the importance of battery management in AGV systems for reducing costs and improving efficiency. They specifically focus on Valve-regulated Lead-Acid batteries, commonly used in AGVs, highlighting the need for appropriate charging intervals to prevent battery deterioration and extend battery life. Similarly, <cit.> examine the effects of different routing techniques for battery management on the performance of AGVs and analyze how routing to charging stations can impact the overall productivity of a manufacturing facility. <cit.> study how adjusting the battery charging durations of AGVs can enhance manufacturing capacities in the short term, and <cit.> introduce an advanced decentralized method for optimizing the integration of charging stations into the existing optimal tour routes of AGVs. Different from these studies, which focus primarily on the battery management aspects of AGV operations, our work extends the scope by integrating battery management considerations with order-picking task allocation in a hybrid warehouse setting. Solution approaches for AGV task allocation may be broken down into two categories: optimization-based and market-based.
In optimization-based methods, an algorithm searches a solution space for an optimal solution which maximizes a profit or minimizes a cost, using global information and considering all constraints. In market-based methods, by contrast, an economic principle is used to solve the task allocation problem. In this literature review, we focus on optimization-based methods given that they are most relevant to our work; a breakdown of further market-based solutions may be found in <cit.>. Task allocation problems span many domains, such as manufacturing, healthcare, and robotics, and can be solved using a wide range of methodologies, including exact algorithms <cit.>, heuristics <cit.>, dynamic programming <cit.>, and many others. Several papers have examined task allocation in hybrid warehouse order-picking settings. Recent studies have examined the use of AMRs in hybrid environments to assist human pickers, where humans pick items from shelves and AMRs handle transportation <cit.>. On the other hand, innovations in object picking technology, such as Magazino's TORU robot and Fetch Robotics' Fetch and Freight robots, have enabled full automation of the picking process <cit.>. The idea of a hybrid order-picking environment, combining human and robot collaboration in shared warehouse spaces, is relatively new, leading to a research gap in optimizing operational performance in these contexts. However, recent studies have begun to explore and address this emerging field. We highlight these relevant studies in Table <ref>, which comprises seven indicators that provide information about the problem setting and solution methodology. These are: “Solution Technique”, which describes the approach used to solve the problem; “Non-Myopic”, which indicates whether a non-myopic solution technique is employed; “No Prior Knowledge”, which indicates whether orders are not known at the beginning of each work day; “Deadline”, which indicates whether specific deadlines are set for fulfilling incoming orders; “Shared Area”, which indicates whether human and AGV workers share a workspace in the hybrid setting; “Batching”, which indicates whether orders are able to be batched together; and finally “Charging”, which indicates whether charging decisions and battery management are considered for AGVs. <cit.> introduce a robotic picker designed for pallet retrieval and formulate a strategy for allocating products between two distinct warehouse areas, designated respectively for human workers and robots. This approach involves a dual-objective optimization model aimed at reducing the labor intensity for human workers while simultaneously enhancing the uniformity of product categories assigned to each zone. To achieve these objectives, the authors employ the non-dominated sorting genetic algorithm to balance the minimization of human workload and the maximization of product category similarity within the designated zones. <cit.> develop a simulation model to assess the energy expenditure of human pickers in a collaborative environment with picking robots, analyzing the operational costs, efficiency, and ergonomic impact. The model defines distinct roles for human pickers and robots based on a set of assignment rules that are contingent on different item classes. Under these rules, items belonging to specific classes are allocated either to human pickers or robots, depending on the nature of the item and the predefined criteria.
The study's scenarios are then categorized and evaluated based on varying combinations of these assignment rules, leading to different distributions of tasks between humans and robots. <cit.> investigate the effects of aisle widths and layout variations on human-robot interaction in order picking systems, focusing on enhancing performance efficiency. Their study contrasts zoning strategies with traditional order picking systems, emphasizing the complexity and coordination demands of hybrid systems involving both humans and robots. Their approach uses a heuristic to determine picking routes and assigns tasks to either humans or robots without considering individual traits. <cit.> present a simulation model for evaluating the performance of hybrid order picking systems, incorporating variables such as picker blocking. In their model, order assignments are based on predetermined workloads for each team, categorized by item classes (e.g., A, B, or C) and their turnover rates. Each item is pre-assigned to a specific team according to its class. When an item is required for an order, the designated team member, either a human or a robot, is responsible for picking it. <cit.> propose an agent-based simulation model to explore how hybrid order picking systems can reduce the daily workload of human pickers. Their model operates under the assumption that all customer orders are known before each shift, eliminating idle time due to late-arriving orders. Orders are assigned following a “first-come-first-served” principle, with assignment rules stipulating that human pickers handle `A' items, while robots are responsible for `B' and `C' items. Our work explores the combined use of humans and AGVs in a warehouse to process orders. It extends previous research by not only pairing orders with workers but also by managing AGV charging. Unlike previous heuristic-based methods, we employ a non-myopic NeurADP approach, which has been shown to be effective in such complex scenarios <cit.>. Additionally, our model incorporates order deadlines, and our empirical study includes a detailed sensitivity analysis on various key parameters such as worker types, speeds, delay allowances, capacity, and order availability. As such, our paper provides new managerial insights for effectively operating in this hybrid warehouse setting.

§ PROBLEM DESCRIPTION AND FORMULATION

In our study, we introduce a dynamic picker-to-parts model tailored for a hybrid warehouse environment, integrating both human and AGV workers. This model aims to efficiently allocate workers to incoming order batches and determine optimal charging strategies for AGVs, including the timing and duration of charging sessions. It functions over a 24-hour decision horizon, adapting to the variable demand patterns of orders in a grid-layout warehouse equipped with strategically placed charging stations and a designated drop-off area for picked items. Orders in our system are generated stochastically, each with its own delivery deadline influenced by its arrival time. Upon assignment to a worker, an order becomes an incorporated part of the system, with a guarantee to meet its designated drop-off deadline. Moreover, the model takes into account a predetermined group of heterogeneous workers available during the planning horizon, considering their capacity constraints and, in the case of AGVs, their battery levels. Our model incorporates several key problem specifications related to the hybrid order-picking problem setting.
First, workers may be matched with multiple orders per time-step, wherein each order is associated with its own pick-up location. A worker is then expected to traverse the warehouse to pick up the orders at each of their respective locations and deliver them to the designated drop-off area. Workers maintain a queue of their assigned orders to track which ones need to be picked up and which have been collected, before they are delivered to the drop-off area. The queue is dynamically rearranged whenever a new order or batch is assigned to a worker, optimizing their route within the warehouse for order collection and return to the drop-off area. Once an order is placed in a worker's queue, it cannot be transferred to another worker's queue. Unmatched orders that surpass their arrival period are removed from the system, reflecting the expectation of customers for timely confirmation of their requests. However, new orders arriving in subsequent time steps can be assigned to workers and added to their queues, provided several considerations are taken into account. First, each worker has a designated capacity corresponding to the size of their storage bin, which is used to hold orders. Accordingly, a batch of orders is only eligible to be matched with a worker if the additional capacity assigned to them does not exceed their bin capacity. Furthermore, a batch of orders may only be assigned to a worker and added to their queue if, after adding the batch, the worker is still able to pick up all remaining orders and drop off all orders at the drop-off area prior to each order's deadline. For AGVs, battery levels are additionally a crucial factor in matching them with order batches. Specifically, an AGV is only assigned a batch if it can complete all orders in that batch, as well as those already in its queue, and still reach the nearest charging station before its battery depletes completely. This policy ensures AGVs maintain sufficient charge throughout the operational period. Additionally, it is assumed that AGVs cannot charge while serving orders. Once a decision to charge is made, the AGV will charge uninterrupted until the start of the next decision epoch. The main goal of our model is to maximize the fulfillment of online orders within the decision horizon. We factor in uncertainties of future order arrivals and the downstream effects of current choices, including those related to charging decisions. To manage these complex decisions, we develop an MDP model and incorporate a NeurADP solution framework. This approach facilitates efficient real-time decision-making, even amid uncertainty. We present the MDP model components in Table <ref>. To begin, the planning horizon is divided into discrete time intervals of fixed length (e.g., five minutes), such that decisions are made at the beginning of each interval, while exogenous information, denoted by ω_t, is observed continuously throughout. The state of the system S_t is characterized by the attributes of the workers and orders, defined by W_t and O_t, respectively. An individual worker's state is captured by a five-dimensional attribute vector w. This vector includes w_loc, indicating the worker's position in the warehouse, and w_human, a binary attribute distinguishing between human workers and AGVs. w_cap reflects the worker's current order capacity, w_batt shows their battery level (applicable only to AGVs), and w_queue details the worker's task queue, including assigned orders. This queue, optimized for minimal travel time, is updated with each new batch of orders assigned to the worker.
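For concreteness, these attribute vectors can be sketched as Python dataclasses. The field names are illustrative and mirror the notation above; the order attributes anticipate the description in the next paragraph.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Order:
    """Order attribute vector (described in detail in the next paragraph)."""
    loc: Tuple[int, int]     # storage location in the grid-layout warehouse
    human_only: bool         # True if only a human may handle this order
    deadline: float          # drop-off deadline, arrival time + gamma_o

@dataclass
class Worker:
    """Five-dimensional worker attribute vector."""
    loc: Tuple[int, int]     # current position in the grid layout
    human: bool              # human picker (True) or AGV (False)
    capacity: int            # remaining storage-bin capacity
    battery: float           # battery level; meaningful only for AGVs
    queue: List[Order] = field(default_factory=list)  # assigned, undelivered orders
```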
Moreover, the state of each order is represented by a three-dimensional vector o. This includes o_loc, specifying the order's storage location in the warehouse, o_human, a binary value indicating if the order requires handling exclusively by humans, and o_deadline, the drop-off deadline, calculated as t + γ_o, where γ_o denotes the permissible delay time for order o. The set of feasible decisions may be defined by a_t ∈ A_t(S_t). An individual action a_t for a worker may encompass one of the following: allocating a batch of orders, designating a charging task, or assigning a null action. The assignment of order batches depends on the worker's capacity and ability to deliver both new and existing orders before their deadlines. For AGVs, this also includes the need to have sufficient battery to complete deliveries and potentially reach a charging station before complete depletion. The charging action, specific to AGVs, is only feasible if the AGV is not actively serving orders. When assigned a charging action, the worker heads to the nearest charging station in the warehouse. If they arrive at the charging station before the next decision epoch, they use the remaining time until that epoch to recharge their battery. Conversely, if the worker does not reach the charging station by the next decision epoch, a new decision will be made for them in the subsequent epoch, based on their updated state. Finally, the null action implies that the worker continues their current activity, whether that is progressing with order pick-up and delivery or remaining idle in the absence of new orders. Furthermore, the immediate reward for an action a_t is determined by multiplying the total orders served by a large constant M and then subtracting the time from t until the worker can deliver their assigned orders to the drop-off area. The constant M is introduced to prioritize maximizing the number of orders completed per interval, ensuring this aspect outweighs the time factor. When actions serve an equal number of orders, preference is given to the one that minimizes task completion time, thereby freeing workers sooner for new order batches. M is scaled with the worker's maximum capacity so that the order-count term always dominates the time component. In MDP models, the system evolution involves the transition from an initial state S_0, via a set of actions a_0, to a subsequent state S_1. This recursive evolution continues until the final decision epoch of the horizon at time T. However, as illustrated in Figure <ref>, the system evolution in ADP is more granularly defined to explicitly denote pre-decision and post-decision states, as well as the arrival of exogenous information. Given an initial state S_0 and the arrival of exogenous information ω_0, a set of actions a_0 may be taken at time t = 0, corresponding to subsequent rewards R_0. The system then progresses to a post-decision state, denoted S_0^a, a deterministic function of (S_0, a_0). This represents the system's state after implementing actions on S_0, but before the arrival of new exogenous information in the next time step. The subsequent state, S_1, emerges through the receipt of exogenous uncertainty ω_1 and the state transition function applied to (S_0^a, ω_1). This recursive process repeats up to the final decision epoch at time T. In our order picking problem, given that orders exit the system at the end of each decision epoch, the post-decision state may be represented as S_t^a = (W_t^a, O_t^a = ∅). The state transition is then expressed by S_{t+1} = f(S_t^a, ω_{t+1}), where ω_{t+1} denotes the arrival of orders between t and t+1.
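A one-line sketch of this immediate reward, with M the dominance constant described above:

```python
def immediate_reward(num_served: int, completion_time: float, t: float, M: float) -> float:
    """R_t = M * (#orders served) - (time from t until the worker's queue is delivered)."""
    return M * num_served - (completion_time - t)
```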
Given that V_t(S_t) denotes the value of being in state S_t at decision epoch t, we may define the Bellman optimality equation as:

V_t(S_t) = max{ R_t(a_t) + 𝔼_{ω_{t+1}}[ V_{t+1}(S_{t+1}) | S_t, a_t ] : a_t ∈ A_t(S_t) }.

Incorporating the post-decision state, we may break down and rewrite the Bellman equation as follows:

V_t(S_t) = max{ R_t(a_t) + V_t^a(S_t^a) : a_t ∈ A_t(S_t) },
V_t^a(S_t^a) = 𝔼_{ω_{t+1}}[ V_{t+1}(S_{t+1}) | S_t^a ].

To simplify calculations and avoid the computational burden of examining each possible outcome for future values, Equation (<ref>) formulates a deterministic optimality equation based on the post-decision state, while Equation (<ref>) defines the value of the post-decision state as the expected total of future rewards. Given the impracticality of calculating this expected value, an approximate value of the post-decision state value function is used instead. This approach facilitates a more manageable and efficient estimation of future rewards, streamlining the optimization process in large-scale problems such as the one in this paper. To determine the optimal policy which maximizes the total expected reward, we must maximize the initial value function V_0, given the initial state S_0. As such, the objective of our MDP model can be defined as max V_0(S_0).

§ SOLUTION METHODOLOGY

We adapt the NeurADP framework <cit.> to our problem. NeurADP is a novel ADP-based algorithm designed for large-scale decision-making problems. It employs neural network value function approximations and utilizes deep reinforcement learning techniques for improved stability and efficiency. Furthermore, its ability to learn from integer programming-based assignments to manage complex combinatorial challenges makes it useful for large-scale problems. In what follows, we explain the details of the adaptation of the general NeurADP framework to the dynamic AGV task allocation problem, to derive high-quality approximate solutions to our proposed MDP model. We start with the building blocks and then provide the overall NeurADP algorithm. Given system state S_t = (W_t, O_t), we mainly follow the steps below. * Enumerate the set of feasible order batches for workers: Let Γ_t: W_t → 𝒫²(O_t), where 𝒫 denotes the power set and 𝒫²(·) = 𝒫(𝒫(·)). Given a worker w ∈ W_t, this function returns the collection of order batches (created from the orders in O_t) feasible to be assigned to the worker. That is, Γ_t(w) is a set with each element 𝒢 ∈ Γ_t(w) corresponding to a batch of orders (i.e., a subset of O_t) such that the worker of attributes w can handle all of its currently assigned tasks combined with all the orders in 𝒢 in a feasible manner. Algorithm <ref> defines the enumeration of the matching feasibility set Γ_t(w). The algorithm accepts as input the attribute state of a worker, w, and the set of orders, O_t. It begins by initializing an empty set for the feasible matchings. Next, provided that the worker has available capacity, the set of potential batches, 𝒫(O_t), is iterated over. If the worker is an AGV and an order within a potential batch may only be handled by humans, that batch is skipped. Otherwise, the feasibility of matching a potential batch 𝒢 to worker w is determined through Algorithm <ref>. That algorithm begins by combining the set of potential batch orders, 𝒢, with the orders already assigned to the worker, w_queue, resulting in a new set of assigned orders and the corresponding remaining set of pick-up locations the worker must visit prior to dropping the orders off at the drop-off area. The algorithm then proceeds to iterate through all possible permutations of routes for collecting each remaining order and dropping them off. If a path exists where all orders are delivered before their respective deadlines, and the worker is either a human or an AGV with adequate battery life to complete the route and recharge afterwards, then the algorithm confirms the feasibility of the path; otherwise, it returns false. If Algorithm <ref> returns true, the potential batch 𝒢 is added to the matching feasibility set Γ_t(w); otherwise, the algorithm moves on to the next potential batch. Algorithm <ref> ends by returning the matching feasibility set Γ_t(w).
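The following Python sketch mirrors these two algorithms under simplifying assumptions: unit-size orders, a fixed drop-off location, and a placeholder battery check (the actual check accounts for the route length and the distance to the nearest charging station). The Worker and Order dataclasses from the problem description sketch are reused, and travel_time is a user-supplied distance function.

```python
from itertools import chain, combinations, permutations

DROPOFF = (0, 0)  # assumed coordinates of the designated drop-off area

def batches(orders):
    """All non-empty subsets of O_t, i.e., candidate batches in P(O_t)."""
    return chain.from_iterable(combinations(orders, r)
                               for r in range(1, len(orders) + 1))

def battery_suffices(worker, route):
    """Placeholder: the true check uses route length plus charger distance."""
    return worker.battery > 0.0

def route_is_feasible(worker, batch, travel_time, now):
    """Algorithm-2 sketch: search pickup permutations for a route that meets
    every deadline and, for an AGV, leaves enough battery to recharge after."""
    remaining = list(worker.queue) + list(batch)
    for route in permutations(remaining):
        t, pos = now, worker.loc
        for o in route:
            t += travel_time(pos, o.loc)
            pos = o.loc
        t += travel_time(pos, DROPOFF)          # all orders dropped off together
        if any(t > o.deadline for o in remaining):
            continue
        if worker.human or battery_suffices(worker, route):
            return True
    return False

def enumerate_feasible_batches(worker, orders, travel_time, now):
    """Algorithm-1 sketch: build the matching feasibility set Gamma_t(w)."""
    gamma = []
    if worker.capacity <= 0:
        return gamma
    for batch in batches(orders):
        if len(batch) > worker.capacity:        # unit-size orders assumed
            continue
        if not worker.human and any(o.human_only for o in batch):
            continue                            # human-only orders never go to AGVs
        if route_is_feasible(worker, batch, travel_time, now):
            gamma.append(batch)
    return gamma
```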
Additionally, a set P'_w is initialized to denote the new remaining set of pick-up locations the worker must visit prior to dropping orders off at the drop-off area. The algorithm then proceeds to iterate through all possible permutations of routes visiting the pick-up locations in P'_w and dropping the orders in O'_w off. If a path exists where all orders are delivered before their respective deadlines, and the worker is either a human or an AGV with adequate battery life to complete the route and recharge afterwards, then the algorithm confirms the feasibility of the path. Otherwise, it returns false. If Algorithm <ref> returns true, the potential batching 𝒢 in Algorithm <ref> is added to the matching feasibility set Γ_t(w); otherwise, the algorithm iterates to the next potential batching. Algorithm <ref> ends by returning the matching feasibility set Γ_t(w).

* Define decision variables: For worker w ∈ W_t, in addition to the possible order batch assignments, we have the possible actions of going to the nearest charging station (for AGVs) and taking the null action, i.e., simply continuing with the previously assigned tasks. In that regard, we define the following binary decision variables: x_{w↔𝒢}, which takes the value of 1 if the order batch 𝒢 is assigned, and 0 otherwise; y_w, which takes the value of 1 if the null action at the current state is instructed, and 0 otherwise; and z_w, which takes the value of 1 if the charging action is taken, and 0 otherwise.

* Create task allocation model: Using the above-defined decision variables, we can obtain a binary programming representation of the feasible set of the Bellman optimality equation, previously stated as

V_t(S_t) = max R_t(a_t) + V_t^x(S_t^x)   s.t. a_t ∈ A_t(S_t),

which yields the following task allocation model:

max  R_t(x,y,z) + V_t^x(W_t,x,y,z)
s.t. ∑_{𝒢 ∈ Γ_t(w)} x_{w↔𝒢} + y_w = 1   ∀ w ∈ W_t : w_human = True
     ∑_{𝒢 ∈ Γ_t(w)} x_{w↔𝒢} + y_w + z_w = 1   ∀ w ∈ W_t : w_human = False
     ∑_{w ∈ W_t} ∑_{𝒢 ∈ Γ_t(w) : o ∈ 𝒢} x_{w↔𝒢} ≤ 1   ∀ o ∈ O_t
     z_w = 0   ∀ w ∈ W_t : w_human = True
     x, y, z binary

The model assigns one feasible action to each worker (namely an order batch, the null action, or charging; the last being eligible only for AGVs), ensuring that each order is assigned to at most one worker. With a slight abuse of notation, we parametrize the immediate and expected future reward of these actions by the binary decision variables as well as the workers' state vector for the latter.

* Approximate the objective function: We use a linear approximation for the objective function of the task allocation model:

max ∑_{w ∈ W_t} ∑_{𝒢 ∈ Γ_t(w)} α^x_{w↔𝒢} x_{w↔𝒢} + ∑_{w ∈ W_t} (α^y_w y_w + α^z_w z_w),

where the coefficients (α^x, α^y, α^z) are predicted via a neural network (NN). Note that since the charging action is not relevant to human workers, we can ignore those decisions in the objective, i.e., treat α^z_w = 0 as fixed for any human worker index w; we use the full coefficient vector only for ease of presentation. As a result, we solve the approximated task allocation model given by (<ref>)-(<ref>) for decision making. We note that, given this separable objective form, the NN only provides estimates of the post-decision value function per worker, not of the joint value of all the workers. More specifically, instead of approximating V_t^x(S_t^x), it helps with an approximation of the form ∑_{w ∈ 𝒲} V^≈_t({x_{w↔𝒢}}_{𝒢 ∈ Γ_t(w)}, y_w, z_w, W_t). The function V^≈_t takes as input the action selected for a fixed worker w (since only one of the corresponding x, y, and z variables can be selected in the task allocation model) along with the current (i.e., pre-decision) state of all the workers. As a result, it has access to the post-decision state of only worker w (that is, S_{tw}^x) and uses the pre-decision state of the other workers (that is, W_t from S_t) as auxiliary information to potentially improve the prediction for worker w. A modeling-layer sketch of the resulting approximated binary program is given below.
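The approximated allocation model can be written down directly with an off-the-shelf modeling layer. The following is a minimal sketch using the open-source PuLP package (the experiments in this paper use CPLEX); the data structures passed in are hypothetical.

import pulp

def build_allocation_model(workers, batches, alpha_x, alpha_y, alpha_z, is_human):
    # batches[w] lists the feasible batches in Gamma_t(w); alpha_* hold the
    # NN-predicted objective coefficients; is_human[w] flags human workers.
    m = pulp.LpProblem("task_allocation", pulp.LpMaximize)
    x = {(w, g): pulp.LpVariable(f"x_{w}_{g}", cat="Binary")
         for w in workers for g in range(len(batches[w]))}
    y = {w: pulp.LpVariable(f"y_{w}", cat="Binary") for w in workers}
    z = {w: pulp.LpVariable(f"z_{w}", cat="Binary") for w in workers}
    # Linear value function approximation as the objective.
    m += (pulp.lpSum(alpha_x[w][g] * x[w, g]
                     for w in workers for g in range(len(batches[w])))
          + pulp.lpSum(alpha_y[w] * y[w] + alpha_z[w] * z[w] for w in workers))
    for w in workers:
        assign = pulp.lpSum(x[w, g] for g in range(len(batches[w])))
        if is_human[w]:
            m += assign + y[w] == 1          # humans: batch or null action
            m += z[w] == 0                   # charging not applicable
        else:
            m += assign + y[w] + z[w] == 1   # AGVs: batch, null, or charge
    all_orders = {o for w in workers for batch in batches[w] for o in batch}
    for o in all_orders:                     # each order served at most once
        m += pulp.lpSum(x[w, g] for w in workers
                        for g in range(len(batches[w]))
                        if o in batches[w][g]) <= 1
    return m, x, y, z

Any MIP solver backend can consume an equivalent formulation; only the modeling layer differs.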
* Learn objective coefficient prediction: We begin by sampling an experience, containing the state of the workers W_t, the associated feasible action set, and the post-decision state of the workers from the previous time step, W^x_{t-1}. The experience is then evaluated by applying each feasible action to the state of the workers and scoring the post-decision state reached. This scoring for each post-decision state is carried out using a target neural network. The task allocation model is then employed to determine the actions of each worker. Subsequently, the post-decision states of the workers W^x_{t-1} from the previous time step are updated through gradient descent. This update utilizes the supervised target scores obtained from the task allocation model.

The overall training procedure is summarized below (a code-level sketch of this loop is given at the start of the next section):

* Until a stopping criterion is met:
* Simulate the system for one planning horizon and update the NN:
* Initialize workers' and orders' states, S_0
* For each decision epoch, t = 0, 1, …, T:
* Sample the new orders, O_t
* Enumerate the set of feasible order batches for workers, Γ_t(·); store them as an experience
* Obtain the decision-making model objective coefficients from the currently trained version of the NN, (α^x, α^y, α^z)
* Create the approximated task allocation model (using the predicted coefficients) and obtain its optimal solution
* (Optional) Sample from previously collected experiences and update the NN
* Update the system state by implementing the obtained task allocation solution, S_{t+1}

Algorithm <ref> defines the NeurADP training algorithm for the AGV task allocation problem. The algorithm takes as input the initial state of the system S_0, as well as an initialized neural network function. It begins by initializing the states of the workers, as well as the initial set of incoming orders, for the initial state S_0. The algorithm then iterates over each time step of the planning horizon. Each time step involves first the sampling of a new set of orders O_t, followed by an enumeration of the feasible order batchings for workers. The state space information and feasible actions are stored as an experience, and the neural network is used to obtain the decision-making model objective coefficients. The task allocation model is then used to obtain the optimal solution. Experiences may then be sampled to update the neural network weights, and the system state is updated by implementing the task allocation solution. The algorithm concludes by returning the neural network with the updated weights after the simulation.

§ EXPERIMENTAL SETUP

We implement our methods using Python 3.6.13 and run the numerical experiments on Compute Canada Cedar servers <cit.>. The linear programming (LP) models are solved using IBM ILOG CPLEX Optimization Studio, version 12.10.0.
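At a code level, the simulation-and-update loop outlined in the previous section can be organized as follows. This is a schematic sketch only: the env and nn objects and their methods are hypothetical interfaces standing in for the warehouse simulator and the value network, not the actual implementation.

def train_neuradp(env, nn, num_episodes, T):
    # One simulated planning horizon per episode, as in the outline above.
    for _ in range(num_episodes):
        state = env.reset()                            # S_0: workers and orders
        for t in range(T + 1):
            orders = env.sample_new_orders(t)          # exogenous arrivals O_t
            feasible = env.enumerate_batches(state, orders)    # Gamma_t(.)
            nn.store_experience(state, feasible)
            coeffs = nn.predict_coefficients(state, feasible)  # (alpha^x, alpha^y, alpha^z)
            actions = env.solve_allocation(feasible, coeffs)   # approximated ILP
            nn.update_from_replay()                    # optional gradient step
            state = env.step(state, actions)           # transition to S_{t+1}
    return nn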
In what follows, we provide a detailed overview of the experimental settings, which are important for evaluating the effectiveness of our NeurADP policy. This includes an in-depth description of the warehouse environment and the data generation, followed by an explanation of the myopic benchmark policies that are used in our comparative analysis.

§.§ Dataset Description

Our dataset is based on a grid-patterned warehouse which features 9 shelf corridors, each with 20 pick-up locations, amounting to a total of 180 pick-up locations. The warehouse layout includes a single drop-off zone located in the bottom-left corner, along with two charging stations situated in the bottom-right and top-left corners. The warehouse workers and AGVs navigate through aisles spaced between these shelves. In our base-case scenario, the time taken to travel from one movement node to another, including to and from the charging and drop-off points, is uniformly set at 30 seconds. A visual representation of a representative warehouse layout is provided in Figure <ref>. We use synthetic order arrival data in our numerical experiments. Specifically, we assume that orders arrive in the system stochastically, with the mean arrival volume over the day following a left-skewed beta distribution with parameters α=5 and β=2. This profile of the average quantity of orders received at each decision epoch is depicted in Figure <ref>, accompanied by a band representing a standard deviation of one order. Consequently, during each decision epoch, the mean number of arriving orders is used as the central value of a normal distribution with a standard deviation of one. The resulting value, rounded to the nearest integer, represents the count of orders arriving at that time step in a given simulation iteration. Additionally, the likelihood of an order being requested from any of the 180 pick-up locations at a particular time is determined by sampling from a Poisson distribution with a mean of 1. These probabilities are subsequently normalized to sum to one, transforming them into a proportional distribution. This normalized distribution effectively mirrors the comparative likelihood of order arrivals at each location for every time slot, ensuring a balanced and realistic representation of order frequencies across the network.
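A minimal generator for this synthetic order stream could look as follows. The number of decision epochs per day and the peak of the mean-arrival curve are not specified above, so they appear here as assumed parameters.

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(seed=42)
NUM_LOCATIONS = 180
NUM_EPOCHS = 96    # assumed number of decision epochs per day
PEAK_MEAN = 8.0    # assumed peak of the mean-arrival curve

# Mean arrivals per epoch traced out by the left-skewed Beta(5, 2) profile.
grid = np.linspace(0.01, 0.99, NUM_EPOCHS)
mean_orders = PEAK_MEAN * beta.pdf(grid, 5, 2) / beta.pdf(grid, 5, 2).max()

# Relative popularity of the pick-up locations: Poisson(1) draws, normalized
# to a proportional distribution.
weights = rng.poisson(1.0, NUM_LOCATIONS).astype(float) + 1e-9
location_probs = weights / weights.sum()

def sample_orders(epoch):
    # Order count ~ round(Normal(mean, 1)), clipped at zero; pick-up locations
    # are drawn from the normalized popularity distribution.
    count = int(max(0.0, np.rint(rng.normal(mean_orders[epoch], 1.0))))
    return rng.choice(NUM_LOCATIONS, size=count, p=location_probs)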
§.§ Benchmark Policies

Myopic strategies typically prioritize immediate outcomes, neglecting the potential future implications of choices. Such methods are advantageous in complex, rapidly changing environments where developing a comprehensive, optimal strategy is not practical. While myopic policies can be useful for quick decision-making, they might not be ideal for achieving the best long-term results. As a result, they are frequently employed as basic reference points or benchmarks. Previous studies have predominantly employed simple rule-based methodologies for allocating incoming orders among different types of workers. This often involves categorizing orders and then designating these categories to specific worker types <cit.>. We consider two distinct sets of myopic policies in our experimental study: an ILP-based policy, referred to here as Myopic-ILP, and a family of rule-based policies, Myopic-HF/RF. Each policy is designed with the primary goal of maximizing the number of orders served in the immediate time step. The Myopic-ILP policy employs a linear programming approach which adheres to constraints (<ref>)-(<ref>), ignoring the expected future rewards and only maximizing immediate rewards. Conversely, the rule-based approach involves a two-step decision-making process. The first step determines whether orders should first be allocated to human workers or to AGVs. The second step involves deciding the optimal time for AGVs to recharge their batteries. Within this framework, the rule-based policies are further differentiated into Myopic-HF, which prioritizes assigning orders to humans before AGVs, and Myopic-RF, which does the opposite, giving preference to AGVs for order fulfillment. Additionally, these policies include a battery management component, exemplified by a policy like Myopic-HF-20, which dictates that AGVs not currently assigned to orders should proceed to recharge if their battery levels fall below 20%. By incorporating both ILP-based and rule-based myopic approaches in task allocation, we establish a comprehensive baseline for our comparative analysis with the NeurADP policy, which allows us to provide an in-depth analysis of the performance of all methods. It also aids in understanding how different strategies for order/task allocation can impact the overall performance of a hybrid workforce in warehouse operations. We note that the recharging strategies adopted in the two sets of myopic policies are inspired by the charging schemes outlined in the existing literature (e.g., see <cit.>). Specifically, the Myopic-ILP policy employs an "opportunity" charging approach, allowing AGVs to charge during periods of inactivity. In contrast, the rule-based policies adopt an "automatic" charging strategy, where AGVs are directed to charge only when their battery levels drop below a predefined threshold. Although AGVs in industrial settings typically operate under the latter scheme <cit.>, our exploration of both strategies aims to establish a thorough baseline against which the efficacy of our NeurADP policy can be assessed. Hence, this approach allows for a more comprehensive understanding of how different charging strategies impact AGV performance in a warehouse environment.

§ RESULTS

We evaluate the results from our numerical experiments with respect to five primary inputs: the number of workers, the allowed delay time, the worker capacity, the speed at which each type of worker performs their tasks, and the availability of orders to both human and AGV workers. The number of workers is obtained by accounting for all the workers operating within a 24-hour period, while the delay time represents the maximum duration a worker has from an order's entry into the system to its drop-off at the drop-off area. This duration is used to determine the order deadline. The worker capacity specifies the maximum number of orders a worker can carry simultaneously, while the speed of workers represents the speed at which they are able to perform their tasks. Finally, the availability of orders specifies the percentage of incoming orders which can be handled by both humans and robots. Due to the limitations of AGVs in handling items of certain shapes and sizes, we find it important to consider scenarios where only humans are able to handle certain orders, adding complexity to the batching of orders and their matching to workers. In our baseline configuration, we include 5 human workers and 5 AGV workers, set a maximum allowable delay time of 15 minutes, and a maximum worker capacity of 2 orders for both humans and AGVs. Furthermore, we set the travel time of all workers between edges to be 30 seconds and consider all orders to be handleable by both types of workers. Finally, we consider a battery deterioration rate of 0.5% per minute for AGVs, as well as a charging replenishment rate of 5% per minute.
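As an illustration, the battery dynamics and the "automatic" charging rule can be sketched as follows, using the stated deterioration and replenishment rates; the dictionary fields are hypothetical.

DRAIN_PER_MIN, CHARGE_PER_MIN = 0.005, 0.05   # 0.5%/min drain, 5%/min charge

def step_battery(level, minutes, charging):
    # Advance an AGV battery level (in [0, 1]) by `minutes` under the stated rates.
    rate = CHARGE_PER_MIN if charging else -DRAIN_PER_MIN
    return min(1.0, max(0.0, level + rate * minutes))

def automatic_charging(agvs, threshold=0.20):
    # Rule used by Myopic-HF/RF-20: send an idle AGV to charge only when its
    # battery drops below the threshold ("automatic" charging).
    return [a["id"] for a in agvs
            if not a["assigned_orders"] and a["battery"] < threshold]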
We begin by examining the baseline configuration to identify the most suitable benchmark policies from the ILP-based and rule-based policy classes, which are later used in the comparative analysis with the NeurADP policy. We then provide the numerical results for alternative configurations.

§.§ Baseline Configuration

Our primary benchmark policies exhibit several variations in the matching process between orders and workers. In the Myopic-ILP policy, we utilize an ILP to make decisions on assigning batches of incoming orders to workers, as well as making charging decisions for AGVs. In the rule-based policies, by contrast, we utilize a heuristic mechanism for deciding which worker type to assign orders to first, and then deciding when to send unassigned AGVs to recharge their batteries. We consider six variations of the rule-based policies. Once again, "HF" represents the matching of orders with humans first, while "RF" represents the matching of orders with AGVs first. Additionally, we consider the battery threshold at which AGVs that are not currently assigned to orders are directed to the charging areas to recharge their batteries. We consider the cases where humans are assigned orders first with recharging thresholds of 20%, 40%, and 60%, as well as the scenarios where robots are assigned orders first, again with thresholds of 20%, 40%, and 60%. Table <ref> provides the outcomes for the considered policy variants in the baseline configuration, obtained as average statistics when the policies are evaluated on 50 test days. The table includes the "Orders Seen" value, which indicates the average number of daily orders seen, as well as the number of orders fulfilled by each policy, denoted as "Orders Filled", with a standard deviation provided for each policy. Furthermore, the percentage increase of the NeurADP policy compared to the other benchmark policies is included in the right-most column, labeled "% Incr. NeurADP". This metric is calculated by subtracting the average number of orders fulfilled by the benchmark policy from the average number of orders fulfilled by the NeurADP policy, then dividing the outcome by the "Orders Seen" value and multiplying by 100.
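For reference, the reported metric is reproduced by the following helper, with the averages taken over the 50 test days.

def pct_incr_neuradp(neuradp_filled, benchmark_filled, orders_seen):
    # "% Incr. NeurADP": gap in average daily orders fulfilled, normalized by
    # the average number of orders seen, expressed as a percentage.
    return 100.0 * (neuradp_filled - benchmark_filled) / orders_seen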
We observe that the NeurADP policy noticeably outperforms all benchmark policy variants in both the ILP-based and rule-based scenarios. This superior performance is largely due to its more nuanced approach in effectively pairing order batches with workers and strategically deciding when to assign AGVs for battery recharging, taking into account the downstream effects of its actions. Additionally, the Myopic-ILP policy demonstrates notable superiority over the rule-based policies. Its use of an ILP for simultaneous decision-making in order matching and charging seems more adept for intricate strategies than the simplistic assumptions underlying heuristic-based myopic strategies, which do not adapt their order matching and charging decisions to the scenario at hand. Moreover, we find that strategies that give precedence to humans over AGVs for order assignments lead to more favorable results. This effectiveness primarily stems from the reduced frequency of task assignments to AGVs in human-first policies. In scenarios where orders are assigned to the AGVs first, they tend to engage more often in delivery tasks, leading to shorter and less effective charging periods. This is evidenced by the comparison in average AGV battery life: 37.97% in the Human-First (HF-20) scenario versus 30.83% in the Robot-First (RF-20) scenario. Moreover, the average count of AGVs charging at any point is notably lower in the HF scenario (1.76) compared to the RF scenario (1.96). This suggests that AGVs in the latter scenario undergo more frequent charging cycles, which are truncated due to commitments to incoming order delivery tasks. Furthermore, we note that among the various benchmark policy variations, Myopic-ILP and Myopic-HF-20 consistently yield the best performance. As a result, we utilize Myopic-ILP and Myopic-HF-20 as our benchmark policies for the remainder of our experiments.

§.§ Impact of Number of Workers

We examine the impact of varying the composition of human and AGV workers on the order picking process to better assess the operational effectiveness of our proposed approach. Specifically, we explore three scenarios: all ten workers being humans, a split of five humans and five AGVs, and all ten workers being AGVs. The results, detailed in Table <ref>, demonstrate that the NeurADP policy consistently outperforms all baseline policies across different worker configurations. Figure <ref> presents the number of orders fulfilled in the scenario with an equal split of human and AGV workers. We observe that NeurADP maintains superiority over the benchmark policies irrespective of the worker type mix. Notably, there is a trend of decreasing order fulfillment efficiency as the proportion of human workers diminishes. This decline can be attributed to the inherent limitations of AGVs, such as the need for battery charging, which is not a constraint for human workers. Consequently, humans are generally more efficient in handling orders under the given settings. Moreover, there is a noticeable reduction in the performance gap between NeurADP and the baseline policies as the number of human workers increases. This narrowing of the gap can be linked to the simplification of decision-making when AGVs are less involved. In environments dominated by human workers, decision-making becomes less complex, reducing the disparity with simpler myopic policies that rely on basic heuristics and do not consider long-term consequences. In contrast, scenarios including AGVs demand more nuanced decision-making, especially regarding charging strategies, thereby highlighting the advanced decision-making capabilities of NeurADP more prominently.

§.§ Impact of Worker Speed

We next assess the impact of worker speed on order fulfillment and charging efficiency. Recognizing that AGVs operate at different speeds in various environments, we investigate how these variations in worker efficiency influence the performance of the different policies. Specifically, we explore the effects of differing speeds for traversing the warehouse, including scenarios where humans complete an edge in 30 seconds and AGVs in one minute (humans faster than AGVs), both humans and AGVs taking 30 seconds (equal speed), and humans taking one minute while AGVs take 30 seconds (AGVs faster than humans). The results from this experiment are summarized in Table <ref>. Across these scenarios, NeurADP consistently outperforms both the Myopic-ILP and Myopic-HF-20 policies. This indicates that its neural network-based decision-making adeptly handles the complex dynamics of the warehouse, encompassing variations in worker speed, order batching, and charging schedules. Moreover, NeurADP not only serves more orders but also achieves quicker deliveries.
Specifically, as depicted in Figure <ref>, in the three scenarios, NeurADP delivers orders in an average of 9.19, 8.43, and 9.31 minutes, compared to Myopic-ILP at 10.84, 9.51, and 10.48 minutes, and Myopic-HF-20 at 9.36, 9.57, and 10.66 minutes, respectively. We also observe that the policies generally perform better when humans are more efficient than AGVs. This may be attributed to the fact that slower AGVs take longer to reach charging stations, reducing their availability for order assignments. In contrast, slower humans do not encounter this specific issue, leading to a lesser impact on order service rates. Particularly notable is the marked decrease in performance of the Myopic-HF-20 policy when AGVs are slower, possibly due to its threshold-based battery management leading to frequent and inefficient charging trips. Conversely, NeurADP appears adept at managing the increased complexity and diversity in worker speeds, thereby making better decisions.

§.§ Impact of Delay Time

We also explore how varying the acceptable delay time for order delivery affects the throughput of orders served. For this purpose, we analyze three scenarios: a 10-minute deadline for order drop-off post-entry into the system, an extended deadline of 15 minutes, and a further extension to 20 minutes. We present the results for these experiments in Table <ref>. In all these scenarios, NeurADP consistently outperforms the benchmark policies regardless of the allowed delay time. A significant increase in the number of orders served is observed when the delay time is extended from 10 to 15 minutes. This suggests that the initial 10-minute window was overly restrictive, preventing the effective batching and assignment of orders to workers due to time constraints. However, extending the deadline further from 15 to 20 minutes does not yield any notable improvements in order throughput for any of the policies. This plateau in performance may be attributed to other limiting factors on workers, such as capacity constraints, which imply that the additional delay allowance does not translate into the capability to handle more orders or to fulfill orders that were previously unmanageable.

§.§ Impact of Worker Capacity

The carrying capacity of the workers during pickups can also have a significant impact on the effectiveness of the task allocation policies. In this experiment, we adjust the capacities of human and AGV workers in the following configurations: AGVs with a capacity of 2 and humans with a capacity of 3, both AGVs and humans with a capacity of 2, and finally AGVs with a capacity of 3 and humans with 2. The results from this experiment are summarized in Table <ref>. First, we observe that the NeurADP policy surpasses the benchmark policies across all capacity variations. Further, increasing the carrying capacity of either AGVs or humans boosts the number of orders fulfilled by all policies, indicating that order servicing is constrained by the carrying capacity of the workers. Moreover, increasing the capacity of human workers appears to be more advantageous for order servicing than doing the same for AGVs. This discrepancy arises because AGVs are limited by the need to periodically recharge their batteries, which can restrict their ability to efficiently utilize additional capacity. Notably, the NeurADP policy shows a more pronounced improvement when worker capacities are increased. This improvement can be attributed to the policy's enhanced ability to make more strategic decisions.
With fewer constraints on matching order batches with workers, NeurADP has greater flexibility to optimize decisions, whereas, in scenarios with stricter capacity limitations, the scope for significant decision-making improvements is more limited.

§.§ Impact of Order Availability

Lastly, we examine how the availability of orders for different worker types affects their ability to fulfill orders. Considering the limited capacity of certain AGVs to handle specific items or orders, we evaluate scenarios where AGVs can only manage a certain percentage of incoming orders. Specifically, we assess cases where 0%, 20%, and 40% of orders can exclusively be handled by humans, with results detailed in Table <ref>. This limitation challenges AGVs in processing order batches, as they can only be assigned orders within their handling capacity. Consequently, we observe a decrease in the number of orders served across all policies as the proportion of human-exclusive orders increases. However, the NeurADP policy, thanks to its more complex decision-making capabilities, consistently outperforms the baseline policies, especially in optimally utilizing AGVs under these constraints. This is particularly evident in the 40% scenario, where the humans operating under the NeurADP policy serve a comparable number of orders to those in the benchmark policies, but the AGVs in the NeurADP system manage a significantly higher number of orders. This leads to an overall better performance relative to the benchmark policies. For instance, in the NeurADP policy under the 40% scenario, AGVs serve an average of 866.98 orders, markedly more than the 798.3 and 624.94 orders served under the Myopic-ILP and Myopic-HF-20 policies, respectively. This demonstrates NeurADP's proficiency in leveraging AGVs effectively, even when faced with order availability restrictions.

§ CONCLUSION

In this study, we investigate AGV integration within picker-to-parts warehouse systems, and we model a dynamic warehouse environment where human workers and AGVs work in tandem. For the considered problem setting, we develop a NeurADP-based solution approach, which enables non-myopic decision-making in order allocation and battery management for AGVs. The detailed numerical study underscores the NeurADP approach's efficacy, yielding a significant improvement over traditional myopic and heuristic-based methods. Specifically, the NeurADP policies exhibit superior performance in fulfilling a higher number of orders and enhancing the efficiency of executing order picking tasks, which is particularly evident under varying problem parameters such as worker speed, delay time, and order availability. Furthermore, our analysis provides valuable managerial insights into operating a hybrid warehouse and underscores the importance of intelligent decision-making in a mixed workforce environment. Several limitations of this study can be remedied in future research. First, our experiments are conducted with a synthetic dataset that is generated based on real-life warehouse configurations and order arrival patterns. Extending our numerical study to alternative warehouse configurations and order arrival patterns can help further validate our proposed methods. Moreover, the complexity of human and AGV interaction is modeled under simplifying assumptions that may not capture the full range of behaviors in a real-world setting. Exploring the psychological and behavioral aspects of human-AGV interaction may provide deeper insights into optimizing the collaborative workspace for both efficiency and worker satisfaction.
In terms of modeling and methodological extensions, the learning capacity of NeurADP can potentially be enhanced for the AGV task allocation problem by considering more complex NN designs and other algorithmic enhancements, such as extending the individual worker-based value function approximation to sets of workers. Another valuable future research direction in this regard is to extend our proposed approach to solve the integrated AGV management problem, which also involves localization, motion planning, and path planning steps. Ultimately, as the landscape of warehouse automation continues to evolve, the pursuit of innovative solutions such as NeurADP will remain pivotal in driving the industry forward.

§ DISCLOSURE STATEMENT

No potential conflict of interest was reported by the authors.
http://arxiv.org/abs/2312.16026v1
{ "authors": [ "Arash Dehghan", "Mucahit Cevik", "Merve Bodur" ], "categories": [ "math.OC", "cs.AI" ], "primary_category": "math.OC", "published": "20231226122825", "title": "Dynamic AGV Task Allocation in Intelligent Warehouses" }
While altermagnetic materials are characterized by a vanishing net magnetic moment, their symmetry in principle allows for the existence of an anomalous Hall effect (AHE). Here we introduce a model with altermagnetism in which the emergence of an AHE is driven by interactions. This model is grounded in a modified Kane-Mele framework with antiferromagnetic (AFM) spin-spin correlations. Quantum Monte Carlo simulations show that the system undergoes a finite temperature phase transition governed by a primary AFM order parameter accompanied by a secondary one of Haldane type. The emergence of both orders turns the metallic state of the system, away from half-filling, into an altermagnet with a finite anomalous Hall conductivity. A mean field ansatz corroborates these results, which pave the way for the study of correlation-induced altermagnets with finite Berry curvature.

Altermagnetic anomalous Hall effect emerging from electronic correlations
Jeroen van den Brink
January 14, 2024
=========================================================================

Introduction.— Ferromagnetic conductors are generally endowed with an observable anomalous Hall effect (AHE), where an electric current perpendicular to the magnetization gives rise to a transversal Hall potential <cit.>. Here the magnetization of the ferromagnet takes over the role of an external magnetic field that is the root cause of the standard Hall response in time-reversal invariant (i.e., non-magnetic) conductors. On this basis one might expect that in a fully compensated antiferromagnet the lack of a net magnetization forces the AHE response to vanish. Interestingly, a symmetry analysis shows that this is not necessarily the case: time-reversal symmetry breaking by itself is a sufficient condition to allow for an AHE, also in the absence of a net ferromagnetic moment. However, the combination of time-reversal and translation symmetry, which characterizes most antiferromagnetic materials, implies a vanishing AHE. Altermagnets comprise the class of compensated collinear antiferromagnets without this combined symmetry <cit.>, and a large set of altermagnetic materials has been identified by first-principles electronic structure calculations <cit.>. They are characterized by a fully compensated magnetic order and therefore a zero net magnetic moment, but their symmetry properties reveal that this type of compensated magnetic order may induce an anomalous Hall effect <cit.>. In spite of the clear ground-state symmetry considerations, so far no interacting models have been proposed in which an altermagnetic AHE emerges. The latter requires a time-reversal invariant Hamiltonian in which time-reversal symmetry (TRS) is spontaneously broken with zero total moment and a still finite anomalous Hall conductivity. Here we show that precisely such an altermagnetic order emerges in a modified Kane-Mele model with broken inversion symmetry and antiferromagnetic spin-spin interactions. Quantum Monte Carlo (QMC) calculations reveal that, at finite temperature, the primary antiferromagnetic (AFM) order parameter gives rise to a secondary altermagnetic one, inducing a finite AHE. The occurrence of these two order parameters is in full agreement with the recently developed Landau theory of altermagnetism <cit.>. The smoking gun of this emergent altermagnetic phase is a spin-split electronic band structure with an anomalous Hall conductivity that can be tuned by doping.
Interacting altermagnetic Kane-Mele model.— We start with a modified Kane-Mele (KM) model on a honeycomb lattice with a unit cell containing two sites denoted by A and B. In contrast to the canonical KM model, the sign of the complex phase of the hopping integrals between next nearest neighboring (NNN) sites is opposite on the two sublattices, as in Ref. <cit.>. The corresponding Hamiltonian is

Ĥ_0 = t ∑_⟨i,j⟩,s ĉ†_i,s ĉ_j,s + λ ∑_⟨⟨i,j⟩⟩,s e^{isΦ_i,j} ĉ†_i,s ĉ_j,s - μ ∑_i n̂_i,

where ĉ_i,s is the annihilation operator of an electron of spin s on a honeycomb lattice site, n̂_i ≡ ∑_s ĉ†_i,s ĉ_i,s is the corresponding electronic density, and μ is the chemical potential. t and λ are the hopping integrals between the nearest neighboring (NN) and NNN sites, respectively. Φ_i,j = ±π/2 is the complex phase gained by an electron during a NNN hopping process, according to the pattern shown in Fig. <ref> (a). Effectively, the λ term introduces a pseudoscalar potential that offsets the Dirac cones for the two spin components, thus generating anisotropic spin-split electronic bands (see Fig. <ref> (b)). For any finite value of λ, the SU(2) total spin symmetry is reduced to a U(1) one, and the inversion symmetry is broken, while TRS remains unbroken. For an altermagnetic state to emerge, spin-spin interactions must maintain zero net magnetization while they spontaneously break TRS. Accordingly, we consider an AFM interaction term, ensuring that each spin on one sublattice is coupled exclusively to spins on the opposing sublattice. The total Hamiltonian, taking into account an appropriate interaction that realizes the aforementioned physical characteristics, reads

Ĥ = Ĥ_0 - (J_z/2) ∑_⟨i,j⟩ (Ŝ^z_i - Ŝ^z_j)^2,

where Ŝ^z_i = (1/2) ∑_σσ' ĉ†_iσ σ^z_σσ' ĉ_iσ' is the fermion spin operator and σ corresponds to the vector of Pauli spin-1/2 matrices. We consider J_z > 0, such that the Ŝ^z_i Ŝ^z_j term harbors the potential for the development of long-range AFM order in the spin-z direction. In addition, an effective on-site repulsion U = 3J_z/4 is generated, which preempts any local pairing instability.

Symmetry analysis.— To establish that the effective Landau theory of our model is of altermagnetic nature <cit.>, we consider the continuum limit, where the low energy effective Hamiltonian is

H^eff_0 = ∑_k Ψ†_k (k_x τ^x μ^z - k_y τ^y) Ψ_k + λ ∑_k Ψ†_k μ^z σ^z Ψ_k.

Here Ψ†_k = Ψ†_{k,τ,μ,σ}, and τ, μ, σ account for the sublattice, valley and spin indices, respectively, on which the Pauli matrices τ^α, μ^α, σ^α act. The first term corresponds to the Dirac Hamiltonian of graphene with unit Fermi velocity, and the second term to the next nearest neighbor hopping of the lattice model. Importantly, we note that this term differs from the quantum spin Hall mass, which takes the form ∑_k Ψ†_k μ^z τ^z σ^z Ψ_k <cit.>. It does not open a gap in the spectrum, but shifts the valleys in energy by λμ_zσ_z, as is seen in Fig. <ref>(b). The Hamiltonian is invariant under global U(1) spin rotations around the z-axis, while discrete symmetries include time reversal: T^{-1} α Ψ_k T = ᾱ μ^x iσ^y Ψ_{-k}. Importantly, the λ-term breaks inversion symmetry: T_I^{-1} α Ψ_k T_I = α τ^x μ^x Ψ_{-k}.
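The statement that the λ term shifts the valleys rather than opening a gap can be checked directly by diagonalizing the 8×8 continuum Hamiltonian. Below is a minimal numerical sketch; the (valley ⊗ sublattice ⊗ spin) tensor-product ordering is our own assumed convention.

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def H_eff(kx, ky, lam=0.1):
    # H = k_x tau^x mu^z - k_y tau^y + lam mu^z sigma^z, in the assumed ordering
    # (valley mu) x (sublattice tau) x (spin sigma).
    return (kx * kron3(sz, sx, s0)
            - ky * kron3(s0, sy, s0)
            + lam * kron3(sz, s0, sz))

# At k = 0 the eigenvalues are +/- lam, each fourfold degenerate: the lam term
# offsets the Dirac cones in energy instead of gapping them.
print(np.round(np.linalg.eigvalsh(H_eff(0.0, 0.0)), 6))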
The interaction term generates

H_I = -J_z ∫ d^2x ( Ψ†(x) σ^z τ^z Ψ(x) )^2,

which does not lower the symmetry of the model and clearly promotes antiferromagnetism. To proceed, we now consider a symmetry-broken antiferromagnetic state in the spin-z direction and an order parameter

N(x) = ⟨ Ψ†(x) σ^z τ^z Ψ(x) ⟩.

This order parameter solely breaks TRS, such that a Ginzburg-Landau theory accounting for this state can include even powers of N(x). However, terms of the form N(x)M(x) are also allowed in the Ginzburg-Landau functional, provided that M(x) is odd under time reversal and that N(x)M(x) shares the same symmetries as the Hamiltonian. This requirement is fulfilled by the Haldane mass <cit.>

M(x) = ⟨ Ψ†(x) τ^z μ^z Ψ(x) ⟩,

which indeed is odd under time reversal and generates the AHE. Since at λ ≠ 0 the Hamiltonian does not enjoy inversion symmetry, N(x)M(x) is allowed in the Ginzburg-Landau theory. As a consequence, as soon as N(x) acquires a non-trivial expectation value, the Haldane mass is generated as a secondary order parameter <cit.>. We note that at half-band filling, characterized by the particle-hole symmetry P^{-1} α Ψ_k P = ᾱ τ^z σ^x (Ψ†_k)^T, the linear coupling between the two order parameters is forbidden, since N is even and M is odd under this symmetry. On the lattice, it is clear that the model harbors magnetism beyond simple AFM order, as the collinear AFM ordered spin state on the two sublattices cannot be connected by a translation or inversion symmetry combined with time reversal, due to the specific phase patterns in the NNN hopping Hamiltonian. However, the collinear AFM ordered spin state is invariant under time reversal combined with a C_6 rotation around hexagon centers, a type of symmetry typical for altermagnets <cit.>. Indeed, we will show that this results in a finite total Berry curvature away from half-filling, in sync with the finite Haldane mass in the continuum description.

Quantum Monte Carlo results.— The Hamiltonian (<ref>) was simulated using the ALF (Algorithms for Lattice Fermions) implementation <cit.> of the grand-canonical, finite-temperature, auxiliary-field QMC method <cit.>. Results were obtained on lattices with L×L unit cells (2L^2 sites) and periodic boundary conditions. Henceforth, we use t=1 as the energy unit, set λ=0.1 and J_z=2, and set the Trotter imaginary time step to Δτ=0.1. The negative sign problem is absent at half-filling since, in a Bogoliubov basis, (γ̂†_𝐢,↑, γ̂†_𝐢,↓) = (ĉ†_𝐢,↑, (-1)^𝐢 ĉ_𝐢,↓), and after decoupling the perfect square term with a Hubbard-Stratonovich transformation, time-reversal and U(1) charge symmetries are present for each field configuration <cit.>. Doping breaks this symmetry and the sign problem sets in. Nevertheless, for our specific implementation it turns out to be mild (see Fig. <ref> (c)), such that large lattices and low temperatures can be reached. In particular, we are able to reveal an altermagnetic phase, as shown in Fig. <ref>. To verify the above low-energy theory, we measure equal-time correlation functions

C^α(𝐪) ≡ (1/L^2) ∑_𝐫,𝐫' ⟨ Ô^α_𝐫 Ô^α_𝐫' ⟩ e^{i𝐪·(𝐫-𝐫')}

of the fermion spin, Ô^{S_z(S_xy)}_𝐫 = Ŝ^{z(xy)}_𝐫,A - Ŝ^{z(xy)}_𝐫,B with Ŝ^α_𝐫 = (1/2) ĉ†_𝐫 σ^α ĉ_𝐫, and of the operator corresponding to the Haldane mass, Ô^AQH_r = ∑_⟨⟨δ,δ'⟩⟩,σ e^{iΦ_{r+δ,r+δ'}} ĉ†_{r+δ,σ} ĉ_{r+δ',σ}. Here, r specifies a unit cell, or hexagon, and ⟨⟨δ,δ'⟩⟩ runs over the next-nearest-neighbor sites of this hexagon. After computing the correlation functions, we extract the renormalization-group invariant correlation ratios

R^α = 1 - C^α(𝐪_0 + δ𝐪) / C^α(𝐪_0).

Here 𝐪_0 is the ordering wave vector and 𝐪_0 + δ𝐪 the longest wave-length fluctuation of the ordered state for a given lattice size. Long-range AFM and AQH orders here imply a divergence of the corresponding correlation functions C^α(𝐪_0 = 0). Accordingly, R^α → 1 for L → ∞ in the ordered phase, whereas R^α → 0 in the disordered phase. At the critical point, R^α becomes scale-invariant for sufficiently large system size L, leading to a crossing of results for different L.
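In practice the correlation ratio is a one-line computation on the measured structure factors; a small helper with hypothetical scalar inputs reads as follows.

def correlation_ratio(C_at_q0: float, C_at_q0_plus_dq: float) -> float:
    # R^alpha = 1 - C^alpha(q_0 + dq) / C^alpha(q_0); R -> 1 in the ordered
    # phase and R -> 0 in the disordered phase as L grows, with curves for
    # different L crossing at the critical point.
    return 1.0 - C_at_q0_plus_dq / C_at_q0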
We find the altermagnetic phase in the range of parameters, specifically temperatures T and electron densities n, shown in Fig. <ref>. Figure <ref> (a) shows the results as a function of T at n=0.95. The onset of the AFM order in the spin-z (x) direction, termed z-AFM (x-AFM), is detected from the crossing of R^S_z (R^S_x), whereas the onset of the AQH order can be detected from R^AQH. As the temperature decreases, we observe the onset of the z-AFM order. We note that z-AFM only breaks time-reversal symmetry, such that ordering at finite temperature is allowed. A key feature is that its appearance coincides with the onset of the AQH order. Indeed, the data for R^AQH are consistent with a finite-temperature phase transition to the z-AFM phase. Furthermore, within the range of parameters we investigated, we do not observe the emergence of the x-AFM order. Results as a function of n for T=1/20 are presented in Fig. <ref>(b). The simultaneous emergence of both z-AFM and AQH orders persists against changes in electron density away from half filling. At half-filling the z-AFM ordering persists while AQH correlations are suppressed. This stands in agreement with our symmetry analysis, according to which the particle-hole symmetry forbids a linear coupling between the AQH and z-AFM orders in the Ginzburg-Landau functional.

AHE in mean field approximation.— Within a mean field approximation, the AHE that emerges in the interacting modified Kane-Mele model can be accessed directly. Using the spin-dependent sublattice basis (ψ_A↑, ψ_B↑, ψ_A↓, ψ_B↓), the corresponding Hamiltonian, given by Eq. <ref>, reduces to

H^MF(𝐤) = (ϵ - μ) σ_0 τ_0 + a_𝐤 τ_0 σ_z + 𝐡(𝐤,σ_z)·τ,

where 𝐡(𝐤) = (b_𝐤, c_𝐤, h_z), h_z = -Δ M^AFM_z σ_z, Δ = (3/2)J_z + (3/4)|J_z|, μ is the chemical potential, ϵ = (3/8)|J_z|(n-1), and n = n_a = n_b is the electron density of an atom A or B, with n_l = ⟨n̂_l↑⟩ + ⟨n̂_l↓⟩, l = a, b, being the sublattice index. The antiferromagnetic order parameter M^AFM_z along the z direction is expressed in terms of the on-site magnetization as M^AFM_z = (1/2)(m^a_z - m^b_z), where m^l_z = (1/2)(⟨n̂_l↑⟩ - ⟨n̂_l↓⟩). The hopping terms a_𝐤, b_𝐤 and c_𝐤 are given by

a_𝐤 = 2λ ∑_{i=1}^{3} sin(𝐤·𝐚_i),  b_𝐤 = t ∑_{i=1}^{3} cos(𝐤·δ_i),  c_𝐤 = t ∑_{i=1}^{3} sin(𝐤·δ_i),

where δ_i and 𝐚_i are, respectively, the vectors connecting NN and NNN sites: 𝐚_1 = √3 a(-1/2, √3/2), 𝐚_2 = √3 a(-1/2, -√3/2), 𝐚_3 = √3 a(1,0), δ_1 = a(√3/2, 1/2), δ_2 = a(-√3/2, 1/2), δ_3 = a(0,-1), where a is the distance between NN sites. τ_0 (σ_0) is the 2×2 identity matrix in the sublattice (spin) space. The energies of the mean field Hamiltonian (Eq. <ref>) are given by

E^MF_{α,σ}(𝐤) = ϵ - μ + a_𝐤 σ + α √( b^2_𝐤 + c^2_𝐤 + (Δ M^AFM_z)^2 ),

where α = ± (σ = ±) is the band (spin) index. The spin-dependent mass term h_z in Eq. (<ref>) breaks TRS, and a non-vanishing Berry curvature (BC) is expected away from half-filling. For a given spin projection σ, the BC of a band α can be expressed as <cit.>

Ω^α_z(σ,𝐤) = -α/(2|𝐡(𝐤)|^3) 𝐡(𝐤)·[ ∂_{k_x}𝐡(𝐤) × ∂_{k_y}𝐡(𝐤) ],

and the anomalous Hall (AH) conductivity <cit.> is given by the integral

σ_Hall = -(e^2/ħ) (1/(2π)^2) ∑_{σ,α} ∫_BZ d𝐤 Ω^α_z(σ,𝐤) f_α(𝐤,μ),

f_α(𝐤,μ) being the Fermi-Dirac distribution function <cit.>. σ_Hall can be computed using the values of the chemical potential and the AFM magnetization M^AFM_z extracted from the mean field calculations. The results are depicted in Fig. <ref>, showing σ_Hall as a function of the electronic density n. The occurrence of a non-vanishing AH conductivity can be understood from the spin-valley dependence of the BC and the offset of the Dirac points induced by the modified Kane-Mele term a_𝐤 σ_z τ_0 (Eq. <ref>), which shifts the spin-split bands oppositely.
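As an illustration, σ_Hall can be evaluated numerically from the expressions above. The sketch below uses assumed values of μ and M^AFM_z in place of the self-consistent mean-field solution (with ϵ absorbed into μ), and integrates over the square Brillouin zone parametrized by q_x = 𝐤·𝐚_3, q_y = 𝐤·𝐚_1, as described in the footnote of the reference list.

import numpy as np

t, lam, a, J_z = 1.0, 0.1, 1.0, 2.0
Delta = 1.5 * J_z + 0.75 * abs(J_z)
M_afm, mu, T = 0.25, -0.5, 0.05   # assumed stand-ins for the MF solution

deltas = a * np.array([[np.sqrt(3) / 2, 0.5], [-np.sqrt(3) / 2, 0.5], [0.0, -1.0]])
avecs = np.sqrt(3) * a * np.array([[-0.5, np.sqrt(3) / 2],
                                   [-0.5, -np.sqrt(3) / 2], [1.0, 0.0]])

def h_vec(k, s):
    # h = (b_k, c_k, -Delta M s) for spin projection s = +/-1.
    b = t * np.cos(k @ deltas.T).sum(-1)
    c = t * np.sin(k @ deltas.T).sum(-1)
    return np.stack([b, c, -Delta * M_afm * s * np.ones_like(b)], axis=-1)

def a_k(k):
    return 2.0 * lam * np.sin(k @ avecs.T).sum(-1)

def berry(k, s, alpha, eps=1e-5):
    # Omega = -alpha h . (d_kx h x d_ky h) / (2 |h|^3), via finite differences.
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    h = h_vec(k, s)
    dhx = (h_vec(k + ex, s) - h_vec(k - ex, s)) / (2 * eps)
    dhy = (h_vec(k + ey, s) - h_vec(k - ey, s)) / (2 * eps)
    num = np.einsum('...i,...i->...', h, np.cross(dhx, dhy))
    return -alpha * num / (2.0 * np.linalg.norm(h, axis=-1) ** 3)

# Square-BZ mapping: k = (q_x/(sqrt(3) a), (q_x + 2 q_y)/(3 a)), with
# Jacobian determinant 2/(3 sqrt(3) a^2).
n = 300
q = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
qx, qy = np.meshgrid(q, q, indexing='ij')
k = np.stack([qx / (np.sqrt(3) * a), (qx + 2.0 * qy) / (3.0 * a)], axis=-1)
jac = 2.0 / (3.0 * np.sqrt(3) * a ** 2)

sigma = 0.0
for s in (1.0, -1.0):
    for alpha in (1.0, -1.0):
        E = s * a_k(k) + alpha * np.linalg.norm(h_vec(k, s), axis=-1) - mu
        f = 0.5 * (1.0 - np.tanh(E / (2.0 * T)))   # overflow-safe Fermi function
        sigma -= (berry(k, s, alpha) * f).sum() * (2 * np.pi / n) ** 2 * jac / (2 * np.pi) ** 2
print('sigma_Hall [e^2/hbar]:', sigma)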
To gain insights into these features, let us focus on the states 𝐤 = μ_z𝐊 + 𝐪 (𝐪 ≪ |𝐊|) in the vicinity of the Dirac points μ_z𝐊, where μ_z = ± is the valley index. These states contribute mainly to the BC, which reduces, around μ_z𝐊, to

Ω^α_z(S_z,𝐪) = ξ (ħ v_F)^2 Δ M^AFM_z / ( 2[ (ħ v_F q)^2 + (Δ M^AFM_z)^2 ]^{3/2} ),

where ξ = -αμ_zσ is the sign of the BC and ħ v_F = (3/2) a t. Given the spin-dependent offset of the Dirac points, a spin-split sub-band E^MF_{α,σ}(𝐤) (E^MF_{α,-σ}(𝐤)) of an occupied band α contributes to the BC through the states around the valley μ_z (-μ_z), which results in a non-vanishing total BC away from half filling. The doping dependence of the AH conductivity (see Fig. <ref>) reflects the behavior of the secondary order parameter of Haldane type, N_AQH. At half-filling (n=1), σ_Hall vanishes as the bands below the Fermi level are fully occupied, resulting in a zero BC. By decreasing n from half-filling, σ_Hall increases up to a maximum value at a doping n_c; it then drops and vanishes below a critical doping value n_c0. The larger the coupling constant J_z, the smaller n_c0. For a fixed J_z, the increase of σ_Hall can be understood from the BC contribution, which is enhanced as the area of the spin-polarized Fermi surface increases. The drop of σ_Hall is a consequence of the sharp decrease of M^AFM_z below n_c, inducing a decrease of the BC. The doping regime with a non-zero AH conductivity gets wider as J_z increases, which results from the enhancement of the mass term M^AFM_z.

Conclusions and Outlook.— We have shown how electronic interactions induce altermagnetism in a non-centrosymmetric system by stabilizing a primary AFM ordering that hosts a secondary one that directly induces the altermagnetic anomalous Hall effect. Specifically, our quantum Monte Carlo calculations on an interacting modified Kane-Mele lattice model reveal a finite temperature phase transition into an altermagnetically ordered state, whose physical features are captured by its effective continuum theory. The interacting model may in principle be implemented with cold atoms in optical lattices, which have been proposed to realize altermagnetism <cit.>. Interestingly, an experimental realization of a non-centrosymmetric 3D altermagnet has recently been reported in GdAlSi, a collinear antiferromagnetic Weyl semimetal <cit.>. This suggests generalizing the lattice model to higher dimensions to determine how electronic correlations generate Berry curvature in altermagnetic semi-metals that may also harbor 3D electronic topology.

We acknowledge fruitful discussions with Paul McClarty, Olena Gomonay, and Libor Šmejkal. We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SUPERMUC-NG at Leibniz Supercomputing Centre (www.lrz.de) (project number pn73xu), as well as the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b133ae. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683. FA, TS, JvdB and ICF thank the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, project-id 390858490). S. H. acknowledges the Institute for Theoretical Solid State Physics at IFW (Dresden) and the Max Planck Institute for the Physics of Complex Systems (Dresden) for kind hospitality and financial support.
§ REFERENCES

[Nagaosa et al.(2022)] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. 82, 1539 (2022).
[Yuan et al.(2020)] L.-D. Yuan, Z. Wang, J.-W. Luo, E. I. Rashba, and A. Zunger, Phys. Rev. B 102, 014422 (2020).
[Šmejkal et al.(2022a)] L. Šmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 031042 (2022).
[Šmejkal et al.(2022b)] L. Šmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 040501 (2022).
[Šmejkal et al.(2022c)] L. Šmejkal, A. H. MacDonald, J. Sinova, S. Nakatsuji, and T. Jungwirth, Nat. Rev. Mater. 7, 482 (2022).
[Guo et al.(2023)] Y. Guo, H. Liu, O. Janson, I. C. Fulga, J. van den Brink, and J. I. Facio, Mater. Today Phys. 32, 100991 (2023).
[Mazin et al.(2023)] I. Mazin, R. González-Hernández, and L. Šmejkal, arXiv:2309.02355 (2023).
[Šmejkal et al.(2020)] L. Šmejkal, R. González-Hernández, T. Jungwirth, and J. Sinova, Sci. Adv. 6, eaaz8809 (2020).
[González-Hernández et al.(2021)] R. González-Hernández, L. Šmejkal, K. Výborný, Y. Yahagi, J. Sinova, T. Jungwirth, and J. Železný, Phys. Rev. Lett. 126, 127701 (2021).
[Feng et al.(2022)] Z. Feng, X. Zhou, L. Šmejkal, L. Wu, Z. Zhu, H. Guo, R. González-Hernández, X. Wang, H. Yan, P. Qin, et al., Nat. Electron. 5, 735 (2022).
[Betancourt et al.(2023)] R. D. G. Betancourt, J. Zubáč, R. Gonzalez-Hernandez, K. Geishendorf, Z. Šobáň, G. Springholz, K. Olejník, L. Šmejkal, J. Sinova, T. Jungwirth, et al., Phys. Rev. Lett. 130, 036702 (2023).
[Bai et al.(2022)] H. Bai, L. Han, X. Y. Feng, Y. J. Zhou, R. X. Su, Q. Wang, L. Y. Liao, W. X. Zhu, X. Z. Chen, F. Pan, X. L. Fan, et al., Phys. Rev. Lett. 128, 197202 (2022).
[Reimers et al.(2023)] S. Reimers, L. Odenbreit, L. Šmejkal, V. N. Strocov, P. Constantinou, A. B. Hellenes, R. J. Ubiergo, W. H. Campos, V. K. Bharadwaj, A. Chakraborty, et al., arXiv:2310.17280 (2023).
[McClarty and Rau(2023)] P. A. McClarty and J. G. Rau, arXiv:2308.04484 (2023).
[Colomés and Franz(2018)] E. Colomés and M. Franz, Phys. Rev. Lett. 120, 086603 (2018).
[Kane and Mele(2005)] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[Haldane(1988)] F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
[Bercx et al.(2017)] M. Bercx, F. Goth, J. S. Hofmann, and F. F. Assaad, SciPost Phys. 3, 013 (2017).
[Assaad et al.(2022)] F. F. Assaad, M. Bercx, F. Goth, A. Götz, J. S. Hofmann, E. Huffman, Z. Liu, F. P. Toldin, J. S. E. Portela, and J. Schwab, SciPost Phys. Codebases 1 (2022).
[Blankenbecler et al.(1981)] R. Blankenbecler, D. J. Scalapino, and R. L. Sugar, Phys. Rev. D 24, 2278 (1981).
[White et al.(1989)] S. White, D. Scalapino, R. Sugar, E. Loh, J. Gubernatis, and R. Scalettar, Phys. Rev. B 40, 506 (1989).
[Assaad and Evertz(2008)] F. Assaad and H. Evertz, in Computational Many-Particle Physics, edited by H. Fehske, R. Schneider, and A. Weiße (Springer, Berlin Heidelberg, 2008), Vol. 739 of Lecture Notes in Physics, pp. 277–356.
[Wu and Zhang(2005)] C. Wu and S.-C. Zhang, Phys. Rev. B 71, 155115 (2005).
[Binder(1981)] K. Binder, Z. Phys. B Condens. Matter 43, 119 (1981).
[Pujari et al.(2016)] S. Pujari, T. C. Lang, G. Murthy, and R. K. Kaul, Phys. Rev. Lett. 117, 086404 (2016).
[Qi et al.(2008)] X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Phys. Rev. B 78, 195424 (2008).
[Graf and Piéchon(2021)] A. Graf and F. Piéchon, Phys. Rev. B 104, 085114 (2021).
[BZ] Note: To carry out the integration over the Brillouin zone (BZ), it is easier to use the mapping to a square BZ and write the Hamiltonian in terms of the variables q_x = 𝐤·𝐚_3 and q_y = 𝐤·𝐚_1. The Dirac points are at 𝐪_D = ±(2π/3, 2π/3). The δ_i vectors can be expressed as δ_i = (1/3) ε_ijk (𝐚_k - 𝐚_j) <cit.>.
[Das et al.(2023)] P. Das, V. Leeb, J. Knolle, and M. Knap, arXiv:2312.10151 (2023).
[Nag et al.(2023)] J. Nag, B. Das, S. Bhowal, Y. Nishioka, B. Bandyopadhyay, S. Kumar, K. Kuroda, A. Kimura, K. G. Suresh, and A. Alam, arXiv:2312.11980 (2023).
[Sticlet et al.(2012)] D. Sticlet, F. Piéchon, J.-N. Fuchs, P. Kalugin, and P. Simon, Phys. Rev. B 85, 165456 (2012).
http://arxiv.org/abs/2312.16290v1
{ "authors": [ "Toshihiro Sato", "Sonia Haddad", "Ion Cosma Fulga", "Fakher F. Assaad", "Jeroen van den Brink" ], "categories": [ "cond-mat.str-el", "cond-mat.mes-hall", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.str-el", "published": "20231226190029", "title": "Altermagnetic anomalous Hall effect emerging from electronic correlations" }
Bayesian Sensor Placement for Multi-source Localization of Pathogens in Wastewater Networks

Kalvik Jakkala^1 and Srinivas Akella^1
This document is the result of a research project funded in part by a legislative allocation to UNC Charlotte under the CARES Act. ^1The authors are with the Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, USA. Email: {kjakkala, sakella}@charlotte.edu. Corresponding author: Kalvik Jakkala
January 14, 2024
==========================================================================================================================================================================================================================================================================================================================================================================================

Wastewater monitoring is an effective approach for the early detection of viral and bacterial disease outbreaks. It has recently been used to identify the presence of individuals infected with COVID-19. To monitor large communities and accurately localize buildings housing infected individuals with a limited number of sensors, one must carefully choose the sampling locations in wastewater networks. We also have to account for concentration requirements on the collected wastewater samples to ensure reliable virus presence test results. We model this as a sensor placement problem. Although sensor placement for source localization arises in numerous problems, most approaches use application-specific heuristics and fail to consider multiple source scenarios. To address these limitations, we develop a novel approach that combines Bayesian networks and discrete optimization to efficiently identify informative sensor placements and accurately localize virus sources. Our approach also takes into account concentration requirements on wastewater samples during optimization. Our simulation experiments demonstrate the quality of our sensor placements and the accuracy of our source localization approach. Furthermore, we show the robustness of our approach to discrepancies between the virus outbreak model and the actual outbreak rates.

Wastewater-based epidemiology, Bayesian networks, Source localization, Sensor placement

§ INTRODUCTION

There is growing evidence that the frequency of pandemics, such as the COVID-19 pandemic, is increasing <cit.>. It is, therefore, crucial to identify infected individuals before they show symptoms and treat them to curb the spread of such outbreaks. Wastewater-based epidemiology has proven to be an effective approach for detecting viral and bacterial outbreaks (<cit.>), including influenza, poliovirus, respiratory syncytial virus, and Escherichia coli infections. Recent studies have demonstrated that viruses can be detected in human waste days before an infected individual exhibits symptoms <cit.>. Regular monitoring of wastewater networks for active traces of harmful pathogens can aid in identifying outbreak locations and their scale. A few case studies (<cit.>) have shown that monitoring wastewater networks is a reliable and non-intrusive way to detect COVID-19 outbreaks. Although autosamplers can automate the collection of wastewater samples, they can be costly. Additionally, the sampled wastewater must be extracted from the autosamplers and processed in a lab. As such, to keep operating costs of wastewater monitoring in check, it is crucial to minimize the number of autosamplers.
It is also important to develop non-intrusive ways to quickly identify the source location(s) of pathogens that are indicators of disease outbreaks. Moreover, it is challenging to accurately detect viruses in diluted wastewater samples. Thus, we must optimize the placement locations of sensors (i.e., autosamplers) to ensure good source localization accuracy and that the sensors only collect wastewater samples of sufficiently high concentration. However, this is a non-trivial problem that can be formulated in multiple ways. Indeed, only a few authors have recently begun to address automating sensor placement for wastewater-based epidemiology <cit.>. Moreover, these approaches predominantly formulate the problem as either an area coverage problem or a probabilistic inference problem. Some authors have also incorporated graph algorithms, and <cit.> even considered a binary Bayesian network formulation. Nonetheless, these approaches have yet to leverage the full potential of Bayesian networks. In addition, given the nature of the problem, it is an insurmountable challenge to obtain any significant amount of real-world data to optimize and validate the sensor placements. This is because one would have to deploy sensors during an actual outbreak at every possible sensing location to obtain the ground truth data. Even if such data were available, finding the optimal subset of sensor placement locations is still non-trivial, as the problem is NP-hard <cit.>. We model the automation of wastewater-based epidemiology as a sensor placement problem. Our key contributions are summarized below:
* We present a novel Bayesian network-based formulation for the automation of wastewater-based epidemiology. Our approach can accommodate complex causal models with both discrete and continuous random variables while still being computationally feasible.
* Our method presents computationally efficient optimization objectives that can be solved using greedy algorithms to obtain the solution sensor placements for accurate virus source localization, while also ensuring high-concentration wastewater samples are collected at the sensing locations.
* To validate our approach, we conduct experiments based on a real-world wastewater network with real water flow, population, and pathogen statistics, to demonstrate its accuracy and robustness.
§ RELATED WORK
There has been an active response to COVID-19 in automation and robotics, involving the use of robots for disinfection, patient monitoring, deliveries, lab automation, and telemedicine (see <cit.> for comprehensive surveys). However, few authors have addressed the automation of wastewater-based epidemiology. Our problem is akin to that studied by Kempe et al. <cit.>, who focused on influence maximization in social networks. In their work, the authors explored social influence networks and devised an approach to identify a subset of nodes. When influenced to take a specified action, this subset would yield the maximal number of nodes in the entire network adopting the same action. Similarly, our interest lies in garnering maximal information from the graph through the monitoring of a small number of nodes. However, our objective extends beyond mere monitoring; we aim to pinpoint information sources and model constraints on the flow of information.
Berry et al. <cit.> developed a mixed-integer programming formulation for sensor placement in water distribution networks. This method identified sensor locations to detect contaminants in water as quickly as possible. Leskovec et al.
<cit.> also investigated the sensor placement problem in water distribution networks, introducing a submodular objective. Their approach efficiently optimized this objective, providing a solution with provable approximation guarantees. However, both methodologies focused on water distribution networks, which have a significantly different structure compared to wastewater networks. Moreover, neither approach addressed contaminant source localization.Spinelli et al. <cit.> developed an approach for source localization in physical contact networks to curb the spread of epidemics. The method took into account disease transmission times between patients to identify patient zero. However, the approach is limited to scenarios involving a single source.Jiang et al. <cit.> proposed a probabilistic approach to disease outbreak detection and prediction by examining specific variables of interest that could indicate an outbreak in a given community. However, the method did not consider identifying the outbreak's source and was limited to predicting only the presence of an epidemic and its scale.In another work, Jiang et al. <cit.> developed a probabilistic spatial scan statistic that considered indicator variables to identify the presence and location of an outbreak. However, the localization approach assumed a rectilinear partitioning of the region of interest and required access to data from every sub-region. The source localization problem also appears frequently in other domains, including underwater localization <cit.>, wireless network user localization <cit.>, and EEG device signal localization <cit.>. However, most solutions to such problems involve the development of application-specific metrics and optimization objectives. We also note that our considered problem is closely related to tracking sources of pollutants in rivers <cit.>.Peccia et al. <cit.> studied the statistics of the COVID-19 virus in wastewater samples to identify general trends of community infection rates. Gibas et al. <cit.> examined the feasibility of sampling wastewater in a university to identify COVID-19 outbreaks and demonstrated the ability to detect asymptomatic individuals in a dormitory. However, these studies did not consider sensor placement optimization.Recently, sensor placement approaches specifically designed for wastewater-based epidemiology have been presented in various studies <cit.>. These approaches concentrate on optimizing sensor placement to maximize population coverage while minimizing the overlap in area coverage for each sensor, achieved through discrete integer programs.Nourinejad et al. <cit.> and Calle et al. <cit.> were among the first to develop sensor placement approaches for wastewater-based epidemiology. They presented probabilistic approaches to iteratively locate virus hotspots. Additionally, they separately utilized the graph structure of wastewater networks to optimize sensor placement locations through discrete optimization.More recently, Speziali et al. <cit.> formulated the wastewater sensor placement problem as an optimization task in binary Bayesian networks. However, they formulated their sensor placement approach to maximize mutual information, a computationally expensive task. Indeed, Speziali et al. 
resorted to quantum computing to address the computational challenges in solving their sensor placement problem. Despite these advances, researchers in wastewater-based epidemiology have yet to fully harness Bayesian approaches, especially Bayesian networks <cit.>, capable of leveraging graph structures to model causal relationships in the problem. By employing efficient message-passing-based inference methods for Bayesian networks, we can reduce computation costs and develop superior source localization methods. Furthermore, our approach is not confined to simple binary networks; instead, it accommodates complex models involving multiple random variables, including continuous variables. This formulation can be solved using easily accessible traditional computers. Additionally, we present a novel Bayesian network formulation which accommodates wastewater concentration threshold requirements effectively.
§ SENSOR PLACEMENT FOR SOURCE LOCALIZATION IN WASTEWATER NETWORKS: PROBLEM STATEMENT
We are given a wastewater network modeled as a graph G=(V, E) with buildings and utility access points[Also referred to as maintenance holes, cleanouts, and sewer holes.] modeled as vertices V and pipelines as directed edges E whose direction is identified by the wastewater flow. Wastewater networks have a (reverse) directed tree structure with wastewater flowing from the leaf nodes (i.e., the buildings) to the root node (i.e., the wastewater treatment plant). Given this graph structure, only buildings are associated with leaf nodes l ∈ L, and only utility access points are associated with non-leaf nodes j ∈ J in the graph. We use upper case letters to represent a set of nodes, lower case letters to refer to the nodes in a set, and subscripts to indicate a specific node. We are given k sensors (i.e., the wastewater autosamplers) that can be deployed to monitor the wastewater network. We must identify a set of nodes W ⊂ V to monitor, i.e., select autosampler placements to accurately detect and localize virus outbreaks at one or more buildings in the wastewater network. Note that in the real world, the sensing nodes only collect the wastewater, which is then sent to a lab to be analyzed to determine whether the samples are positive or negative for any viruses. Additionally, when wastewater from multiple sources is combined in pipelines, the wastewater is diluted, resulting in inaccurate virus detection test results. Therefore, we must also account for any sample concentration requirements on the wastewater while determining the solution autosampler placements. We define a virus outbreak as a positive test for the virus of interest from the wastewater of any building. Deploying sensors at every building in the wastewater network would make detecting and localizing virus outbreaks a trivial task. However, as we are limited to only k sensors (where k << |L|), the problem of optimizing their placement is NP-hard <cit.>, making it challenging to solve.
§ APPROACH
We first describe our method for reducing the wastewater network graph. Then, we explain how we construct a binary Bayesian network from the reduced wastewater network graph. Next, we present our approach for using Bayesian networks to localize virus sources and optimizing the sensor placements for accurate source localization. Finally, we detail our approach to modeling more complex Bayesian networks with continuous random variables and optimizing sensor placements to meet wastewater concentration requirements.
Figure <ref> shows an illustration of our approach.
§.§ Wastewater Network Graph Reduction
An inherent property of wastewater networks is that a virus outbreak can occur only at the graph's leaf nodes (i.e., the buildings). Furthermore, since the wastewater network graph has a (reverse) directed tree structure, we can leverage these two properties to reduce the number of nodes in the graph without losing any significant information. Indeed, we can exclude any non-leaf node with a single parent node, as shown in Figure <ref>. This is because placing a sensor at either the current node j_2 or its parent node j_1 would result in sensing the same wastewater that flows into node j_1 from its parent nodes (l_1 and l_2). Since choosing either node (j_1 or j_2) results in the same solution, and there is no benefit from sensing at both nodes, we discard the current node j_2, which is a non-leaf node with a single parent. Note that after discarding the node j_2, we connect the parent node j_1 to the child node(s) of the discarded node j_2. The pseudocode for the graph reduction approach is shown in Algorithm <ref>. Such a reduction shrinks the optimization problem that must be solved to find the ideal sensor placements without diminishing the final solution quality. It also reduces the computation cost of our source localization approach.
§.§ Binary Bayesian Network
One of the main insights of this article is that we can utilize the inherent graph structure of wastewater networks to construct Bayesian graph networks. Bayesian graph networks incorporate random variables associated with each node in the graph and utilize the graph structure to model causal relationships among the random variables. We can leverage Bayesian networks to calculate conditional probabilities that can be employed to establish an optimization objective for sensor placement and also predict the most probable source(s) of a virus outbreak. We build a Bayesian graph B from the wastewater network graph G. In the Bayesian graph, we associate a Boolean random variable b with each node. The value of b indicates whether the wastewater flowing through that node contains viruses or not, where True indicates the presence of viruses and False indicates their absence. To construct the Bayesian graph, we must also define the distribution associated with each random variable. For the leaf nodes l ∈ L, the Boolean random variables are treated as Bernoulli distributed variables, and we parametrize the distributions to model the probability of a virus outbreak at each building. The random variables b at the non-leaf nodes j ∈ J are computed using a deterministic OR-gate operation over the random variables of their parent nodes. Although the junction nodes j ∈ J in real-world wastewater networks might not behave as OR-gates, this simplifying assumption allows us to efficiently compute the conditionals using variable elimination and message passing techniques <cit.>. Indeed, given our directed tree graph structure and Boolean random variables, our Bayesian graph network is similar to the Noisy-OR Bayesian network <cit.>, which is amenable to efficient Bayesian inference. In addition, our experiments show that our approach works well despite our simplifying assumption. We assign a conditional probability distribution (CPD) table <cit.> to each non-leaf node to store the result of the OR-gate operation (minimal code sketches of the reduction and of the OR-gate propagation are given below).
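To make the two operations above concrete, the following minimal Python sketch (not the authors' released implementation) shows the reduction of Algorithm <ref> and the deterministic OR-gate propagation, assuming a networkx DiGraph whose edges point along the flow direction; the toy topology and node names are illustrative.

import networkx as nx

def reduce_graph(G, protected):
    # Algorithm sketch: repeatedly drop a non-protected node with a single
    # upstream parent, reconnecting that parent to the node's children.
    H = G.copy()
    changed = True
    while changed:
        changed = False
        for n in list(H.nodes):
            parents = list(H.predecessors(n))
            if n not in protected and len(parents) == 1:
                for child in list(H.successors(n)):
                    H.add_edge(parents[0], child)
                H.remove_node(n)
                changed = True
                break
    return H

def propagate(G, leaf_states):
    # Deterministic OR-gate: a non-leaf node is positive iff any upstream
    # parent carries virus-laden wastewater.
    state = dict(leaf_states)
    for n in nx.topological_sort(G):
        if n not in state:
            state[n] = any(state[p] for p in G.predecessors(n))
    return state

# toy network: buildings l1, l2 feed junction j1; j1 feeds j2; j2 feeds the root
G = nx.DiGraph([("l1", "j1"), ("l2", "j1"), ("j1", "j2"), ("j2", "root")])
H = reduce_graph(G, protected={"l1", "l2", "root"})
print(sorted(H.edges()))                        # [('j1', 'root'), ('l1', 'j1'), ('l2', 'j1')]
print(propagate(H, {"l1": True, "l2": False}))  # j1 and root turn True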
The CPD table of each node is filled by iterating over all possible instantiations of the states of the current node and its parent nodes. Each instantiation is assigned a probability of 100% if the instantiation is possible with an OR-gate and set to 0% otherwise, as shown in Figure <ref>.
§.§ Source Localization
We can now use our Bayesian graph B to localize the source of virus outbreaks. Assuming we already selected a set of nodes W ⊂ V to deploy our sensors (i.e., the wastewater autosamplers), we can use Bayesian conditioning <cit.> to predict the virus outbreak state A_l(W_b) of each leaf node l ∈ L (i.e., each building) given the current virus presence state of the sensing nodes W_b:
A_l(W_b) = True if P(b_l = True | W_b) > 50%, and False otherwise.
Our Bayesian graph B is informed about the structure of the wastewater network from the directed graph G, and about the prior virus outbreak statistics of each leaf node from the associated Bernoulli distributions. Therefore, evaluating the most likely state A_l(W_b) of each building by conditioning on the current state of the sensor nodes W_b leverages all the information available to us. In addition, since our Bayesian model is a binary graph, our source localization approach is still computationally feasible.
§.§ Sensor Placement
Our sensor placement approach leverages the Bayesian graph B to formulate an optimization objective that maximizes the source localization accuracy for a given scenario S. This accuracy is defined as the fraction of buildings whose virus outbreak state is correctly predicted. In this context, a scenario S represents a hypothetical virus outbreak indicating which of the leaf nodes (i.e., buildings) in the wastewater network are currently experiencing an outbreak. However, we anticipate that in the real world, most scenarios involve only a small fraction of buildings experiencing a virus outbreak at any given time. As a result, even if our model predicts that all buildings are virus-free, the accuracy may still be high, as only a few buildings experiencing an outbreak are mislabeled. Therefore, optimizing and evaluating our sensor placement solutions using accuracy alone may not be the most suitable approach. As a result, we also consider precision, recall, and F1 scores as alternative evaluation metrics to determine the best optimization metric in the experiments section. In the remainder of this article, we will refer to our optimization metric as the score function, implying that it can be any of the aforementioned metrics. To ensure that our sensor placement solutions W are not biased towards a single virus outbreak scenario S, we sample multiple scenarios, each consisting of one random sample (True/False) from the Bernoulli distribution associated with each of the leaf nodes L. Optimizing the score function on the sampled scenarios ensures that our solution sensor placements W account for the probability of a virus outbreak in each building. We refer to this objective as the score objective:
max_{W ⊂ V, |W| ≤ k} 𝔼_S[ score(A(W_b), S) ].
Here, A(W_b) represents our predicted virus outbreak state for all the buildings, and S represents a virus outbreak scenario. Note that we can optimize the sensor placements by directly maximizing the score function since we have a discrete combinatorial optimization problem. Such optimization problems do not require differentiable operations.
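As a hedged illustration of the score objective, the sketch below estimates 𝔼_S[score(A(W_b), S)] by Monte Carlo for a fixed placement W on a toy network: scenarios are drawn from the leaf Bernoulli priors, the posterior P(b_l = True | W_b) is obtained by brute-force enumeration over leaf configurations (feasible only for small |L|; the message-passing inference discussed above is the scalable route), and the F1 score is used. All priors, names, and the placement are illustrative.

import itertools, random
import networkx as nx

G = nx.DiGraph([("l1", "j1"), ("l2", "j1"), ("l3", "j2"), ("j1", "j2")])
leaves = ["l1", "l2", "l3"]
prior = {"l1": 0.05, "l2": 0.20, "l3": 0.10}   # illustrative Bernoulli parameters
order = list(nx.topological_sort(G))

def propagate(leaf_states):
    state = dict(leaf_states)
    for n in order:                             # OR-gate junctions, in topological order
        if n not in state:
            state[n] = any(state[p] for p in G.predecessors(n))
    return state

def posterior(readings):
    # P(b_l = True | sensor readings), enumerating all 2^|L| leaf configurations
    num, den = {l: 0.0 for l in leaves}, 0.0
    for conf in itertools.product([False, True], repeat=len(leaves)):
        ls = dict(zip(leaves, conf))
        full = propagate(ls)
        if any(full[w] != v for w, v in readings.items()):
            continue                            # inconsistent with the evidence
        p = 1.0
        for l in leaves:
            p *= prior[l] if ls[l] else 1.0 - prior[l]
        den += p
        for l in leaves:
            if ls[l]:
                num[l] += p
    return {l: num[l] / den for l in leaves}

def f1(pred, truth):
    tp = sum(pred[l] and truth[l] for l in leaves)
    fp = sum(pred[l] and not truth[l] for l in leaves)
    fn = sum(truth[l] and not pred[l] for l in leaves)
    return 2.0 * tp / (2 * tp + fp + fn) if tp else 0.0   # convention: F1 = 0 when tp = 0

def expected_score(W, n_scenarios=2000, seed=0):
    rng, total = random.Random(seed), 0.0
    for _ in range(n_scenarios):
        S = {l: rng.random() < prior[l] for l in leaves}  # sampled outbreak scenario
        full = propagate(S)
        post = posterior({w: full[w] for w in W})         # what the sensors report
        pred = {l: post[l] > 0.5 for l in leaves}         # the >50% decision rule above
        total += f1(pred, S)
    return total / n_scenarios

print(expected_score(W=["j1", "l3"]))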
However, the above optimization problem is NP-hard <cit.>, making it challenging to find the globally optimal solution. As a result, we use the naive greedy algorithm <cit.> to find the solution sensor placements. The greedy algorithm selects one sensing node at a time until the cardinality constraint k is met. Each new sensing node is selected by computing the increments in the optimization objective upon adding each candidate node to the current solution set and selecting the node that results in the largest increment:
W ← W ∪ { argmax_{v ∈ V \ W} [ ℱ(W ∪ {v}) − ℱ(W) ] }.
Here, ℱ is the objective function (Equation <ref>). A drawback of optimizing the score objective is that it results in sensors placed only at the leaf nodes with the highest probability of an outbreak. Since we have a limited number of sensors, usually k << |L|, using such a solution entails ignoring outbreaks in buildings with a lower probability of an outbreak. To address the issue mentioned above, we add an indicator function to the objective. This function, known as the coverage indicator function 1_cov, returns a value of 1 if a scenario S can be detected with the current sensor placements W, and 0 if the scenario cannot be detected. To evaluate the indicator function, we check if a path exists from the buildings experiencing a virus outbreak to the current sensor placements W. We refer to this objective as the Coverage objective, which can be formulated as follows:
max_{W ⊂ V, |W| ≤ k} 𝔼_S[ score(A(W_b), S) + 1_cov(W, S) ],
1_cov(W, S) = 1 if scenario S can be detected with sensors at W, and 0 otherwise.
By optimizing the Coverage objective, we can obtain solutions with sensing nodes capable of detecting outbreaks even at buildings with a low outbreak probability, although with a reduced source localization accuracy.
Adding sensing nodes: The methods discussed so far have only considered scenarios where there are no preexisting sensors deployed in the wastewater network. However, in practice, one might want to add additional sensing nodes to improve the source localization accuracy. We can achieve this by maximizing the following objective function, where W_curr represents the set of preexisting nodes in the wastewater network, and V \ W_curr represents the set of nodes in V that are not in W_curr:
max_{W ⊂ V \ W_curr, |W_curr ∪ W| ≤ k} 𝔼_S[ score(A({W_curr ∪ W}_b), S) + 1_cov(W_curr ∪ W, S) ].
Removing sensing nodes: One might also want to remove a specified number of sensing nodes from a wastewater network. This can happen if we want to monitor fewer sensors to reduce costs. We can achieve this with our approach by maximizing the following objective:
max_{W ⊂ W_curr, |W_curr \ W| ≤ k} 𝔼_S[ score(A({W_curr \ W}_b), S) + 1_cov(W_curr \ W, S) ].
Note that both the problems of adding and removing sensing nodes can be solved using the greedy algorithm. However, to remove samplers, the greedy algorithm needs to be modified. In this variant, the algorithm selects the node that contributes the smallest increment to the total objective in each iteration, unlike the sensor addition variant, which picks the node with the largest increment.
§.§ Concentration Requirements
The methods discussed above assume that sensing nodes always report the correct state of the nodes, i.e., whether the wastewater passing through them contains viruses. However, this may not be the case in the real world. The qRT-PCR test, commonly used to analyze wastewater samples for traces of viruses, is sensitive to the concentration of the samples.
Therefore, it is crucial to ensure that the concentration, measured by the number of RNA virus copies per liter of wastewater, is above a minimum threshold to obtain reliable sensing results. However, if one were to model all variables that influence the concentration of wastewater samples, such as the volume of wastewater and the number of virus copies, in the Bayesian graph B, the conditional probability P(b_l|W) used for sensor placement and source localization would be computationally intractable. Therefore, we construct a separate auxiliary Bayesian graph C, which models the variables that affect virus concentration (as shown in Figure <ref>). The auxiliary graph C allows us to efficiently compute the virus concentration at any node in the wastewater network, given the virus concentration at the leaf nodes l ∈ L that are experiencing an outbreak in a scenario S. The auxiliary graph C is initialized with the wastewater network's graph structure, similar to the binary Bayesian graph B. However, instead of using Boolean random variables to model each node's virus outbreak state, the auxiliary graph C employs multiple random variables to represent the virus concentration at each node. Our approach considers the volume of wastewater flow f, the number of infected individuals n̂, and the total number of virus copies n for each building. Note that this approach can accommodate an even more sophisticated concentration model. We modeled the wastewater flow volume f of each building using a truncated Gaussian distribution, truncated at 0 to ensure that only positive flows are sampled. The number of infected individuals n̂ was modeled using a Poisson distribution, while the number of total virus copies n at each building was computed by sampling the number of virus copies shed by each infected individual from a uniform distribution and then taking the sum. The wastewater virus concentration c_l at each leaf node l ∈ L can be computed using the ratio of the total number of virus copies n to the wastewater flow volume f. To determine the concentration c_j at each non-leaf node j ∈ J, we use the conservation of mass equation shown below:
c_j = (∑_{i ∈ parent(j)} n_i) / (∑_{i ∈ parent(j)} f_i).
The concentration c calculated at each node is then passed to an updated threshold indicator function 1_thresh, which enforces the minimum concentration requirement in our sensor placement optimization objective. We refer to this new optimization objective as the thresholded coverage objective:
max_{W ⊂ V, |W| ≤ k} 𝔼_S[ score(A(W_b), S) + 1_thresh(W, S) ],
1_thresh(W, S) = 1 if scenario S can be detected with sensors at W and every wastewater sample satisfies the concentration threshold, and 0 otherwise.
The threshold indicator function 1_thresh (Eq. <ref>) is similar to the coverage indicator function 1_cov (Eq. <ref>), but it also considers the concentration requirements for detecting a scenario. That is, even if a path exists from the buildings experiencing a virus outbreak to the current sensor placements W, the threshold indicator function 1_thresh will return 1 (i.e., True) only if the concentrations at the sensing nodes that detect the scenario meet the required concentration threshold. By optimizing the thresholded coverage objective, we can obtain sensor placements that not only enable accurate source localization but also ensure that the concentration of viruses in wastewater samples collected by the autosamplers is above the required threshold for reliable virus detection.
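A minimal sketch of the concentration bookkeeping in the auxiliary graph C follows; the flows, populations, and per-building outbreak rate are illustrative stand-ins (not the paper's calibrated values), and the Gaussian truncation is approximated by clipping at zero.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.DiGraph([("l1", "j1"), ("l2", "j1"), ("l3", "j2"), ("j1", "j2")])
leaves = ["l1", "l2", "l3"]
population = {"l1": 120, "l2": 300, "l3": 80}     # illustrative populations

flow, copies = {}, {}
for l in leaves:
    # truncated Gaussian flow, approximated here by clipping near zero
    flow[l] = max(rng.normal(2000.0, 400.0), 1e-6)
    infected = rng.poisson(0.01 * population[l])  # assumed outbreak rate
    # total copies shed: sum of per-individual Uniform(2.4e6, 4e10) draws
    copies[l] = rng.uniform(2.4e6, 4e10, size=infected).sum()

# push totals downstream, then apply the conservation-of-mass rule above
for n in nx.topological_sort(G):
    if n not in leaves:
        parents = list(G.predecessors(n))
        copies[n] = sum(copies[p] for p in parents)
        flow[n] = sum(flow[p] for p in parents)

threshold = 4.8e5                                 # copies per liter, as in the experiments
for n in G.nodes:
    c = copies[n] / flow[n]
    print(f"{n}: concentration = {c:.3g}, above threshold: {c >= threshold}")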
It is worth noting that our approach still benefits from the computational efficiency of the Boolean OR-gate Bayesian network B used to compute P(S|W), while also accounting for the complex concentration requirements through the auxiliary Bayesian network C and Eq. <ref>. Thus, our approach overcomes the limitations of a computationally intractable Bayesian graph that models all variables together. Note that the Bernoulli distributed random variable b_l in the Bayesian graph B, which indicates if an infection outbreak occurred at a building, is analogous to the Poisson distributed random variable n̂ in the auxiliary Bayesian graph C, which represents the number of infected individuals. Therefore, when using the auxiliary Bayesian graph C to model the virus concentration, if we need to sample outbreak scenarios S to optimize our sensor placements, we sample them using the Poisson distributions associated with the random variable n̂ instead of the Bernoulli distributions associated with the random variable b. But since the scenarios S sampled from Poisson distributions would indicate the number of infected individuals instead of the binary state of an outbreak, we apply the following min operation on each element of S, i.e., the sampled n̂ at each building, to ensure that the scenario is binary:
f(n̂) = min(1, n̂).
Converting each scenario S into a binary vector enables us to use the binary Bayesian graph B to evaluate P(b_l|W) for sensor placement and source localization. However, it is important to note that the threshold indicator function 1_thresh is still computed using only the auxiliary Bayesian graph C. Additionally, when computing the wastewater concentrations, we sample the number of virus copies n at a leaf node l only if the corresponding building is indicated to be experiencing a virus outbreak in the scenario S. However, we always sample the wastewater flow volume f from all buildings to account for wastewater dilution.
§ SIMULATION EXPERIMENTS
We tested our approach by analyzing a subgraph of our university's wastewater network consisting of residential buildings[Some aspects of the wastewater network have been obfuscated for security and confidentiality.]. Initially, we constructed the reduced graph representation of the wastewater network, denoted as G, by following the approach outlined in Section <ref>. The original wastewater network graph contained 35 vertices and 34 edges. However, we were able to reduce it to a 20-vertex, 19-edge graph using our graph reduction approach (shown in Figure <ref>). This corresponds to a 43% reduction in the number of vertices and a 44% reduction in the number of edges of the graph. Out of the 20 vertices, 12 represented leaf nodes, which corresponded to buildings. Next, we utilized the wastewater network graph G to construct two Bayesian graph networks: B to model virus presence states, and C to model wastewater concentrations. Our methodology for constructing B and C is described in Section <ref> and Section <ref>, respectively. We modeled the wastewater flow volume f of each building in the Bayesian graph C using truncated Gaussian distributions, which were parameterized based on the buildings' historical monthly wastewater flow rates. The Poisson distribution was used to model the number of infected individuals, and the mean of this distribution was set to be proportional to the number of students assigned to the corresponding building.
Lastly, we bounded the uniform distribution over the daily number of virus copies shed in the wastewater by each infected individual between 2.4×10^6 and 4×10^10, based on the findings of Foladori et al. <cit.>. To generate hypothetical scenarios, we utilized the Poisson distributions in the auxiliary graph C. We generated a total of 1000 scenarios, some of which modeled simultaneous virus outbreaks in multiple buildings. These scenarios were binarized as described in Section <ref>. We used these scenarios in all of our experiments, unless specified otherwise, to maintain consistency in our benchmark results.
§.§ Optimization Metric Benchmark
We began by benchmarking the score functions in Equation <ref> used to determine sensor placements. As discussed in Section <ref>, accuracy is not the most effective optimization metric for our approach. Therefore, we employed the greedy algorithm to optimize sensor placements using four score functions: accuracy, precision, recall, and F1. We set the virus concentration in the threshold indicator function (Equation <ref>) to 4.8×10^5 and generated solutions for 6 sensor placements. Figure <ref> displays the quality of the sensor placements obtained by optimizing with each score function. We evaluated the solution placements using all four score functions and reported them, along with the fraction of scenarios that could be covered (Equation <ref>) using the solution placements. As expected, we achieved the best results for each score function when the optimization metric matched the evaluation metric. In a real-world scenario, the appropriate optimization function can be selected based on the fraction of false positive and true negative source localization predictions that can be accommodated. For the remaining experiments, we utilized only the F1 score as the optimization score function, since optimizing this score provided us with good precision and recall during evaluation, even though they were not directly optimized.
§.§ Optimizer Benchmark and Submodularity
In addition to using different score functions, our approach also allows for the use of different optimizers. We benchmarked four optimizers—naive <cit.>, lazy <cit.>, approximate-lazy <cit.>, and sample <cit.>. We generated solutions for six sensor placements using a virus concentration of 4.8×10^5 in the threshold indicator function (Equation <ref>); the results are shown in Figure <ref>. The lazy, approximate-lazy, and sample optimizers offer faster solutions than the naive optimizer without sacrificing quality, and provide a near-optimal (1-1/e) approximation factor guarantee. This is because they assume that the optimization objective is submodular <cit.>. We observe that all of these approaches perform similarly to the naive optimizer, which suggests that our optimization objective is submodular.
§.§ Weighted Sum Benchmark
In our previous experiments, we used an unweighted sum of the score function and the indicator function value in our optimization objective (Equation <ref>). However, in a real-world scenario, it may be necessary to prioritize the score function or the scenario coverage (with the indicator function). As such, we introduce a weight term λ to take a weighted sum of the score and indicator function values during optimization:
max_{W ⊂ V, |W| ≤ k} 𝔼_S[ λ score(A(W_b), S) + (1 − λ) 1_thresh(W, S) ].
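The weighted objective above can be optimized with the same naive greedy rule introduced earlier; the self-contained sketch below uses a toy coverage-style objective as a stand-in for the actual score and indicator terms (all names and numbers are illustrative).

def greedy_placement(candidates, k, objective):
    # naive greedy rule: add, one at a time, the candidate with the largest
    # marginal gain in the (weighted) objective
    W = []
    for _ in range(k):
        base = objective(W)
        gains = {v: objective(W + [v]) - base for v in candidates if v not in W}
        W.append(max(gains, key=gains.get))
    return W

# toy example: each sensing node covers a set of buildings; a scenario is the
# set of buildings experiencing an outbreak
covers = {"j1": {"l1", "l2"}, "l1": {"l1"}, "l2": {"l2"}, "l3": {"l3"}}
scenarios = [{"l1"}, {"l2"}, {"l1", "l3"}, {"l3"}]

def objective(W, lam=0.5):
    U = set().union(*[covers[w] for w in W])
    frac = [len(S & U) / len(S) for S in scenarios]        # score-like term
    full = sum(f == 1.0 for f in frac) / len(scenarios)    # indicator-like term
    return lam * sum(frac) / len(frac) + (1 - lam) * full

print(greedy_placement(list(covers), k=2, objective=objective))  # ['j1', 'l3']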
To illustrate the impact of different λ values, we benchmarked it by generating solutions using various λ values, and the results are presented in Figure <ref>. We utilized the naive greedy optimizer <cit.>, set the virus concentration in the threshold indicator function (Equation <ref>) at 4.8×10^5, and produced solutions for 6 sensor placements. We observe from the plot that optimizing solely for coverage using the indicator function (left side) yields high score values, but the scenario coverage is somewhat reduced. Conversely, optimizing only for the score function on the right side of the plot improves the coverage, albeit with a decrease in the score values.
§.§ Concentration Threshold Benchmark
We also benchmarked our solution quality with different concentration values in the threshold indicator function (Equation <ref>). We used the naive greedy optimizer <cit.> and generated solutions for 6 sensor placements. Figure <ref> shows our results. As we had anticipated, using low concentration thresholds enables us to detect a greater portion of the outbreak scenarios. This is because the sensor placements can be closer to the root node, where all wastewater ultimately flows. Consequently, detecting most virus outbreaks is feasible, as the low virus concentration is not an issue. However, as we increase the concentration threshold, sensors must be repositioned closer to the leaf nodes (i.e., buildings) to ensure that the wastewater samples contain high virus concentrations. As a result, we would require more sensors to cover all the buildings. Nevertheless, this has the added benefit of improved source localization, as each sensor is allocated to a smaller subgraph with fewer leaf nodes. Figure <ref> illustrates the solution placements on our university's wastewater network graph. We observe that the solution generated without a concentration threshold (Figure <ref>(a)) provides inadequate coverage (55.28%), as the wastewater at the root node (i.e., node 19) is too diluted to produce reliable virus detection results. However, the solution generated with a threshold of 4.8×10^5 virus copies per liter (Figure <ref>(b)) delivers substantially improved coverage (74.51%).
§.§ Detection Threshold Benchmark
In another experiment, we evaluated the impact of varying the virus detection threshold. To determine the virus outbreak state of each building, we thresholded the state probabilities (Equation <ref>), i.e., we label the state as True if the probability is above the threshold, even if the False probability is higher than the True probability. Figure <ref> illustrates the solution quality obtained using different detection thresholds. For all evaluations in this benchmark, we used the same sensor placements acquired in Section <ref> with the F1 score objective. The sensor placements consisted of 6 sensors and were optimized using the naive greedy optimizer <cit.>. Note that the zero threshold corresponds to using the original argmax operation (Equation <ref>) to determine the state of each building. The results demonstrate the classic inverse relation between precision and recall. Precision heavily penalizes the score if we predict a building with a False virus outbreak state as True (false positives). On the other hand, recall allows us to make the same prediction without being penalized, but instead, it penalizes the score if we predict a building with a True virus outbreak state as False (false negatives); a small numerical illustration of this trade-off is sketched below.
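The following toy sketch sweeps the detection threshold over fabricated posterior probabilities; the numbers are illustrative, not results from the paper.

import numpy as np

# illustrative posteriors P(b_l = True | W_b) and ground-truth outbreak states
post  = np.array([0.05, 0.30, 0.55, 0.10, 0.70, 0.20])
truth = np.array([False, True, True, False, True, False])

for tau in (0.10, 0.25, 0.50, 0.75):
    pred = post > tau
    tp = np.sum(pred & truth); fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    prec = tp / (tp + fp) if tp + fp else 1.0   # convention when nothing is flagged
    rec  = tp / (tp + fn) if tp + fn else 1.0
    print(f"tau = {tau:.2f}  precision = {prec:.2f}  recall = {rec:.2f}")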
As such, we observe that increasing the detection threshold initially improves recall, as it increases the fraction of true positives at the cost of increased false positives. However, beyond a threshold of 0.25, it improves precision instead, as a larger fraction of predictions become true positives but with increased false negatives.
§.§ Random Graph Benchmark
The experiments we conducted previously focused on a subgraph of our university's wastewater network, allowing us to isolate and study the effects of each variable of interest. In this experiment, we aimed to test the generalizability of our approach to different wastewater networks. To do so, we generated 20 random wastewater network graphs and their corresponding outbreak scenarios, and then generated sensor placement solutions for each graph. Our random graphs consisted of 25 nodes each. The graphs were built by iteratively linking each new node to a randomly selected existing node; with a probability of 20%, a new node was instead redirected to a successor of the selected node, away from the root. We then sampled the building populations from a uniform distribution in the interval [0, 100], and the wastewater flows for each leaf node from a uniform distribution in the interval [1000, 3000]. For each graph, we sampled 500 virus outbreak scenarios and optimized the placement of 6 sensors using the naive greedy algorithm. The thresholded coverage objective with the F1 score (Equation <ref>) was used as the optimization objective. We set the concentration threshold to 4.8×10^5 virus copies per liter. Figure <ref> and Figure <ref> (Unperturbed) show the average solution F1 score and coverage on the 20 random graphs. We compared our results to a baseline of randomly placed sensors (Random) and found that our approach produced substantial improvements in performance, comparable to those obtained on our university's wastewater network. Note that the random graphs have varying connectivity and different ratios of sensor placements to graph size when compared to our university's wastewater network. To test the limits of our approach further, we perturbed the virus outbreak probability of each building in the random graphs and generated a new set of scenarios for each random graph. We perturbed the virus outbreak probabilities by adding uniformly distributed noise to the populations of each building in the random graphs. We evaluated the previously obtained sensor placement solutions, which were optimized on the unperturbed virus outbreak probability-based scenarios, using the new scenarios. This experiment allowed us to test our approach in realistic conditions, where the available virus outbreak probability or the whole model itself may be inconsistent with the actual virus outbreak characteristics. The results, shown in Figure <ref> and Figure <ref> (Perturbed), demonstrate that our approach is robust to modeling inconsistencies.
§ CONCLUSION
We presented the sensor placement for source localization problem in wastewater networks, and developed an approach that leverages graph Bayesian learning and discrete optimization to address the problem. We first presented an approach to reduce the size of wastewater network graphs, thereby making relatively large problems computationally feasible. We then showed how one can map any network graph to a Bayesian graph, which we can use to localize sources of information, i.e., buildings experiencing a virus outbreak in our case.
We also developed optimization objectives that we can use to efficiently find ideal sensor placements for source localization, even when there are multiple pathogen sources and constraints such as wastewater concentration requirements. Our simulation experiments demonstrated the quality of our solution sensor placements and the accuracy of our source localization approach in a case study on our university's wastewater network. We also benchmarked different discrete optimization methods and score functions, and showed that our optimization objective can be efficiently optimized. We then established the accuracy of our approach on random graphs. In addition, our experiments established the robustness of our approach to inaccurate virus outbreak models. In a real-world scenario, our graph sensor placement approach, coupled with virus outbreak models and information about the wastewater network structure, can determine ideal wastewater sampling locations. Our source localization approach can quickly localize the source of virus outbreaks from regularly collected wastewater sample test results. Additionally, our graph reduction and source localization approaches can be used with existing wastewater network sensor placement approaches. Our experiments provide empirical evidence that our optimization objective is submodular on our university's wastewater network graph. We conjecture that this property holds for any wastewater network graph, and we plan to prove this theoretically in our future work. Furthermore, to validate our approach using real-world data, we require regular virus concentration measurements and the number of infected individuals for each building in the monitored wastewater network. We aim to collect such data and present additional validation of our approach in future work.
§ ACKNOWLEDGMENTS
We thank Saurav Agarwal and Sayantan Datta for their helpful comments. We thank Rick Tankersley for his support and the UNC Charlotte COVID wastewater team—Cynthia Gibas, Wenwu Tang, Mariya Munir, Jacelyn Rice-Boayue, Kevin Lambirth, Greg Cole, Neha Mittal, Don Chen, Jessica Schlueter, Tianyang Chen, Zachery Slocum, and other students and staff for their assistance.
http://arxiv.org/abs/2312.16750v1
{ "authors": [ "Kalvik Jakkala", "Srinivas Akella" ], "categories": [ "cs.SI", "cs.CE", "physics.soc-ph" ], "primary_category": "cs.SI", "published": "20231227235743", "title": "Bayesian Sensor Placement for Multi-source Localization of Pathogens in Wastewater Networks" }
[email protected]
Università di Camerino, Via Madonna delle Carceri, Camerino, 62032, Italy.
INAF - Osservatorio Astronomico di Brera, Milano, Italy.
[email protected]
Università di Camerino, Via Madonna delle Carceri, Camerino, 62032, Italy.
INAF - Osservatorio Astronomico di Brera, Milano, Italy.
SUNY Polytechnic Institute, 13502 Utica, New York, USA.
Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Perugia, Perugia, 06123, Italy.
Al-Farabi Kazakh National University, Al-Farabi av. 71, 050040 Almaty, Kazakhstan.
We modify the symmetric-teleparallel dark energy through the addition of a further Yukawa-like term, in which the non-metricity scalar, Q, is non-minimally coupled to a scalar field Lagrangian where the phion acts as quintessence, describing dark energy. We investigate regions of stability and find late-time attractors. To do so, we conduct a stability analysis for different types of physical potentials describing dark energy, namely the power-law, inverse power-law, and exponential potentials. Within these choices, we furthermore single out particular limiting cases, such as the constant, linear and inverse potentials. For all the considered scenarios, regions of stability are calculated in terms of the signs of the coupling constant and the exponent, revealing a clear degeneracy among the coefficients necessary to ensure stability. We find that a generic power-law potential with α > 0 is not suitable as a non-minimal quintessence potential, and we put severe limits on the use of the inverse potential as well. In addition, the equations of state of each potential have also been computed. We find that the constant potential seems to be favored over the other treatments, since the critical point appears independent of the non-minimal coupling.
98.80.-k, 95.36.+x, 04.50.Kd
Phase-space analysis in non-minimal symmetric-teleparallel dark energy Orlando Luongo January 14, 2024 ======================================================================
§ INTRODUCTION
Exploring the nature of dark energy and dark matter represents a challenge for modern cosmology <cit.>. Accordingly, going beyond general relativity (GR) has recently acquired great importance <cit.>. Indeed, although GR successfully passed numerous experimental tests, its status as the ultimate theory of gravitational interaction is often questioned at both infrared and ultraviolet scales. Particularly, the cosmological concordance model, known as the ΛCDM paradigm, is still plagued by conceptual issues related to the physical interpretation of the cosmological constant, Λ <cit.>, and by recent cosmological tensions <cit.>. While various approaches have been developed to address the nature of dark energy and dark matter, including dynamical scalar fields <cit.>, unified dark energy-dark matter models <cit.>, phenomenological scenarios <cit.>, etc., a definitive answer to the dynamical problem of the universe is still missing.
Notably, Riemannian geometry, i.e., the geometrical structure underlying GR, is a special case of the more general metric-affine geometry and, specifically, there is no a priori need to favor it over other metric-affine descriptions of gravity. Indeed, the well-known three different, albeit physically equivalent, descriptions of gravity, employing either curvature, torsion or non-metricity, constitute the well-established[Even though appealing and formally equivalent, the torsion introduces the existence of a further spin, while the non-metricity violates the metric principle.
Only GR does not show any further requirement, as it makes use of the curvature.] geometric trinity of gravity <cit.>. There, the action responsible for describing the universe large-scale structures is constructed through the scalar curvature, R, the scalar torsion, T, or the non-metricity scalar, Q, respectively. The corresponding theories are known as GR, the teleparallel equivalent of GR and the symmetric-teleparallel equivalent of GR. In this respect, over the recent decades, extended and/or modified theories of gravity have been investigated across various scales <cit.>, and in analogy to the gravitational trinity, f(R) <cit.>, f(T) <cit.>, and f(Q) <cit.> scenarios can extend the aforementioned approaches to provide suitable, and mostly alternative, descriptions of dark energy and dark matter, de facto addressing limitations of the ΛCDM model. Motivated by the class of symmetric-teleparallel theories, we here focus on non-minimally coupled Q gravity. Precisely, in analogy to GR, we consider a generic 0-spin scalar field associated with dark energy, non-minimally coupled to the non-metricity scalar through a Yukawa-like interaction. In this picture, we investigate the corresponding stability properties. To do so, we employ a homogeneous and isotropic universe, formulating our treatment in a spatially-flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric. There, we derive the modified Friedmann equations and explore cosmological dynamics using an autonomous system of first-order differential equations reformulated by virtue of dimensionless variables. Hence, we work out various field potentials, namely singling out the exponential potential, together with power-law and inverse power-law potentials. We study the critical exponents for each case and we constrain the exponent that permits the critical points to arise. Analogously, we explore the regions of stability and, so, we emphasize the regions in which we expect late-time attractors.
The paper is structured as follows[Throughout the text, the Lorentzian signature (-+++) is used, as well as natural units, 8π G=c=1.]. In Sect. <ref>, we introduce our Lagrangian, in which we propose a further non-minimal coupling between Q and the dark energy field, in the form of a Yukawa-like potential. The corresponding cosmological features are thus reported. In Sect. <ref>, we propose the autonomous system of equations utilized to classify the regions of stability and we classify our solutions focusing on each potential form. Finally, stability is developed in Sect. <ref>, while conclusions and perspectives of our work are summarized in Sect. <ref>.
§ THEORETICAL SET UP
In this section, to investigate non-minimally coupled Q theories, it is convenient to start with f(Q) theories, i.e., introducing analytical functions of Q into the Lagrangian, say
𝒮 = ∫ d^4x √(-g) [ -1/2 f(Q) + ℒ_m ].
Here, as stated, f(Q) represents an arbitrary function of Q, while ℒ_m denotes the Lagrangian density for matter and, as usual, g stands for the metric determinant <cit.>. In symmetric-teleparallel theories of gravity, both curvature and torsion vanish, leaving the non-metricity tensor Q_λμν = ∇_λ g_μν as the only quantity quantifying metric change under teleparallel transport. The traces of Q_λμν are thus defined by
Q_λ = Q_λμ^μ, Q̃_λ = Q^μ_λμ.
The non-metricity scalar emerging in Eq.
(<ref>) is described by the contraction of the non-metricity tensor with the superpotential tensor P^λμν, namely
Q = -Q_λμν P^λμν,
and the components of P^λμν can be expressed explicitly as
P^λ_μν = -1/4 ( Q^λ_μν - 2Q^λ_(μν) + Q^λ g_μν + Q̃^λ g_μν + δ^λ_(μ Q_ν) ).
In our action, reported in Eq. (<ref>), we also add a further scalar field Lagrangian density, representing a quintessence contribution[For the sake of clearness, it is theoretically possible to model the interaction of Q with a specific scalar field form that mimics dark matter. Associating our underlying field with dark energy will become evident once the corresponding equation of state is computed.]. Additionally, we justify the use of the above f(Q) description by introducing a non-minimal coupling between the non-metricity scalar Q and the scalar field ϕ. Then, in agreement with GR, our action in Eq. (<ref>) can formally be extended by
𝒮 = ∫ d^4x √(-g) [ -1/2 f(Q) + ℒ_m + ℒ_ϕ ],
yielding a f(Q) definition imposed by the ansatz f(Q) = Q + ξ Q ϕ^2, with
ℒ_ϕ = -1/2 ∂_μϕ ∂^μϕ - V(ϕ).
In our picture, ξ indicates the coupling constant strength, whereas V(ϕ) represents the scalar field potential, responsible for the dark energy behavior. This scenario corresponds to a non-minimal quintessence in Q gravity and can be denoted as non-minimal symmetric-teleparallel dark energy, justifying the choice f(Q) = Q + ξ Q ϕ^2. Strictly speaking, the function f(Q) is a superfield built from a double-field approach, say f(Q,ϕ); however, by virtue of the additivity of the energy-momentum tensor, we can naively assume f(Q) as above. The variation of Eq. (<ref>) with respect to the metric provides the following field equations:
2/√(-g) ∇_λ( √(-g) f_Q P^λ_μν ) + 1/2 g_μν f + f_Q ( P_μλβ Q_ν^λβ - 2Q_λβμ P^λβ_ν ) = T_μν^(m) + T_μν^(ϕ),
in which the contribution due to quintessence appears in the derivatives of f(Q) and on the right-hand side. In the above relation, we used the standard nomenclature, f_Q = ∂ f(Q)/∂ Q, and the usual definition of the energy-momentum tensor, T^(m)_μν = -2/√(-g) δ(√(-g)ℒ_m)/δ g_μν and T^(ϕ)_μν = -2/√(-g) δ(√(-g)ℒ_ϕ)/δ g_μν.
§.§ Non-minimal Q cosmology
According to the cosmological principle, we consider the spatially flat FLRW line element, ds^2 = -dt^2 + a^2(t)[dx^2 + dy^2 + dz^2], where a(t) is the scale factor. In FLRW, the non-metricity scalar takes the simple form Q = 6H^2 and the modified Friedmann equations become <cit.>
3H^2 = 1/(2f_Q) (ρ + f/2),
Ḣ + ( 3H + ḟ_Q/f_Q ) H = 1/(2f_Q) ( -p + f/2 ),
where H ≡ ȧ/a is the Hubble parameter and the dot indicates the derivative with respect to the cosmic time, t. The standard Friedmann equations with f(Q) = Q + ξ Q ϕ^2 read
3H^2 = ρ - 3H^2 ξϕ^2,
2Ḣ + 3H^2 = -p - 4ξϕϕ̇H - 3H^2ξϕ^2 - 2ξϕ^2Ḣ.
Employing the dust case, we have
H^2 = 1/3 (ρ^ϕ_eff + ρ_m),
2Ḣ + 3H^2 = -p^ϕ_eff,
with the density and pressure acquiring the simple forms
ρ^ϕ_eff = ρ_ϕ - 3ξϕ^2 H^2,
p^ϕ_eff = p_ϕ + 4ξϕϕ̇H + 3ξϕ^2 H^2 + 2ξϕ^2 Ḣ.
Thus, combining Eqs. (<ref>) and (<ref>), we obtain the modified Raychaudhuri equation,
Ḣ = -1/2 (ρ_m + ρ^ϕ_eff + p^ϕ_eff),
where ρ_ϕ = 1/2 ϕ̇^2 + V(ϕ) and p_ϕ = 1/2 ϕ̇^2 - V(ϕ), leading to the following density and pressure for the effective non-minimal case:
ρ^ϕ_eff = 1/2 ϕ̇^2 + V(ϕ) - 3ξϕ^2 H^2,
p^ϕ_eff = 1/2 ϕ̇^2 - V(ϕ) + (3H^2 + 2Ḣ)ξϕ^2 + 4ξϕϕ̇H.
The dynamical modified Klein-Gordon equation derives from varying the action in Eq. (<ref>) with respect to the scalar field ϕ,
ϕ̈ + 3Hϕ̇ + V_,ϕ = -ξ Q ϕ,
in which ξ Q ϕ represents a source term. The usual Klein-Gordon equation is recovered as ξ → 0. Selecting a suitable expression for V(ϕ) implies characterizing different stability properties.
Thus, choosing a potential turns the equations into an autonomous system, allowing for numerical solutions by setting suitable initial values for new, proper variables, as we will clarify in the next section.
§ THE PHASE-SPACE ANALYSIS
Recasting cosmological equations into autonomous systems provides a method to investigate the universe's dynamics <cit.>. We study the cosmological evolution searching for the critical points for a given dark energy potential. Critical points of the autonomous system occur where the derivatives of the cosmic variables vanish. Specifically, we focus on late-time attractors, i.e., those critical points asymptotically behaving as solutions for the autonomous system. To accomplish this, it is convenient to work out new dimensionless variables, defined as
x ≡ ϕ̇/(√(6)H), y ≡ √(V)/(√(3)H), v ≡ √(ρ_m)/(√(3)H), u ≡ ϕ.
Thus, rewriting the constraint in Eq. (<ref>) and using Eqs. (<ref>)-(<ref>), we obtain
x^2 + y^2 + v^2 - ξ u^2 = 1,
whereas the parameter s = -Ḣ/H^2 is derived from Eqs. (<ref>)-(<ref>) by
s = [3x^2 + (3/2)v^2 + 2√(6)ξ x u]/(1 + ξ u^2).
At this stage, the dynamical system is given by the conservation equation for ρ_ϕ in terms of the new variables,
x' = (s-3)x - √(6)ξ u - V_,ϕ/(√(6)H^2),
y' = s y + [x/(√(2)H)] V_,ϕ/√(V),
u' = √(6)x,
where the prime indicates the derivative with respect to the number of e-foldings, N ≡ ln a. In addition, the conservation equation for ρ_m can be derived by applying the constraint in Eq. (<ref>). In this context, we note that the dimensionless energy densities for matter and scalar field can be written as
Ω_m = v^2, Ω_ϕ = x^2 + y^2 - ξ u^2,
respectively. Finally, the dark energy equation of state is denoted as
ω_ϕ = [x^2 - y^2 + ξ u^2 - (2/3)ξ u^2 s + 4√(2/3)ξ x u]/(x^2 + y^2 - ξ u^2).
§.§ Describing dark energy potentials
After selecting an appropriate form of V(ϕ), Eq. (<ref>) transforms into an autonomous system and can be numerically solved by setting suitable initial values for the set of variables, {y, u, v}. Appropriate initial conditions can be given as <cit.>
y_i = 10^-6, u_i = 10^-6, v_i^2 = 0.999, at a_i = 10^-2.
Thus, to find the fixed points and study the stability around them, we can first impose the condition x' = y' = u' = 0. In the following subsections, we explore three distinct forms of scalar field potential for which, imposing the above condition, we are able to work out the critical points.
§.§.§ Power law potential
Arguably the most viable form of potential is represented by a power-law expression, V(ϕ) = V_0ϕ^α, i.e., a typical potential widely used for the ϕCDM model, as shown in Ref. <cit.>. In this case, the autonomous system in Eq. (<ref>) yields
x' = (s-3)x - √(6)ξ u - α√(3/2) y^2/u,
y' = s y + α√(3/2) x y/u,
u' = √(6)x.
From the system above, if α ≠ 0, the critical points are allowed for ξ<0 only[If ξ>0, the critical points obtained for α≠0 are not real.], and those respecting the condition y ≥ 0 are the following:
(x,y,u)_I = (0, √(2/(2+α)), -√(α/(-ξ(2+α)))),
(x,y,u)_II = (0, √(2/(2+α)), √(α/(-ξ(2+α)))).
When substituted into the equation of state, reported in Eq. (<ref>), these points yield ω_ϕ = -1, representing a late-time universe with a cosmological constant term as dark energy component. Both critical points exist for α ≥ 0 and, in particular, we select two values, namely α=0 and α=1, providing
- a constant potential,
- a linear potential,
as two main subcases, respectively. In this respect, we single out the two cases above as follows.
- α=0 case. In Ref. <cit.>, exact solutions with V(ϕ)=V_0 are found for the evolution of a dynamical system within a flat FLRW metric in a universe containing dust-like matter and a non-minimally coupled scalar field.
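As a numerical cross-check of this limit, the autonomous system above can be integrated directly from the quoted initial conditions; the sketch below uses SciPy, with the illustrative choices α=0 and ξ=0.1, and takes the positive root for x_i from the Friedmann constraint. It is a minimal sketch, not the integration pipeline of the referenced works.

import numpy as np
from scipy.integrate import solve_ivp

xi, alpha = 0.1, 0.0      # illustrative coupling and exponent (alpha = 0: constant V)

def rhs(N, Y):
    x, y, u = Y
    v2 = 1.0 + xi*u**2 - x**2 - y**2                 # Friedmann constraint
    s = (3*x**2 + 1.5*v2 + 2*np.sqrt(6)*xi*x*u) / (1.0 + xi*u**2)
    dx = (s - 3.0)*x - np.sqrt(6)*xi*u
    dy = s*y
    if alpha:                                        # power-law source terms
        dx -= alpha*np.sqrt(1.5)*y**2/u
        dy += alpha*np.sqrt(1.5)*x*y/u
    return [dx, dy, np.sqrt(6)*x]

y0, u0, v20 = 1e-6, 1e-6, 0.999                      # initial conditions quoted above
x0 = np.sqrt(1.0 + xi*u0**2 - y0**2 - v20)           # positive root: an assumption
sol = solve_ivp(rhs, [np.log(1e-2), 10.0], [x0, y0, u0], rtol=1e-9, atol=1e-12)

x, y, u = sol.y[:, -1]
print("Omega_phi at late times:", x**2 + y**2 - xi*u**2)   # -> 1 near the attractor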
For α=0, the system in Eq. (<ref>) reduces to
x' = (s-3)x - √(6)ξ u,
y' = s y,
u' = √(6)x,
and the critical point reads
(x_c, y_c, u_c)_I = (0, 1, 0),
admitting a priori the two cases[For the sake of completeness, choosing the sign of ξ turns out to be of particular interest in field theories applied to early-time cosmology, see e.g. <cit.>.], ξ>0 or ξ<0. At this stage, the corresponding normalized energy densities in Eq. (<ref>) turn into
Ω_m = 0, Ω_ϕ = 1.
Consequently, for a constant potential, the universe lying on the critical point can be represented by a de Sitter phase, where clearly dark energy behaves as a genuine cosmological constant term[Here, the issue related to the cosmological constant magnitude at late times is not relevant. For a broad discussion on that, see e.g. Refs. <cit.>.].
- α=1 case. This scenario refers to symmetric-teleparallel dark energy with a linear potential, V(ϕ)=V_0ϕ. This potential has been adopted with the aim of alleviating the coincidence problem <cit.>. Hence, through it, Eq. (<ref>) becomes
x' = (s-3)x - √(6)ξ u - √(3/2) y^2/u,
y' = s y + √(3/2) x y/u,
u' = √(6)x,
and, this time, the critical points are given by
(x_c, y_c, u_c)_I = (0, √(2/3), -1/√(-3ξ)),
(x_c, y_c, u_c)_II = (0, √(2/3), 1/√(-3ξ)).
Again, the normalized energy densities are determined for both points as
Ω_m = 0, Ω_ϕ = 1,
denoting dark energy dominating the universe at late times.
§.§.§ Inverse power law potential
The second framework that we analyze consists of an inverse power-law potential, of the form V(ϕ) = V_0ϕ^-α <cit.>. Considering this potential, the dynamical system in Eq. (<ref>) assumes the form
x' = (s-3)x - √(6)ξ u + α√(3/2) y^2/u,
y' = s y - α√(3/2) x y/u,
u' = √(6)x.
Even though this potential can also be obtained from the power law above with a negative exponent, we treat the two as distinct cases, marking the deep physical differences between the two approaches, see e.g. <cit.>. The critical points in this system are determined by selecting ξ>0 to ensure real solutions; considering y ≥ 0, we find
(x_c, y_c, u_c)_I = (0, √(2/(2-α)), -√(α/(ξ(2-α)))),
(x_c, y_c, u_c)_II = (0, √(2/(2-α)), √(α/(ξ(2-α)))).
The equation of state at these points is ω_ϕ = -1, giving a cosmological constant at late times. Remarkably, in this case the condition for the existence of critical points limits the exponent to 0 ≤ α < 2, where the case α=0 already falls into the previous constant potential. Hereafter, we therefore focus on α=1, which naively provides possibly viable solutions. With this value, Eq. (<ref>) turns into
x' = (s-3)x - √(6)ξ u + √(3/2) y^2/u,
y' = s y - √(3/2) x y/u,
u' = √(6)x,
providing
(x_c, y_c, u_c)_I = (0, √(2), -1/√(ξ)),
(x_c, y_c, u_c)_II = (0, √(2), 1/√(ξ)),
as critical points. Both of them imply
Ω_m = 0, Ω_ϕ = 1,
so we get a universe constituted by dark energy only.
§.§.§ Exponential potential
The last potential considered is an exponential potential, i.e., V(ϕ) = V_0 e^-ϕ. It has been largely investigated in inflation, structure formation and dark energy contexts <cit.>. The choice of this potential gives us the following autonomous system:
x' = (s-3)x - √(6)ξ u + √(3/2) y^2,
y' = s y - √(3/2) x y,
u' = √(6)x.
By using the exponential potential and the condition y ≥ 0, the critical points obtained are
(x_c, y_c, u_c)_I = (0, 0, 0),
(x_c, y_c, u_c)_II = (0, √(2ξ - 2√(ξ(ξ-1))), 1 - √((ξ-1)/ξ)),
(x_c, y_c, u_c)_III = (0, √(2ξ + 2√(ξ(ξ-1))), 1 + √((ξ-1)/ξ)).
The first point exists ∀ξ; for the second point, reality of the coordinates requires ξ ≥ 1, while for the third one we need ξ < 0 or ξ ≥ 1.
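A quick numerical sanity check of these fixed points consists in substituting them into the right-hand side of the exponential-potential system; the sketch below does so for the arbitrary sample value ξ = 2, inside the ξ ≥ 1 existence range.

import numpy as np

xi = 2.0                                  # arbitrary sample coupling with xi >= 1

def rhs(x, y, u):
    v2 = 1.0 + xi*u**2 - x**2 - y**2
    s = (3*x**2 + 1.5*v2 + 2*np.sqrt(6)*xi*x*u) / (1.0 + xi*u**2)
    return np.array([(s - 3.0)*x - np.sqrt(6)*xi*u + np.sqrt(1.5)*y**2,
                     s*y - np.sqrt(1.5)*x*y,
                     np.sqrt(6)*x])

root = np.sqrt(xi*(xi - 1.0))
pII  = (0.0, np.sqrt(2*xi - 2*root), 1.0 - np.sqrt((xi - 1.0)/xi))
pIII = (0.0, np.sqrt(2*xi + 2*root), 1.0 + np.sqrt((xi - 1.0)/xi))
for p in (pII, pIII):
    print(np.allclose(rhs(*p), 0.0, atol=1e-10))   # both should print True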
At the first critical point, the universe is dominated purely by matter, since Ω_m=1, Ω_ϕ=0, while for the last two points we obtain the same de Sitter solution derived with the other potentials, i.e., ω_ϕ=-1, Ω_m=0, Ω_ϕ=1, indicating again a cosmological constant dominated universe at late times.

§ THE STABILITY ANALYSIS

In this section, we explore the stability of the critical points outlined in <Ref>. The aim is to verify whether the derived cosmological solutions can behave as late-time attractors <cit.>. To this end, we evaluate the linear perturbations of the dynamical system and examine the signs of the eigenvalues of the Jacobian matrix associated with each critical point,
𝒥=( [ ∂ x'/∂ x ∂ x'/∂ y ∂ x'/∂ u; ∂ y'/∂ x ∂ y'/∂ y ∂ y'/∂ u; ∂ u'/∂ x ∂ u'/∂ y ∂ u'/∂ u ])_(x=x_c,y=y_c,u=u_c) .
The stability of the solution depends on the eigenvalues: when the real parts of all the eigenvalues are negative, the point is stable; conversely, when they are all positive, the point is unstable. If the eigenvalues have mixed signs, the critical point is a saddle point. In the first case, the critical point is called an attractor. Small perturbations δ x, δ y and δ u around the critical point evolve as
( [ δ x'; δ y'; δ u' ]) = 𝒥( [ δ x; δ y; δ u ]),
where the coefficients of the Jacobian matrix 𝒥 depend on the potential under examination. For each potential discussed above, we are now in a position to evaluate the linear perturbations and search for stability properties.

§.§ Constant potential

In the case of the constant potential V(ϕ)=V_0, the matrix 𝒥 is determined by the following coefficients:
𝒥_11=(9x^2+8√(6)ξ ux-3(1+y^2+ξ u^2))/(2(1+ξ u^2)),
𝒥_12=-3xy/(1+ξ u^2),
𝒥_13=-ξ(3ux^3-3uxy^2+2√(6)x^2(-1+ξ u^2)+√(6)(1+ξ u^2)^2)/(1+ξ u^2)^2,
𝒥_21=y(3x+2√(6)ξ u)/(1+ξ u^2),
𝒥_22=(3(1+x^2-3y^2)+3ξ u^2+4√(6)ξ x u)/(2(1+ξ u^2)),
𝒥_23=-ξ y(3u(x^2-y^2)+2√(6)x(-1+ξ u^2))/(1+ξ u^2)^2,
𝒥_31=√(6), 𝒥_32=𝒥_33=0.
Then, the eigenvalue equation evaluated at the critical point (x_c,y_c,u_c)_I=(0,1,0) is (3+μ)(μ^2+3μ+6ξ)=0, providing the following solutions: μ_1=-3, μ_2=(-3-√(9-24ξ))/2, μ_3=(-3+√(9-24ξ))/2. The real parts of the eigenvalues are all negative if ξ>0. Hence the critical point is stable and indicates a late-time attractor, as shown in <Ref>.

§.§ Linear potential

The choice of the linear potential, V(ϕ)=V_0ϕ, determines the following Jacobian matrix coefficients:
𝒥_11=(9x^2+8√(6)ξ ux-3(1+y^2+ξ u^2))/(2(1+ξ u^2)),
𝒥_12=-√(6)y/u-3xy/(1+ξ u^2),
𝒥_13=y^2(√(6)+2√(6)ξ u^2+6ξ x u^3+√(6)ξ^2 u^4)/(2u^2(1+ξ u^2)^2)-ξ(3x^3 u+2√(6)x^2(-1+ξ u^2)+√(6)(1+ξ u^2)^2)/(1+ξ u^2)^2,
𝒥_21=y(√(6)+6xu+5√(6)ξ u^2)/(2u(1+ξ u^2)),
𝒥_22=(√(6)x+3u(1+x^2-3y^2)+3ξ u^3+5√(6)ξ x u^2)/(2u(1+ξ u^2)),
𝒥_23=y(-6ξ x^2 u^3+6ξ y^2 u^3-√(6)x(1-2ξ u^2+5ξ^2 u^4))/(2u^2(1+ξ u^2)^2),
𝒥_31=√(6), 𝒥_32=𝒥_33=0.
Both critical points, provided by Eqs. (<ref>) and (<ref>), yield the same eigenvalue equation, (3+μ)(μ^2+3μ+18ξ)=0, and, once the above equation is solved, the eigenvalues read μ_1=-3, μ_2=-(3/2)(1-√(1-8ξ)), μ_3=-(3/2)(1+√(1-8ξ)). Since for the linear potential the coupling constant is negative, ξ<0, no attractor solutions exist and the critical points are unstable. (A quick numerical cross-check of this kind of eigenvalue analysis, applied to the constant potential, is sketched below.)
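As a cross-check of the analytic eigenvalues, one can differentiate the autonomous system numerically and diagonalize the resulting Jacobian at the critical point. The short Python sketch below (our own illustration) does this for the constant potential at (x_c,y_c,u_c)=(0,1,0) and compares with μ_1=-3, μ_{2,3}=(-3∓√(9-24ξ))/2; the finite-difference step and the sample value ξ=0.3 are arbitrary choices.

```python
import numpy as np

XI = 0.3  # sample coupling; 9 - 24*XI < 0 gives a complex conjugate pair

def rhs(state, xi=XI):
    # Constant potential: x' = (s-3)x - sqrt(6)*xi*u, y' = s*y, u' = sqrt(6)*x
    x, y, u = state
    v2 = 1.0 - x**2 - y**2 + xi*u**2          # Friedmann constraint
    s = (3*x**2 + 1.5*v2 + 2*np.sqrt(6)*xi*x*u) / (1 + xi*u**2)
    return np.array([(s - 3)*x - np.sqrt(6)*xi*u, s*y, np.sqrt(6)*x])

def jacobian(point, h=1e-7):
    """Central finite-difference Jacobian of rhs at `point`."""
    J = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3); dp[j] = h
        J[:, j] = (rhs(point + dp) - rhs(point - dp)) / (2*h)
    return J

crit = np.array([0.0, 1.0, 0.0])              # de Sitter point for V = V0
mu_num = np.linalg.eigvals(jacobian(crit))
mu_ana = np.array([-3,
                   (-3 - np.emath.sqrt(9 - 24*XI))/2,
                   (-3 + np.emath.sqrt(9 - 24*XI))/2])
print("numerical:", np.sort_complex(mu_num))
print("analytic :", np.sort_complex(mu_ana))
# All real parts negative for xi > 0 -> stable late-time attractor.
```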
The same holds for all power law potentials with α>0; indeed, for these values not all the eigenvalues are negative[Remarkably, for a generic exponent α>0, the eigenvalues are easily found to be μ_1=-3, μ_2=(-3-√(9+24(2+α)ξ))/2 and μ_3=(-3+√(9+24(2+α)ξ))/2, and it is evident that not all of these values provide negative real parts when ξ<0. Thus, the critical points for α>0 are not stable and no late-time attractors can be found.].

§.§ Inverse potential

We examine the inverse potential V(ϕ)=V_0ϕ^-1 with the following coefficients for the matrix 𝒥:
𝒥_11=-(3ξ u^2+8√(6)ξ ux-9x^2+3y^2+3)/(2ξ u^2-2),
𝒥_12=√(6)y/u+3xy/(ξ u^2-1),
𝒥_13=ξ(√(6)(ξ u^2-1)^2-2√(6)x^2(ξ u^2+1)+3ux^3)/(ξ u^2-1)^2-ξ(3uxy^2)/(ξ u^2-1)^2-√(3/2)y^2/u^2,
𝒥_21=(2√(6)ξ uy-3xy)/(ξ u^2-1)-√(3/2)y/u,
𝒥_22=-(3√(6)ξ u^2x+√(6)x+3ξ u^3-3u(x^2-3y^2+1))/(2u(ξ u^2-1)),
𝒥_23=√(3/2)xy/u^2-ξ y(2√(6)x(ξ u^2+1)-3ux^2+3uy^2)/(ξ u^2-1)^2,
𝒥_31=√(6), 𝒥_32=𝒥_33=0.
Again, both critical points obey the same eigenvalue equation, (μ+3)(μ^2+3μ+6ξ)=0, which gives the solutions μ_1=-3, μ_2=(-3-√(9-24ξ))/2, μ_3=(-3+√(9-24ξ))/2. The eigenvalues are the same as those obtained in the constant potential case, indicating an attractor for all ξ>0.

§.§ Exponential potential

The last case deals with the analysis of the exponential potential, V(ϕ)=V_0e^-ϕ. Here, the 𝒥 coefficients are
𝒥_11=(9x^2+8√(6)ξ xu-3(1+y^2+ξ u^2))/(2(1+ξ u^2)),
𝒥_12=y(√(6)-3x/(1+ξ u^2)),
𝒥_13=-ξ(3x^3u-3xy^2u+2√(6)x^2(-1+ξ u^2)+√(6)(1+ξ u^2)^2)/(1+ξ u^2)^2,
𝒥_21=-y(-6x+√(6)(1-4ξ u+ξ u^2))/(2(1+ξ u^2)),
𝒥_22=(3(1+x^2-3y^2+ξ u^2)-√(6)x(1-4ξ u+ξ u^2))/(2(1+ξ u^2)),
𝒥_23=-ξ y(3u(x^2+y^2)+2√(6)x(-1+ξ u^2))/(1+ξ u^2)^2,
𝒥_31=√(6), 𝒥_32=𝒥_33=0.
In this scenario, three distinct eigenvalue equations occur, one corresponding to each critical point. For the first critical point, we have (3-2μ)(2μ^2+3μ+12ξ)=0, whose solutions are μ_1^(I)=3/2, μ_2^(I)=(-3-√(9-96ξ))/4, μ_3^(I)=(-3+√(9-96ξ))/4. Since μ_1^(I)>0, the first critical point is unstable. The second critical point gives us the relation ξ(3+μ)[3μ+μ^2+6(-2ξ+√(ξ(ξ-1)))-2(-3μ-μ^2+6ξ)(-ξ+√(ξ(ξ-1)))]/(ξ-√(ξ(ξ-1)))^2=0, implying as solutions μ_1^(II)=-3, μ_2^(II)=[-3-6ξ+6√(ξ(ξ-1))+√(3)(3+64ξ^3+4√(ξ(ξ-1))+8ξ(1+5√(ξ(ξ-1)))-8ξ^2(9+8√(ξ(ξ-1))))^1/2]/[2-4ξ+4√(ξ(ξ-1))], μ_3^(II)=[-3+6ξ-6√(ξ(ξ-1))+√(3)(3+64ξ^3+4√(ξ(ξ-1))+8ξ(1+5√(ξ(ξ-1)))-8ξ^2(9+8√(ξ(ξ-1))))^1/2]/[2-4ξ+4√(ξ(ξ-1))]. Hence, for ξ>1, all the eigenvalues have negative real parts, indicating the stability of the second critical point, which acts as an attractor for the universe at late times, as illustrated in <Ref>. Finally, the third critical point provides the following eigenvalue equation: ξ(3+μ)[-3μ-μ^2+6(2ξ+√(ξ(ξ-1)))-2(-3μ-μ^2+6ξ)(-ξ+√(ξ(ξ-1)))]/(ξ+√(ξ(ξ-1)))^2=0, whose solutions are μ_1^(III)=-3, μ_2^(III)=[3-6ξ-6√(ξ(ξ-1))-√(3)(3+64ξ^3-4√(ξ(ξ-1))+8ξ(1-5√(ξ(ξ-1)))+8ξ^2(-9+8√(ξ(ξ-1))))^1/2]/[-2+4ξ+4√(ξ(ξ-1))], μ_3^(III)=[3-6ξ-6√(ξ(ξ-1))+√(3)(3+64ξ^3-4√(ξ(ξ-1))+8ξ(1-5√(ξ(ξ-1)))+8ξ^2(-9+8√(ξ(ξ-1))))^1/2]/[-2+4ξ+4√(ξ(ξ-1))]. It is evident that, either for ξ<0 or ξ≥1, not all the eigenvalues are real and negative, implying that the third critical point is unstable. The same numerical check used above can be repeated here, as sketched below.
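For completeness, the finite-difference Jacobian check can also be applied to the exponential potential. The sketch below (again ours, with ξ=1.5 as an arbitrary sample in the ξ>1 regime) evaluates the eigenvalues at the second critical point, where all real parts are expected to come out negative.

```python
import numpy as np

XI = 1.5  # sample coupling with xi > 1 (illustrative choice)

def rhs(state, xi=XI):
    # Exponential potential V = V0*exp(-phi):
    # x' = (s-3)x - sqrt(6)*xi*u + sqrt(3/2)*y^2
    # y' = s*y - sqrt(3/2)*x*y,  u' = sqrt(6)*x
    x, y, u = state
    v2 = 1.0 - x**2 - y**2 + xi*u**2
    s = (3*x**2 + 1.5*v2 + 2*np.sqrt(6)*xi*x*u) / (1 + xi*u**2)
    return np.array([(s - 3)*x - np.sqrt(6)*xi*u + np.sqrt(1.5)*y**2,
                     s*y - np.sqrt(1.5)*x*y,
                     np.sqrt(6)*x])

def jacobian(point, h=1e-7):
    J = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3); dp[j] = h
        J[:, j] = (rhs(point + dp) - rhs(point - dp)) / (2*h)
    return J

# Second critical point: (0, sqrt(2*xi - 2*sqrt(xi*(xi-1))), 1 - sqrt((xi-1)/xi))
crit = np.array([0.0,
                 np.sqrt(2*XI - 2*np.sqrt(XI*(XI - 1))),
                 1.0 - np.sqrt((XI - 1)/XI)])
print("max |rhs| at the point:", np.abs(rhs(crit)).max())   # ~ 0, sanity check
mu = np.linalg.eigvals(jacobian(crit))
print("eigenvalues:", mu)
print("stable attractor" if np.all(mu.real < 0) else "not an attractor")
```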
We referred to this scenario as non-minimal symmetric-teleparallel dark energy, where the scalar field ϕ acts as quintessence non-minimally coupled with gravity, in analogy to the well-established geometric couplings with the Ricci scalar in standard Einstein's gravity. Within this framework, we searched for regions of stability and, in order to identify late-time attractors, we conducted a stability analysis for different types of potentials. Particular attention has been devoted to the sign and strength of the coupling constant, showing how it acts to modify the stability of the system, compared with the free Lagrangian, i.e., the one without the non-minimal coupling. Concerning the potentials, among all the possibilities we specifically considered power-law, inverse power-law and exponential potentials, which appear as viable cases of dark energy potentials. For the power-law potential, we distinguished α=0 and α=1. In the first case, the critical point identifies a late-time attractor, since it is real even for ξ>0. However, for α=1, and generally for α>0 with ξ<0, the critical points turn out to be unstable. In addition, we conducted a detailed analysis of the linear potential, finding analogous outcomes. For the inverse power-law potential, we focused on 0≤α<2 with ξ>0. Choosing α=1 and ξ=1/2, both critical points appear stable. Finally, when analyzing the exponential potential, only (x_c,y_c,u_c)_II is an attractor solution for the dynamical system, implying ξ>1. Consequently, we concluded that a generic power-law potential with α>0 is not suitable as a non-minimal quintessence potential in the equivalent Q modified gravity scenario. In addition, the equations of state have also been computed for each case, showing how they asymptotically tend to a de Sitter behaviour. In this respect, we noticed evidence for a degeneracy between exponents and coupling constants, which emerged as a direct consequence of our analysis, i.e., of requiring the stability and physical soundness of our potentials. Specifically, even in our case the constant potential appeared particularly straightforward, since the critical point does not depend on the strength of the non-minimal coupling constant. This output is not surprising, as it corresponds to a case resembling the cosmological constant contribution.

Future works will analyze the use of alternative versions of quintessence, where the scalar field can act differently from stiff matter, as for quintessence. For example, we intend to consider fields that can play the role of dark matter instead of dark energy and/or unified dark energy models. More complicated versions of non-minimal couplings will also be taken into account. Finally, we will investigate more deeply whether complicated couplings are allowed in modified theories of gravity, possibly providing different information than pure GR.

The work of OL is partially financed by the Ministry of Education and Science of the Republic of Kazakhstan, Grant: IRN AP19680128. The authors express their gratitude to Rocco D'Agostino for discussions on the computational parts of this work. The authors are also grateful to Roberto della Ceca and Luigi Guzzo for their great support during the time spent in INAF-Brera, where this work was finalized.

10
Sahni:2004ai Varun Sahni. Dark matter and dark energy. Lect. Notes Phys., 653:141–180, 2004.
Oks:2021hef Eugene Oks. Brief review of recent advances in understanding dark matter and dark energy. New Astron. Rev., 93:101632, 2021.
Arbey:2021gdg Alexandre Arbey and Farvah Mahmoudi.
Dark matter and the early Universe: a review.Prog. Part. Nucl. Phys., 119:103865, 2021.Nojiri:2017ncd Shin'ichi Nojiri, Sergei D. Odintsov, and Vasilis K. Oikonomou. Modified Gravity Theories on a Nutshell: Inflation, Bounce and Late-time Evolution.Phys. Rept., 692:1–104, 2017.Perivolaropoulos:2021jda Leandros Perivolaropoulos and Foteini Skara. Challenges for CDM: An update.New Astron. Rev., 95:101659, 2022.DiValentino:2021izs Eleonora Di Valentino, Olga Mena, Supriya Pan, Luca Visinelli, Weiqiang Yang, Alessandro Melchiorri, David F. Mota, Adam G. Riess, and Joseph Silk. In the realm of the Hubble tension—a review of solutions.Class. Quant. Grav., 38(15):153001, 2021.Peebles:2002gy P. J. E. Peebles and Bharat Ratra. The Cosmological Constant and Dark Energy.Rev. Mod. Phys., 75:559–606, 2003.Frusciante:2019xia Noemi Frusciante and Louis Perenon. Effective field theory of dark energy: A review.Phys. Rept., 857:1–63, 2020.vandeBruck:2022xbk Carsten van de Bruck, Gaspard Poulot, and Elsa M. Teixeira. Scalar field dark matter and dark energy: a hybrid model for the dark sector.JCAP, 07:019, 2023.Kase:2019veo Ryotaro Kase and Shinji Tsujikawa. Scalar-Field Dark Energy Nonminimally and Kinetically Coupled to Dark Matter.Phys. Rev. D, 101(6):063511, 2020.Xu:2021xbt Tengpeng Xu, Yun Chen, Lixin Xu, and Shuo Cao. Comparing the scalar-field dark energy models with recent observations.Phys. Dark Univ., 36:101023, 2022.Dunsby:2016lkw Peter K. S. Dunsby, Orlando Luongo, and Lorenzo Reverberi. Dark Energy and Dark Matter from an additional adiabatic fluid.Phys. Rev. D, 94(8):083525, 2016.Boshkayev:2019qcx Kuantay Boshkayev, Rocco D'Agostino, and Orlando Luongo. Extended logotropic fluids as unified dark energy models.Eur. Phys. J. C, 79(4):332, 2019.Muccino:2020gqt Marco Muccino, Luca Izzo, Orlando Luongo, Kuantay Boshkayev, Lorenzo Amati, Massimo Della Valle, Giovanni Battista Pisani, and Elena Zaninoni. Tracing dark energy history with gamma ray bursts.Astrophys. J., 908(2):181, 2021.Copeland:2006wr Edmund J. Copeland, M. Sami, and Shinji Tsujikawa. Dynamics of dark energy.Int. J. Mod. Phys. D, 15:1753–1936, 2006.Capozziello:2022jbw Salvatore Capozziello, Rocco D'Agostino, and Orlando Luongo. Thermodynamic parametrization of dark energy.Phys. Dark Univ., 36:101045, 2022.Luongo:2015zgq Orlando Luongo, Giovanni Battista Pisani, and Antonio Troisi. Cosmological degeneracy versus cosmography: a cosmographic dark energy model.Int. J. Mod. Phys. D, 26(03):1750015, 2016.BeltranJimenez:2019esp Jose Beltrán Jiménez, Lavinia Heisenberg, and Tomi S. Koivisto. The Geometrical Trinity of Gravity.Universe, 5(7):173, 2019.Capozziello:2019cav Salvatore Capozziello, Rocco D'Agostino, and Orlando Luongo. Extended Gravity Cosmography.Int. J. Mod. Phys. D, 28(10):1930016, 2019.Sotiriou:2008rp Thomas P. Sotiriou and Valerio Faraoni. f(R) Theories Of Gravity.Rev. Mod. Phys., 82:451–497, 2010.Heisenberg:2023lru Lavinia Heisenberg. Review on f(Q) Gravity.9 2023.BeltranJimenez:2017tkd Jose Beltrán Jiménez, Lavinia Heisenberg, and Tomi Koivisto. Coincident General Relativity.Phys. Rev. D, 98(4):044048, 2018.Koussour:2023rly M. Koussour and Avik De. Observational constraints on two cosmological models of f(Q) theory.Eur. Phys. J. C, 83(5):400, 2023.Koussour:2023hgl M. Koussour, N. Myrzakulov, Alnadhief H. A. Alfedeel, E. I. Hassan, D. Sofuoğlu, and Safa M. Mirgani. Square-Root parametrization of dark energy in f(Q) cosmology.10 2023.Mandal:2020lyq Sanjay Mandal, P. K. Sahoo, and J. R. L. Santos. 
Energy conditions in f(Q) gravity.Phys. Rev. D, 102(2):024057, 2020.BeltranJimenez:2019tme Jose Beltrán Jiménez, Lavinia Heisenberg, Tomi Sebastian Koivisto, and Simon Pekar. Cosmology in f(Q) geometry.Phys. Rev. D, 101(10):103507, 2020.Khyllep:2022spx Wompherdeiki Khyllep, Jibitesh Dutta, Emmanuel N. Saridakis, and Kuralay Yesmakhanova. Cosmology in f(Q) gravity: A unified dynamical systems analysis of the background and perturbations.Phys. Rev. D, 107(4):044022, 2023.Capozziello:2022rac Salvatore Capozziello, Rocco D'Agostino, and Orlando Luongo. The phase-space view of non-local gravity cosmology.Phys. Lett. B, 834:137475, 2022.DAgostino:2018ngy Rocco D'Agostino and Orlando Luongo. Growth of matter perturbations in nonminimal teleparallel dark energy.Phys. Rev. D, 98(12):124013, 2018.Hrycyna:2015vvs Orest Hrycyna. What ξ? Cosmological constraints on the non-minimal coupling constant.Phys. Lett. B, 768:218–227, 2017.Belfiglio:2023rxb Alessio Belfiglio, Youri Carloni, and Orlando Luongo. Particle production from non-minimal coupling in a symmetry breaking potential transporting vacuum energy.7 2023.Belfiglio:2022yvs Alessio Belfiglio, Orlando Luongo, and Stefano Mancini. Inflationary entanglement.Phys. Rev. D, 107(10):103512, 2023.Belfiglio:2022cnd Alessio Belfiglio, Orlando Luongo, and Stefano Mancini. Geometric corrections to cosmological entanglement.Phys. Rev. D, 105(12):123523, 2022.Luongo:2018lgy Orlando Luongo and Marco Muccino. Speeding up the universe using dust with pressure.Phys. Rev. D, 98(10):103520, 2018.Avelino:2004vy Pedro Pina Avelino. The Coincidence problem in linear dark energy models.Phys. Lett. B, 611:15–20, 2005.Copeland:1997et Edmund J. Copeland, Andrew R Liddle, and David Wands. Exponential potentials and cosmological scaling solutions.Phys. Rev. D, 57:4686–4690, 1998.Ferreira:1997au Pedro G. Ferreira and Michael Joyce. Structure formation with a selftuning scalar field.Phys. Rev. Lett., 79:4740–4743, 1997.Loo:2023oxk Tee-How Loo, Raja Solanki, Avik De, and P. K. Sahoo. f(Q, T) gravity, its covariant formulation, energy conservation and phase-space analysis.Eur. Phys. J. C, 83(3):261, 2023.Ghosh:2023amt Sayantan Ghosh, Raja Solanki, and P. K. Sahoo. Dynamical system analysis of scalar field cosmology in coincident f(Q) gravity.9 2023.
http://arxiv.org/abs/2312.16088v1
{ "authors": [ "Youri Carloni", "Orlando Luongo" ], "categories": [ "gr-qc", "astro-ph.CO" ], "primary_category": "gr-qc", "published": "20231226152208", "title": "Phase-space analysis in non-minimal symmetric-teleparallel dark energy" }
The current upper limit on N_ eff at the time of CMB by Planck 2018 can place stringent constraints on the parameter space of BSM paradigms in which additional interactions may affect neutrino decoupling. Motivated by this fact, in this paper we explore the consequences for N_ eff at the time of CMB of a light gauge boson (Z') emerging from a local U(1)_X symmetry. First, we analyze generic U(1)_X models with arbitrary charge assignments for the SM fermions and show that, in the context of N_ eff, the generic gauged U(1)_X models can be broadly classified into two categories, depending on the charge assignment of the first generation leptons. We then perform a detailed analysis with two specific U(1)_X models, U(1)_B_3-3L_e and U(1)_B_3-3L_μ, and explore the contribution to N_ eff due to the presence of the Z' realized in those models. For comparison, we also showcase the constraints from low energy experiments like Borexino, Xenon 1T, neutrino trident, etc. We show that in a specific region of parameter space, particularly in the low mass region of Z', the bound from N_ eff (Planck 2018) is more stringent than the experimental constraints. Additionally, a part of the same parameter space may also relax the H_0 tension.

Hubble Tension and Cosmological Imprints of U(1)_X Gauge Symmetry: U(1)_B_3-3 L_i as a case study

Rahul Srivastava

§ INTRODUCTION

The cosmological parameter N_ eff, associated with the number of relativistic degrees of freedom, is crucial in describing the dynamics of the thermal history of the early universe. At very high temperatures of the universe, the photon and neutrino baths were coupled, whereas at low temperatures the interaction rate drops below the Hubble expansion rate and the two baths decouple <cit.>. N_ eff is parameterised in terms of the ratio of the energy densities of the photon and neutrino baths. Within the Standard Model (SM) particle content, the two aforementioned baths were coupled through weak interactions at high temperature and, as the temperature drops (T∼ 2 MeV), they decouple. Assuming such a scenario, the predicted value of N_ eff^ SM turns out to be 3.046 <cit.>. This deviation of N_ eff from the number of neutrino species (3) in the SM particle content is due to various non-trivial effects like non-instantaneous neutrino decoupling, finite temperature QED corrections and flavour oscillations of neutrinos <cit.>. On the observational side, measurements of the cosmic microwave background (CMB), baryon acoustic oscillations (BAO) and other cosmological probes provide constraints on N_ eff. The current Planck 2018 data give a precise measurement of N_ eff at the time of CMB, N_ eff = 2.99^+0.34_-0.33 at 95% confidence level <cit.>. Thus the current upper limit from Planck 2018 data shows that there can be an additional contribution (apart from the SM predicted value) to N_ eff, indicating the scope for new physics.
It is evident that N_ eff will change in the presence of any beyond standard model (BSM) particles with sufficient interactions with either the photon or the neutrino bath at temperatures relevant for neutrino decoupling <cit.>, or in the presence of any extra radiation <cit.>. Thus the upper limit on N_ eff from CMB can be used to constrain such BSM paradigms dealing with any extra energy injection. Several studies have been performed to explore the imprints of BSM models on N_ eff, like models with early dark energy <cit.>, relativistic decaying dark matter <cit.> and non-standard neutrino interactions (NSI) <cit.>. From the perspective of neutrino physics, the last one is very interesting, since non-standard interactions of light neutrinos may alter the late-time dynamics between the photon and neutrino baths, contributing to N_ eff, and the same NSI interactions can be probed by ground based neutrino experiments as well <cit.>.

On the other hand, anomaly-free U(1)_X gauge extended BSM models are well-motivated from several aspects like non-zero neutrino masses <cit.>, flavor anomalies <cit.>, etc. The anomaly condition allows the introduction of right-handed neutrinos in the theory, and thus it can also explain non-zero neutrino masses via the Type-I seesaw mechanism <cit.>. These scenarios naturally involve a gauge boson (Z') that originates from the abelian U(1)_X gauge symmetry and has neutral current interactions with neutrinos and electrons, which may play a nontrivial role in neutrino decoupling and hence in deciding N_ eff <cit.>. The detectability of the gauge boson throughout the mass scale (M_Z') also motivates such scenarios. For a TeV-scale Z', constraints arise from collider experiments <cit.>, whereas in the sub-GeV mass region, low energy scattering experiments (neutrino-electron scattering <cit.>, neutrino-nucleus scattering <cit.>, etc.) are relevant to constrain the parameter space. However, both types of direct searches become less sensitive in the region of lower mass, M_Z'≲𝒪(10   MeV), and lower coupling, g_X ≲ 10^-5. In that case, the CMB observation of N_ eff plays a crucial role and can impose severe constraints on the M_Z' - g_X plane, which stands as the primary focus of our study.

In cosmology, there exists a discrepancy between the values of the expansion rate H_0 obtained from CMB and local measurements <cit.>. Using direct observations of celestial body distances and velocities, the SH0ES collaboration calculated H_0 = 73.04 ± 1.04   Km s^-1 Mpc^-1 <cit.>. However, CMB measurements like the Planck 2015 TT data predict H_0=68.0^+2.6_-3.0  Km s^-1 Mpc^-1 at 1σ <cit.>, and from the recent Planck 2018 collaboration it turns out to be H_0 = 67.36 ± 0.54   Km s^-1 Mpc^-1 <cit.>, where both collaborations analyze the CMB data under the assumption of ΛCDM cosmology. So there is a disagreement between the local and CMB measurements, roughly at the level of 4σ-6σ <cit.>. Although such a discrepancy may arise from systematic errors in the measurements <cit.>, it also provides a hint for BSM scenarios affecting the dynamics of the early universe. One possible way out to relax this so-called "H_0 tension" involves increasing N_ eff into the approximate range ∼3.2-3.5 <cit.>[However, it is essential to acknowledge that increasing N_ eff also provokes a tension related to another parameter of the ΛCDM model, σ_8 <cit.>.].
Thus a U(1)_X gauge extension can also be one possible resolution to the H_0 tension. Two kinds of U(1)_X scenarios have been considered so far in the literature to address the Hubble tension problem and/or the excess in N_ eff at CMB: U(1)_μ-τ <cit.> and U(1)_B-L <cit.>. In both cases, the corresponding gauge boson mass lies below 100 MeV, which attracts severe constraints from scattering experiments <cit.>, energy loss in supernovae, etc. <cit.>. Note that in the first scenario, U(1)_μ-τ, the light gauge boson couples to the μ and τ leptons but not to the electron and quarks at the tree level. As a result, it is subject to fewer existing constraints than U(1)_B-L.

In this work, we begin with a generic U(1)_X extension and explore the aforementioned cosmological phenomena more comprehensively. For simplicity, we assume the light gauge boson was in the thermal bath, and that the right-handed neutrinos are heavy enough (≳ 100 MeV) that they hardly play any role in neutrino decoupling. We study the dynamics of neutrino decoupling in the presence of the light gauge boson (Z') in a generic U(1)_X scenario with arbitrary U(1)_X charges assigned to the SM fermions. We evaluate N_ eff by solving a set of coupled Boltzmann equations that describe the evolution of the light particles, i.e. the electron, the SM neutrinos and the light gauge boson (M_Z'≲𝒪(10  MeV)). Thus we identify the region of parameter space that is consistent with the Planck 2018 observations. We also illustrate the parameter space in which the Hubble tension can be relaxed, where the value of N_ eff falls within the range of 3.2 to 3.5 <cit.>. Our analysis shows that, depending on the U(1)_X charge assignment of the first generation leptons, the generic gauged U(1)_X models can be classified in the context of N_ eff. We incorporate these cosmological observations into specific gauged U(1)_X models, U(1)_B_3-3L_i (i=e,μ,τ). In comparison to others, these extensions encounter fewer constraints, as they are exclusively linked only to the third generation of quarks and one generation of leptons. Our results show that the upper limit on N_ eff from Planck 2018 <cit.> can put stringent bounds on the parameter space of a light Z' in the mass region ≲𝒪(10  MeV), where the other experimental bounds are comparatively relaxed.

The paper is organized as follows. In sec.<ref>, we begin with a brief model-independent discussion of the generic U(1)_X model. Sec.<ref> is dedicated to a comprehensive analysis and discussion of the dynamics governing neutrino decoupling, in terms of the cosmological parameter N_ eff, in the presence of the light Z' originating from the generic U(1)_X. Subsequently, in sec.<ref>, we present the numerical results in terms of N_ eff for this generic U(1)_X model. In sec.<ref>, we discuss how the analysis presented earlier applies to the specific U(1)_X models U(1)_B_3-3L_i (i=e,μ,τ). We present a brief discussion on the H_0 tension in sec.<ref>. Finally, in sec.<ref>, we summarise and conclude with the outcomes of our analysis. In Appendices <ref> to <ref>, we provide various technical details for the calculation of N_ eff.
§ MODEL INDEPENDENT DISCUSSION: EFFECTIVE Z' MODELS

In this section, we look at effective light Z' models.
We take a model-independent but minimalist approach and only assume that the Z' originates from the breaking of a new local abelian U(1)_X gauge symmetry under which the SM particles are charged as shown in Table <ref>. The anomaly cancellation conditions for a typical local U(1)_X gauge symmetry require the presence of additional chiral fermions beyond the SM fermions[Note that for one generation of SM fermions, the only U(1) symmetry which is anomaly free is the SM hypercharge U(1)_Y symmetry. For the full three generations of SM fermions, it is possible to have additional U(1)_X symmetries which can be made anomaly free without adding any new chiral fermions, such as U(1)_q_i-q_j; i,j = 1,2,3, with q_i, q_j denoting the charges of a given generation of SM quarks, but such symmetries have other phenomenological problems and are not of interest to the current study.]. To keep the scenario minimal, we assume that the only BSM fermions needed for anomaly cancellation are the (three) right handed neutrinos ν_R, which are taken to be charged under the U(1)_X symmetry. With the addition of ν_R, several different types of gauged U(1)_X models, including popular ones such as U(1)_B-L or U(1)_μ-τ, can be constructed. We will discuss some of these models in the next Section <ref>. For this purpose, within this section we will not specify the nature of the U(1)_X symmetry nor the charges of the SM particles and ν_R under it, and will treat them as free parameters and proceed with a generic discussion. We will also not go into the details of the anomaly cancellation constraints. All these things will be clarified in the following sections during our discussions on some well motivated U(1)_X models. In Table <ref>, in addition to ν_R (typically required for anomaly cancellation), we also add an SM singlet scalar σ carrying U(1)_X charge, whose vacuum expectation value (VEV) will break the U(1)_X symmetry. Note that apart from the minimal particle content of Table <ref>, most of the U(1)_X models available in the literature may also contain additional BSM particles[Such new particles are typically much heavier than the MeV scale and will not change our analysis <cit.>.]. These additional particles are model dependent and we refrain from adding them, to proceed with a minimal setup. Now some general model independent simplifications and conclusions can be immediately drawn for the charges of the particles under the U(1)_X symmetry listed in the aforementioned Table <ref>.

* Light Z': In the upcoming sections, we delve into an effective resolution of the well-known Hubble parameter tension <cit.> by increasing N_ eff, considering a light Z' in the mass range M_Z'∼𝒪( MeV) <cit.>. To avoid any fine-tuning in keeping M_Z' very light compared to the SM Z gauge boson mass, i.e. M^2_Z'≪ M^2_Z, we consider the corresponding U(1)_X charge of the SM Higgs doublet to be 𝕏_Φ = 0, so that M^2_Z' does not receive any contribution from the SM vev.

* Mass generation for quarks and charged leptons: For the choice 𝕏_Φ = 0, to generate the SM quark and charged lepton masses, we must take 𝕏_Q_i = 𝕏_u_i = 𝕏_d_i and 𝕏_L_i = 𝕏_ℓ_i, such that the standard Yukawa term y^u_ij Q̅_i Φ̃ u_j involving only SM fields can be written in the canonical form, as shown in eq.(<ref>). It is important to note that although within each generation the U(1)_X charges of the quark doublet should be the same as those of the up and down quark singlets, the charges may differ when comparing across different generations, i.e. 𝕏_Q_1≠𝕏_Q_2≠𝕏_Q_3. The same applies to charged leptons.
In fact, in later sections, we will indeed consider flavour dependent U(1)_X symmetries. To simplify our notation, throughout the remaining part of this paper we will denote 𝕏_L_i = 𝕏_ℓ_i = X_i.

* Quark mixing: As a follow-up point, note that if the charges of all three generations of quarks are unequal, i.e. if we take 𝕏_Q_1≠𝕏_Q_2≠𝕏_Q_3, then the Yukawa matrices Y^u and Y^d, and hence the resulting mass matrices, will only have diagonal entries and we will not be able to generate the CKM mixing. Thus, the charges of some (but not all) generations should match with each other in order to allow the generation of quark mixing. The same is true for the charged lepton mass matrices. Again, we will elaborate on this further with specific examples in the coming sections.

* BSM fermions: The U(1)_X charge of the right handed neutrinos is typically fixed by the anomaly cancellation conditions, as we will discuss in a later section with specific examples. As mentioned earlier, for the sake of simplicity, we will not consider U(1)_X symmetries involving chiral fermions beyond the fermion content of Table <ref>.

Based on the aforementioned assumptions, one can write the Yukawa Lagrangian and scalar potential for the general U(1)_X model as:
ℒ_Yuk = y^u_ij Q̅_i Φ̃ u_j + y^d_ij Q̅_i Φ d_j + y^e_ij L̅_i Φ ℓ_j + ℒ_ν + h.c.
and
V(Φ,σ) = μ^2_Φ Φ^†Φ + μ^2_σ σ^†σ + λ_Φ Φ^†Φ Φ^†Φ + λ_σ σ^†σ σ^†σ + λ_Φσ Φ^†Φ σ^†σ.

It is worth highlighting a couple of salient features of the Yukawa terms and the scalar potential of this scenario.

* In eq.(<ref>), ℒ_ν refers to the terms needed for light active neutrino mass generation. These terms depend on the details of the U(1)_X symmetry, the charges of leptons under it, as well as the nature of neutrinos (Dirac or Majorana) and the mechanism involved for mass generation. Furthermore, this typically requires the presence of additional scalars or fermions or both, beyond the particle content listed in Tab. <ref>. Note that even in the massless limit and in the absence of any other interactions, the ν_R still interact with the rest of the particles through their U(1)_X gauge interactions and, depending on the charges and the strength of the U(1)_X gauge coupling (g_X), they can be in thermal equilibrium with the rest of the plasma at a given epoch in the evolution of the early universe.

* Since we want a very light Z', the vev of the σ field, ⟨σ⟩ = v_σ, which is responsible for the Z' mass generation, should be small. Furthermore, after SSB, if we want the real physical scalar σ_R to be light as well, we should have λ_Φσ ≪ 1. The condition λ_Φσ ≪ 1 will also imply that the 125 GeV scalar (h) is primarily composed of the real part of Φ^0 (the neutral component of Φ), and hence the LHC constraints on it can be trivially satisfied.

With these general assumptions, one can examine the potential range of Z' parameters that may account for the observed excess of N_ eff in the Planck 2018 data, leaving the charges of the leptons and the gauge coupling as free parameters.

§ ν_L DECOUPLING IN PRESENCE OF LIGHT Z' AND N_ EFF

It is a well known fact that one of the most precisely measured quantities from cosmology is the effective neutrino degrees of freedom (N_ eff), which may get altered by the presence of light BSM particles.
This change in N_ eff from the SM value can in principle provide one of the solutions to relax the Hubble tension <cit.>. In this work, we focus on a scenario with a light Z' interacting with the SM neutrinos (ν_i, i≡ e,μ,τ) that may lead to a change in N_ eff. Having briefly described the generic features of light Z' models emerging from U(1)_X symmetries in the previous section, we now move to the most crucial part of our paper, which is the cosmological implications of those scenarios. Before delving into the analysis of N_ eff in the presence of such a light Z', we would like to mention some key aspects of N_ eff within the SM framework.

In the standard cosmological scenario at temperature T∼ 20 MeV[By the temperature of the universe (T) we mean the temperature (T_γ) of the photon bath.], only e^± and ν_i are the particles coupled to the thermal (photon) bath, as the energy densities of the other, heavier SM particles are already Boltzmann suppressed. As the universe cools, once the weak interaction rates involving e^± and ν_i drop below the Hubble expansion rate H(T), the neutrinos decouple from the photon bath. Considering only the SM weak interactions, the neutrino decoupling[By ν_L we refer to the whole set of SM neutrinos (ν_i).] temperature turns out to be 2 MeV <cit.>. After that, there exist two separate baths of photons (+e) and ν_i, each with a different temperature, T_γ and T_ν respectively. Approximately at temperatures below T_γ≲ 0.5 MeV, the e^± annihilate and their entropy is transferred entirely to the photon bath, leading to an increment in T_γ compared to the neutrino bath. This difference in temperature is parameterised in terms of N_ eff, which is given by <cit.>
N_ eff = 8/7 (11/4)^4/3 (ρ_ν_L(T_ν)/ρ_γ(T_γ)) = 3 (11/4)^4/3 (T_ν^4/T_γ^4).
At the time of CMB formation, the predicted value of N_ eff within the SM particle content is N_ eff^ CMB=3.046 <cit.>, whereas the recent Planck 2018 data <cit.> estimate it to be N_ eff^ CMB=2.99^+0.34_-0.33 at 95% confidence level (C.L.).

In this scenario, the Z' arising from the aforementioned U(1)_X models introduces new interactions with both ν_i and e^±, and can potentially impact the neutrino decoupling, consequently altering N_ eff^ CMB. As pointed out previously, the SM particles relevant for ν_L decoupling are only e^± and ν_i, whereas the energy densities of the heavy SM particles are negligible due to Boltzmann suppression. The light quarks also do not take part, due to QCD confinement at a much higher temperature, around ∼150 MeV. Following the same argument, in the BSM U(1)_X scenario the Z' must be light enough (M_Z'≲𝒪(10  MeV)) to affect ν_L decoupling, as will be shown in the later part of this section. There are two other BSM particles present in our model: the BSM scalar σ and the RHN ν_R. As we elaborated in sec.<ref>, the BSM scalar σ can be taken heavy enough that it is irrelevant for phenomenology at MeV-scale temperatures. Hence, we integrate out σ to perceive the sole effect of the light Z' in late time cosmology. In Majorana-type mass models, the ν_R are also too heavy to affect ν_L decoupling <cit.>, and we can neglect them as well. However, in Dirac-type mass models, the ν_R are relativistic at MeV temperatures and can significantly alter N_ eff <cit.>. In this work, we only consider heavy Majorana RHNs and ignore their contribution to the temperature evolution. In Fig.<ref> we present the relevant mass scales in a schematic diagram. So, in our proposed scenario, we have to trace the interactions among only three baths, i.e. e, ν_L and Z', to evaluate T_ν or N_ eff (eq.(<ref>)); a quick numerical check of eq.(<ref>) is given below.
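As a quick sanity check of eq.(<ref>), the following few lines of Python (ours, purely illustrative) evaluate N_ eff for the instantaneous-decoupling ratio T_ν/T_γ=(4/11)^1/3, recovering N_ eff=3; the SM value 3.046 then reflects the non-instantaneous decoupling and QED corrections mentioned in the introduction.

```python
def n_eff(t_nu_over_t_gamma):
    """N_eff = 3*(11/4)^(4/3) * (T_nu/T_gamma)^4, cf. the equation above."""
    return 3.0 * (11.0/4.0)**(4.0/3.0) * t_nu_over_t_gamma**4

# Instantaneous e+e- entropy transfer: T_nu/T_gamma = (4/11)^(1/3)
print(n_eff((4.0/11.0)**(1.0/3.0)))   # -> 3.0 exactly
# Any late-time heating of the nu bath by a light Z' raises this ratio,
# and hence N_eff, above the SM prediction of 3.046.
```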
We describe the scenario with a cartoon diagram in Fig.<ref>. It is essential to keep track of the various energy transfers among these three baths, as they play a key role in the temperature evolution equations (see Appendix <ref>). The energy transfer rates are dictated by the various collision processes and the distribution functions of the respective particles <cit.>. Here we list the relevant processes to consider for a successful evaluation of N_ eff.

* SM contributions: The SM weak interactions are active at temperature T∼MeV. At this point, active neutrino annihilations (ν_iν_i↔ e^+e^-) as well as elastic scatterings (ν_i e^±↔ν_i e^±), mediated by the SM Z or W^±, take place to maintain the required thermal equilibrium of the early universe.

* BSM contributions to the γ bath: For a light Z' with mass M_Z'∼𝒪(MeV), sufficient energy density can be pumped into the thermal bath via the decay and inverse decay between Z' and electrons (Z'↔ e^+e^-) at temperatures around 𝒪( MeV). Additional contributions to the thermal bath may in principle come from scattering processes like Z'Z'↔ e^+e^-, Z'γ↔ e^+e^-. However, it turns out that for a very light Z', with mass M_Z'∼𝒪( MeV), its coupling g_X with SM fermions is highly constrained from various experimental data, g_X ≲ (10^-3-10^-5). For such a small coupling, the aforementioned decay process of Z' significantly dominates over the scattering processes. Thus we ignore the scattering contributions in our numerical calculations.

* BSM contributions to the ν bath: Similarly, Z' can transfer energy to the ν_i bath through decay and inverse decay (Z'↔ν_iν_i) processes. Moreover, the ν_iν_i↔ e^+e^- scattering process can play an important role through the one-loop coupling of Z' with the electron. This one-loop coupling of Z' with electrons is responsible for connecting the two separate thermal baths containing ν_i and electrons. This feature arises in certain scenarios of the U(1)_X models and we will elaborate on this issue in great detail in a later section.

* Within the ν bath: In this case, if we assume that the different ν_i flavours have different temperatures, then ν_iν_i↔ν_jν_j (i≠ j), mediated by both Z and Z', will have a significant impact on the overall ν thermal bath. We will address this point in detail in the last part of this section.

To construct the temperature equations and compute the energy transfer rates, we adopt the formalism already developed in ref.<cit.>. It is worth highlighting the approximations made in the formalism prescribed in ref.<cit.> before describing the temperature equations. Firstly, Maxwell-Boltzmann distributions were used to characterize the phase space distributions of all particles in equilibrium, which simplifies the collision term integrals. The use of the Fermi-Dirac distribution in the collision term does not alter the energy transfer rates substantially <cit.>. Additionally, the electron's mass was ignored to simplify the collision terms, as a non-zero electron mass would have resulted in a minimal modification of the energy transfer rate, typically less than a few percent <cit.>. Note that the ν_L masses can be easily neglected, as the relevant temperatures for neutrino decoupling are sufficiently higher than the ν_L mass. After demonstrating all the relevant processes and stating our assumptions, we are now set to construct the temperature evolution equations.
The temperature equations are derived from the Liouville equation for the phase space distribution of particles in a thermal bath (eq.(<ref>)), and the collision terms take care of the energy transfers among the involved particles through the processes discussed before <cit.>. Following the detailed calculations of the temperature evolution for the SM and BSM scenarios displayed in appendices <ref> and <ref>, here we quote the final results for the aforementioned temperature evolution equations <cit.>:
dT_ν_L/dt = -( 4Hρ_ν_L - δρ_ν_L→ e^±/δ t + δρ_Z'→ν_L/δ t) ( ∂ρ_ν_L/∂ T_ν_L)^-1
dT_Z'/dt = -( 3H(ρ_Z' + P_Z') - δρ_Z'→ν_L/δ t - δρ_Z'→ e^±/δ t) ( ∂ρ_Z'/∂ T_Z')^-1
dT_γ/dt = -( 4Hρ_γ + 3H(ρ_e + p_e) + δρ_ν_L→ e^±/δ t + δρ_Z'→ e^±/δ t) ( ∂ρ_γ/∂ T_γ + ∂ρ_e/∂ T_γ)^-1,
where ρ_r, P_r and T_r signify the energy density, pressure density and temperature of species r. Terms like δρ_a→b/δ t indicate the energy transfer rate from bath a to bath b and are determined by integrating the collision terms (see eq.(<ref>)). The energy transfer rates are discussed in great detail in Appendix <ref>. Here we assume all three ν_i generations share the same temperature T_ν_L[As the RHNs are not relevant for our analysis, we will often denote T_ν_L as T_ν.], and ρ_ν_L is the sum over the energy densities of the three generations of ν_i, with δρ_ν/δ t = δρ_ν_e/δ t + δρ_ν_μ/δ t + δρ_ν_τ/δ t.

Note that the equations in eq.(<ref>-<ref>) depend on the thermal history of Z' when it remains in equilibrium with the thermal bath in the early universe (T_Z'≳ M_Z') through its interactions with fermions (f). Z' preserves its thermal equilibrium via decay and inverse decay (Z'↔ f f) and also through scattering (f f↔ Z' Z'(γ)) processes. Thanks to processes like f f↔ Z' Z'(γ), the chemical potentials (μ_i(T)) are suppressed and Z' remains in chemical equilibrium with the SM bath. The condition for the thermal equilibrium of Z' leads to a lower bound on g_X (≳ 10^-9), and this lower bound may shift slightly depending on the specific choice of U(1)_X charges and the value of M_Z'. Alternatively, it is highly plausible that Z' was initially not in thermal equilibrium in the early universe, but was produced from other SM particles via the freeze-in process. In such scenarios, it is necessary to solve the coupled equations for μ_i(T) to compute N_ eff <cit.>. This alternative scenario requires a detailed complementary study, which will be reported elsewhere.

As mentioned above, we stick to the simplest BSM scenario, assuming Z' in thermal equilibrium, and set the initial condition for the set of equations eq.(<ref>-<ref>) as T_ν_L=T_Z'=T_γ at T_γ≳ M_Z', for the reasons already discussed. In such a case one can further simplify the scenario by assuming T_Z'=T_ν_L, as Z' remains coupled to the ν bath for a longer time than to the γ bath <cit.>. This makes the term δρ_Z'→ν_L/δ t=0 and reduces the three equations in eq.(<ref>)-(<ref>) to two; a schematic numerical implementation of such coupled temperature equations is sketched below.
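To make the structure of these coupled equations concrete, the following Python sketch integrates a stripped-down, SM-only version of the T_ν and T_γ equations (no Z', three neutrino flavours at a common temperature). The energy transfer rate is taken of the schematic Maxwell-Boltzmann form δρ/δt ∝ G_F²(T_γ⁹-T_ν⁹) with an assumed order-one normalization; the precise coefficients and the Z' terms live in the appendices and refs. quoted above, so the numbers below are only indicative.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Natural units: MeV
GF  = 1.1664e-11      # Fermi constant [MeV^-2]
ME  = 0.511           # electron mass [MeV]
MPL = 1.22093e22      # Planck mass [MeV]

rho_nu    = lambda T: 3 * (7/8) * (np.pi**2/15) * T**4   # 3 massless nu species
rho_gamma = lambda T: (np.pi**2/15) * T**4

def fd_integral(T, weight):
    # Fermi-Dirac integral for e+- (g = 4); weight(E,p) picks rho_e or p_e
    f = lambda p: weight(np.hypot(p, ME), p) / (np.exp(np.hypot(p, ME)/T) + 1.0)
    return (4/(2*np.pi**2)) * quad(f, 0.0, 30*T + 30*ME)[0]

rho_e = lambda T: fd_integral(T, lambda E, p: E * p**2)
p_e   = lambda T: fd_integral(T, lambda E, p: p**4 / (3*E))

def drho_dT(rho, T, h=1e-4):      # numerical temperature derivative
    return (rho(T*(1+h)) - rho(T*(1-h))) / (2*T*h)

def transfer(Tg, Tnu):
    # schematic MB-statistics nu<->e energy exchange rate (O(1) norm. assumed)
    return 32 * GF**2 / np.pi**5 * (Tg**9 - Tnu**9)

def derivs(t, y):
    Tg, Tnu = y
    H = np.sqrt(8*np.pi*(rho_gamma(Tg) + rho_e(Tg) + rho_nu(Tnu))/3) / MPL
    dTnu = (-4*H*rho_nu(Tnu) + transfer(Tg, Tnu)) / drho_dT(rho_nu, Tnu)
    dTg  = -(4*H*rho_gamma(Tg) + 3*H*(rho_e(Tg) + p_e(Tg)) + transfer(Tg, Tnu)) \
           / (drho_dT(rho_gamma, Tg) + drho_dT(rho_e, Tg))
    return [dTg, dTnu]

T0 = 10.0                          # start deep in equilibrium [MeV]
H0 = np.sqrt(8*np.pi*(rho_gamma(T0)+rho_e(T0)+rho_nu(T0))/3)/MPL
sol = solve_ivp(derivs, [1/(2*H0), 1e26], [T0, T0], method='Radau',
                rtol=1e-6, atol=1e-12)
Tg, Tnu = sol.y[:, -1]
print("T_gamma/T_nu =", Tg/Tnu)                           # ~ (11/4)^(1/3) ~ 1.40
print("N_eff        =", 3*(11/4)**(4/3)*(Tnu/Tg)**4)      # ~ 3.0 (SM-like)
```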
However, the equations described in the above format are useful for generic scenarios. So far, we have assumed that all three ν_L share the common temperature T_ν, as mentioned earlier. This approximation is valid because neutrino oscillations are active around MeV temperatures, leading to all three generations of neutrinos equilibrating with each other <cit.>. However, one can also evaluate the temperatures assuming different temperatures T_ν_i (i=e,μ,τ) for the three generations, and the relevant equation reads (see eq.(<ref>))
dT_ν_i/dt = -( 4Hρ_ν_i - (δρ_ν_i→ e/δ t)_ tot - ∑_j≠ i(δρ_ν_i→ν_j/δ t)_ tot + δρ_Z'→ν_i/δ t) ( ∂ρ_ν_i/∂ T_ν_i)^-1.
Note that eq.(<ref>) differs from the one described in eq.(<ref>), as the latter also contains the energy transfer rates between the different generations of ν_i. However, the value of N_ eff does not change significantly (≲ 10%) if one solves the temperature equation without considering different T_ν_i (see Appendix <ref>) <cit.>. For the same reason, we stick to the simpler scenario assuming a common temperature of the ν_L bath, and solve eq.(<ref>)-(<ref>) for the numerical estimation of N_ eff throughout this paper.

After the detailed discussion of the basic framework, we are now set to perform an exhaustive numerical analysis. In Fig.<ref> we showcase the evolution of the temperature ratio (T_γ/T_ν) as well as Δ N_ eff (≡ N_ eff^ CMB-3.046) with T_γ for different U(1)_X charge combinations. Here we denote T_ν≡ T_ν_i with the assumption that all three neutrinos share a common temperature. Following the discussion in the previous paragraphs, we stress that the relevant charges for ν_L decoupling are X_L_i = X_l_i≡ X_i, or more precisely their modulus values, as their squared values enter the collision terms (see Appendix <ref>). In the aforementioned plot, we present the simplest scenario where all three ν_L share the same temperature. We consider the benchmark parameter (BP) values M_Z'=10 MeV with g_X=10^-7 (Fig.<ref> & <ref>) and g_X=10^-8 (Fig.<ref> & <ref>). We will justify the importance of such a light Z' in this process at the end of this section. For such a light M_Z' and g_X=10^-7, we show the variation of T_γ/T_ν with T_γ and Δ N_ eff with T_γ in Fig.<ref> and Fig.<ref> respectively. From the above figure it is very clear that around T_γ≳ 10 MeV, T_γ/T_ν=1, as both ν_L and Z' were coupled to the photon bath at that time. However, the ratio starts to increase after T_γ∼ 0.5 MeV and then saturates at low temperature (T_γ∼ 10^-2 MeV), as almost all the processes mentioned earlier gradually become inefficient at low T_γ. In Fig.<ref> we portray the corresponding variation in Δ N_ eff for all the U(1)_X charge combinations of Fig.<ref>. As the values of Δ N_ eff become saturated around T_γ∼ 10^-2 MeV, we can surmise that they will remain unchanged till the recombination epoch (T_γ∼ 0.1 eV), and denote Δ N_ eff(T_γ∼ 10^-2 MeV)≡Δ N_ eff^ CMB. In a similar way, for g_X=10^-8, we display the evolution of T_γ/T_ν and Δ N_ eff in Fig.<ref> and Fig.<ref> respectively. In Fig.<ref> and Fig.<ref> we also showcase the 2σ exclusion limit Δ N_ eff^ CMB=0.28 from Planck 2018 <cit.>, shown by the black dotted line.
Note that this limit is only valid at CMB, and it excludes any value of Δ N_ eff at that time above the black dotted line. It is easy to infer that in the presence of the light Z' the values differ from the SM prediction. Before spelling out the physical implications of the U(1)_X scenario, we tabulate our findings from Fig.<ref> in Table <ref> for ease of understanding.

Both from Fig.<ref> and Table <ref> it is evident that in our proposed U(1)_X scenario the value of the temperature ratio, or of N^ CMB_ eff, differs from the value predicted by the SM alone. One can interpret this feature from the new processes involved in ν_L decoupling apart from the SM weak interactions (see Fig.<ref>). The light Z' acts as a bridge between the photon and ν_L baths and tries to balance their energy densities (through decays and scatterings), and hence reduces the temperature ratio from the value predicted by the SM alone. From both Fig.<ref> and Fig.<ref> we notice that, for a fixed value of g_X and X_1≠0, the ratio T_γ/T_ν (at very late times) decreases and Δ N^ CMB_ eff grows with an increase in |X_1|. For X_1≠0 the following BSM processes affect ν_L decoupling: (i) Z' decaying to both e^+e^- and ν_iν_i, and (ii) the scattering process ν_iν_i→ e^+e^- mediated by Z'. Thus, with an increase in |X_1|, the effective coupling (X_1 g_X) governing these BSM processes increases and hence boosts the BSM contribution (see eq.(<ref>) and eq.(<ref>)). As a result, picking higher values of X_1 leads to a higher interaction rate between the ν bath and the photon bath, leading to an enhancement in Δ N^ CMB_ eff or, more precisely, a diminution in T_γ/T_ν. On the contrary, when X_1=0, the only BSM process relevant for ν_L decoupling is Z'↔ν_μ,τν_μ,τ. At T_ν < M_Z', eventually all Z' decay to ν_L, transferring all their energy density to the ν bath only. As the full equilibrium number density of Z' finally gets diluted into the ν_L bath (with 100% branching ratio), the outcome does not depend on the coupling strength (X_2/3 g_X): there is no change in N_ eff with a change in the charge assignments (X_2,X_3) for X_1=0. For the same reason, we infer that for X_1≠0, N_ eff increases with an increase in g_X, whereas for X_1=0 it does not change at all with a change in g_X (compare Fig.<ref> and Fig.<ref>). Because for X_1=0 the Z' has only the decay mode to ν_μ,τ, T_ν starts to increase before the e^± annihilate (T_γ∼ 0.5 MeV). This causes a slight dip in the T_γ/T_ν evolution line at higher temperatures for X_1=0 in Fig.<ref> and Fig.<ref>. Before concluding the discussion of Fig.<ref>, we point out that we compute N_ eff with the basic assumption that all ν_L share the same temperature, and hence all ν_L equilibrate with each other even if Z' decays (transfers energy) to only one of them. For this reason, we observe similar behaviour of T_γ/T_ν for all U(1)_X charge combinations with the same value of X_1, and the phenomenology elaborated in the earlier paragraph accounts for that. One should note that the U(1)_X charge combinations used in Fig.<ref> are not exhaustive, yet one can easily deduce the outcome of other combinations from the reasoning made above. For example, all charge combinations with |X_1|=1 and |X_1|=0 will lead to N_ eff^ CMB=3.38 and 3.34 respectively, for g_X=10^-7, M_Z'=10 MeV. This is the most pivotal point of our analysis and we will revisit it in the next section.
At this juncture, it is worth commenting on the dependence of N^ CMB_ eff on g_X for |X_1|=0, i.e. in the absence of a tree-level Z' e^+e^- coupling. In this case, some effective coupling between Z' and e^± can still be generated, depending on the specific U(1)_X model <cit.>. In the subsequent section, we will extensively discuss such a scenario, where the induced Z' e^+e^- coupling plays a crucial role in deciding N^ CMB_ eff. For ease of notation, from now on, whenever we say N_ eff, we refer to N_ eff^ CMB only.

Having discussed the dependence of N_ eff on the two most important parameters of the BSM model, the universal Z' gauge coupling g_X and the U(1)_X charge combinations, we now turn our attention to its dependence on the light M_Z'. In Fig.<ref> we show the variation of Δ N_ eff^ CMB[To amplify the change in N_ eff with M_Z' and portray it more lucidly, for this particular plot we switch to Δ N_ eff^ CMB≡ N_ eff-3.046. The variation in N_ eff immediately follows from it.] with M_Z' for a fixed coupling g_X=10^-7 with different U(1)_X charge combinations. Rather than showing all the U(1)_X charge combinations used in Fig.<ref>, we portray only three distinct ones in Fig.<ref>. We indicate the combinations (|X_1,2,3|=1), (|X_1|=0, |X_2,3|=1) and (|X_1|=3, |X_2,3|=0) by blue, green and red lines respectively. From the aforementioned figure, we observe that Δ N_ eff^ CMB decreases as M_Z' increases, and eventually it becomes almost zero, reproducing the SM value of N_ eff when M_Z'≳ 30 MeV. This feature can be interpreted from our previous discussion in the context of Fig.<ref>. The energy density of a heavier Z' gets Boltzmann suppressed at the ν_L decoupling temperature (T_γ∼ 2 MeV), making the BSM contribution less significant for ν_L decoupling. On the other hand, the Z'-mediated scattering processes between ν and e are also propagator suppressed for higher M_Z'. At very high M_Z' (≳ 30 MeV), the BSM contribution hardly plays any role in ν_L decoupling, resulting in Δ N_ eff^ CMB=0. Following the same argument, we infer that a lower value of M_Z' will enhance the BSM contribution and hence Δ N_ eff^ CMB.

So far we have explored the dependence of Δ N_ eff^ CMB on various model parameters; the dependence of N_ eff also follows easily from it. Keeping in mind the key findings from the U(1)_X Z' models in the context of N_ eff, in the following section we will explore their contribution to alleviating the Hubble tension.

§ NUMERICAL RESULTS

In the previous section, we pinned down the key aspects of a light Z' from a generic U(1)_X extension and showed that the U(1)_X charge assignments play a key role in deciding N_ eff, as well as its dependence on the coupling g_X. In this section, we take a closer look at the model parameters and explore their cosmological implications through exhaustive numerical scans. Though one can have numerous U(1)_X charge assignments, as suggested in Table <ref>, in the context of N_ eff the arbitrary charge assignments can be categorised into only two classes when we assume all ν_L share the same temperature (through oscillations). Needless to say, it is the coupling of Z' with the electron (apart from the ν_i) that affects N_ eff, and not the couplings with τ, μ or the quarks, for the reasons already discussed earlier.
Thus the light Z' models can be broadly classified into two pictures, X_1≠0 and X_1=0, more precisely, according to whether Z' couples to the electron or not (see Fig.<ref>). In our discussion of the light Z' phenomenology so far, we have mainly focused on its tree-level couplings with fermions. Yet, it is important to highlight that even in the absence of a tree-level Z' e^+e^- interaction (X_1=0), Z' can develop an induced coupling[The source of the induced coupling is model dependent. It can originate from kinetic mixing, can be loop induced, or can originate from flavour violating couplings. In this model independent section we will remain agnostic about its origin.] with e^± <cit.>. As a result, for |X_1|=0, while computing N_ eff, we need to consider the following effective Z' e^+e^- interaction Lagrangian:
ℒ_int = (ϵ e) ēγ^μ e Z'_μ,
where ϵ is the induced effective coupling. If, for instance, the induced coupling is generated from γ-Z' kinetic mixing at the one-loop level, its expression is given as (see Appendix A of ref.<cit.>)
ϵ = g_X/2π^2 ∑_ℓ X_ℓ ∫_0^1 dx  x(1-x) log( Λ^2/Δ_ℓ),
where Δ_ℓ = m_ℓ^2 - x(1-x)q^2, Λ denotes an arbitrary mass scale and the summation runs over all U(1)_X charged fermions with mass m_ℓ. This induced coupling will have a significant impact in scenarios where X_1=0, as we will shortly see. While we aim to maintain a model-independent discussion, the effective coupling described in eq.(<ref>) relies on the characteristics of particular U(1)_X models. Therefore, to compare our findings with the existing literature, we opt for the benchmark value ϵ=-g_X/70, which is very commonly used for L_μ-L_τ models (with X_1=0) <cit.>; a numerical evaluation supporting the magnitude of this benchmark is sketched below. Henceforth, in all our numerical results, we will use this particular value of ϵ.

In Fig.<ref>, we aim to investigate the role of g_X in Δ N_ eff^ CMB while maintaining a constant M_Z'=10 MeV; as stated earlier, the variation of N_ eff follows from it. This scrutiny involves two distinct scenarios for the Z' e^+e^- coupling: (a) solely with tree-level couplings (ϵ=0), depicted in Fig.<ref>, and (b) considering the induced coupling as well (ϵ≠0), shown in Fig.<ref>. We display the variation of Δ N_ eff^ CMB with g_X for the three U(1)_X charge combinations (|X_1,2,3|=1), (|X_1|=0, |X_2,3|=1) and (|X_1|=3, |X_2,3|=0) by blue, green and red lines respectively. In both Fig.<ref> and Fig.<ref>, the grey band corresponds to the values of Δ N^ CMB_ eff that are excluded by the 2σ upper limit obtained from the Planck 2018 measurement <cit.>. Note that the dependence of Δ N_ eff^ CMB (and hence N_ eff) on g_X for the cases with X_1≠0 remains the same even after including the mixing. This feature is easy to understand, as the induced Z' e^+e^- coupling (ϵ) is suppressed by more than an order of magnitude compared to the corresponding tree-level Z' e^+e^- interaction. Hence, in the presence of the tree-level Z' e^+e^- interaction (X_1≠0), one can safely ignore the contribution of Z' arising from the induced coupling ϵ. For the reasons already discussed in sec.<ref>, the BSM contribution increases as the value of g_X rises, leading to an increase in Δ N_ eff^ CMB (and in N_ eff), as shown in the aforementioned figure. However, comparing Fig.<ref> and Fig.<ref>, one notices that the dependence of Δ N_ eff^ CMB on g_X for the case with X_1=0 changes drastically when the induced coupling is present (ϵ≠0). In Fig.<ref>, we discern that Δ N_ eff^ CMB remains unchanged under variations of g_X for X_1=0 in the absence of the induced coupling (ϵ=0).
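Before moving on, we illustrate numerically where the benchmark ϵ=-g_X/70 quoted above comes from. The Python sketch below evaluates the one-loop integral of eq.(<ref>) at q²→0 for L_μ-L_τ-type charges; note, as an assumption on our part, that reproducing the standard |ϵ|≈g_X/70 requires an explicit factor of the electric charge e in the prefactor, and the overall sign depends on charge and kinetic-mixing conventions.

```python
import numpy as np
from scipy.integrate import quad

ALPHA_EM = 1/137.036
E_CHARGE = np.sqrt(4*np.pi*ALPHA_EM)       # electric charge e
M_MU, M_TAU = 0.10566, 1.77686             # lepton masses [GeV]

def eps_over_gx(charges, q2=0.0, Lam=1.0):
    """charges: dict {mass_GeV: X_l}. Lam-dependence cancels when sum(X_l)=0."""
    total = 0.0
    for m, X in charges.items():
        integrand = lambda x: x*(1-x)*np.log(Lam**2/(m**2 - x*(1-x)*q2))
        total += X * quad(integrand, 0.0, 1.0)[0]
    # NOTE (assumption): explicit factor of e included in the prefactor
    return E_CHARGE/(2*np.pi**2) * total

eps = eps_over_gx({M_MU: +1, M_TAU: -1})   # L_mu - L_tau type charges
print("|eps|/g_X = %.4f ~ 1/%.0f" % (abs(eps), 1/abs(eps)))   # ~ 1/70
```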
In Fig.<ref>, we investigate the role of g_X on Δ N_ eff^ CMB while maintaining a constant M_Z'=10 MeV; as stated earlier, the variation of N_ eff also follows from it. This scrutiny involves two distinct scenarios for the Z' e^+e^- coupling: (a) solely with tree-level couplings (ϵ=0), depicted in Fig.<ref>, and (b) considering the induced coupling as well (ϵ≠ 0), shown in Fig.<ref>. We display the variation of Δ N_ eff^ CMB with g_X for three U(1)_X charge combinations, (|X_1,2,3|=1), (|X_1|=0,|X_2,3|=1) and (|X_1|=3,|X_2,3|=0), by blue, green and red lines, respectively. In both Fig.<ref> and Fig.<ref>, the grey band corresponds to the values of Δ N^ CMB_ eff that are excluded by the 2σ upper limit obtained from the Planck 2018 measurement <cit.>. Note that the dependence of Δ N_ eff^ CMB (and hence N_ eff) on g_X for the cases with X_1≠0 remains the same even after including mixing. This feature is easy to understand, as the induced Z' e^+e^- coupling (ϵ) is suppressed by an order of magnitude compared to the corresponding tree-level Z' e^+e^- interaction. Hence, in the presence of the tree-level Z^' e^+e^- interaction (X_1 ≠ 0), one can safely ignore the contribution of Z^' arising from the induced coupling ϵ. For the reasons already discussed in sec.<ref>, the BSM contribution increases as g_X rises, leading to an increase in Δ N_ eff^ CMB (and N_ eff) as well, as shown in the aforementioned figure. However, comparing Fig.<ref> and Fig.<ref>, one notices that the dependence of Δ N_ eff^ CMB on g_X for the case with X_1=0 changes drastically when the induced coupling (ϵ ≠ 0) is present. In Fig.<ref>, we discern that Δ N_ eff^ CMB remains unchanged with variations in g_X for X_1=0 in the absence of the induced coupling (ϵ=0). In this scenario, regardless of the associated coupling, the Z' decay is limited to ν_L exclusively, transferring its entire energy density, as reasoned in sec.<ref>. This observation upholds our earlier analysis in the preceding section. On the other hand, when |X_1|=0, in the presence of the induced coupling (ϵ ≠ 0), Z' can couple to e^±. Thus, for nonzero ϵ, the decay Z'→ e^+e^- and the ν_iν̄_i→ e^+e^- scattering processes mediated by Z' continue during the ν_L decoupling epoch (T_γ∼ 1 MeV). Among these two processes, the scattering process tries to balance the e and ν_L baths, increasing T_ν and hence N_ eff. Therefore, as g_X increases, the BSM contribution also increases, resulting in an overall rise in Δ N_ eff^ CMB. This feature is reflected in Fig.<ref> (red line) for g_X≳ 4 × 10^-8. On the other extreme, for g_X≲ 4 × 10^-8, the scattering process (∝ϵ^2 g_X^2) is unable to compete with the tree-level decay Z'→ν_Lν_L (∝ g_X^2). Hence, for lower values of g_X (≲ 4 × 10^-8), Z' promptly decays to the ν bath, transferring all its energy to the ν sector, and the scattering processes become inefficient at diluting this extra energy density into the e bath. Consequently, for g_X ≲ 4 × 10^-8 the Δ N_ eff^ CMB curve for X_1=0 flattens out in Fig.<ref>. When g_X (≲ 7 × 10^-8) is very small, the BSM contribution that affects the evolution of the ν_L energy density is primarily dominated by the Z' decay processes. Irrespective of the coupling, at this level the Z' transfers all its energy density to ν_L. It is essential to note that at this point the value of Δ N_ eff^ CMB for X_1=0 turns out to be identical for the scenarios with and without the induced coupling, as can be seen by comparing Fig.<ref> and Fig.<ref>. This outcome serves as validation of our previous argument.

At this point, it is worth pointing out another aspect of the tree-level vs induced Z' coupling in generating non-zero BSM contributions to Δ N_ eff^ CMB. When g_X exceeds 10^-8, the two U(1)_X charge combinations with non-zero | X_1|, for which the Z' e^+e^- interaction arises at tree level, yield greater contributions to Δ N_ eff^ CMB than the induced Z' coupling does for X_1 = 0. This distinction is particularly noticeable in Fig. <ref>.

Now, we put forward the main thrust of this paper and split the analysis of a light Z' realized in different U(1)_X models in the context of N_ eff into two categories, as shown in Fig.<ref>. The consequences of all other U(1)_X charge combinations can be easily anticipated from these broad classifications. Following this line of thought, we now perform numerical scans to explore the imprints on N_ eff in the presence of a light Z' emerging from the two broad classes of generic U(1)_X models in the following two subsections.

§.§ Z' having tree level coupling with e^±

Here we consider the charge assignment |X_1,2,3|=1 and show the contours of constant N_ eff[As stated earlier, by N_ eff we refer to N_ eff at the time of the CMB.] in the M_Z' vs. g_X plane in Fig.<ref>. In the same plot, we also showcase the 2σ bound from Planck 2018 data <cit.>, shown by the grey dashed line.
The parameter space to the left of the grey dashed line, shown by the grey region, is excluded by Planck 2018 data at 2σ <cit.>. The blue and red dashed lines indicate the values N_ eff^ CMB=3.2 and 3.5, respectively. The yellow region between the two lines corresponds to the parameter space that can resolve the H_0 tension, as pointed out in ref.<cit.>. From the figure we observe that for a fixed g_X, N_ eff decreases with an increase in M_Z', as we explained in the context of Fig.<ref>. For very high M_Z' the contribution to N_ eff becomes negligible. We consider the lowest value of g_X=10^-9, as below that Z' fails to thermalize in the early universe. Following the discussion made earlier in this section, it is evident that increasing the value of |X_1| will lead to a gradual shift of the contour for the Planck 2018 upper limit (grey dashed line) towards the right, i.e. towards higher values of M_Z'.

§.§ Z' having induced coupling with e^±

Here we consider the charge assignment |X_1|=0, |X_2,3|=1 and show the contours of constant N_ eff in the M_Z' vs. g_X plane in Fig.<ref>. Similar to Fig.<ref>, we portray the contour lines for N_ eff=3.2, N_ eff=3.5 and the 2σ upper limit from Planck 2018 <cit.>, depicted by blue, red and grey dashed lines. The grey region to the left of the grey dashed line in each figure is excluded by the 2σ upper bound from Planck 2018 data <cit.>. We show the numerical results for the case |X_1|=0 without the induced coupling (ϵ=0) in Fig.<ref>. The dependence of N_ eff on the model parameter g_X is straightforward, as we noticed earlier in Fig.<ref>. For |X_1|=0, in the absence of the induced coupling, N_ eff stays unchanged despite the variation in g_X, resulting in distinct vertical N_ eff contours in the M_Z'-g_X plane in Fig.<ref>. Also, from the same figure we note a decrease in N_ eff with an increase in M_Z' for a fixed g_X, as elaborated earlier. Note that, in the absence of the induced coupling (ϵ=0), the contour for the Planck 2018 upper limit (grey dashed line) will not change with increasing X_2,3 for such (X_1=0) U(1)_X models, as argued before in the context of Fig.<ref>. However, the results are quite different after including the induced coupling (ϵ≠ 0) of Z' with electrons, as shown in Fig.<ref>. Due to the induced coupling of Z' with electrons, the N_ eff contours replicate the features of the |X_1|=1 case (shown in Fig.<ref>) for higher values of g_X (≳ 10^-8), with a knee-like pattern around g_X∼ 10^-8. Such a non-trivial dependence is the consequence of the interplay between the tree-level decay (Z'→ν_L ν_L) and the induced Z'-mediated scattering, as explained earlier in detail. We notice the bend in the contour lines in Fig.<ref> because the collision term accounting for the BSM contribution in computing N_ eff is decay dominated in the lower-coupling region (g_X≲ 4× 10^-8). It is worth mentioning that for ϵ≠ 0 the contour for the Planck 2018 upper limit (grey dashed line) will shift towards the right with increasing X_2,3 for g_X≳ 4× 10^-8 and will remain unchanged for g_X≲ 4× 10^-8. Thus, from this section we propound that the cosmological imprints of light Z' models due to different charge assignments, leading to different U(1)_X models, can be put under the same roof following our prescription. However, in this model-independent analysis we ignored the experimental constraints, which are inevitably relevant for our parameter space. The detailed analysis of those constraints on a light Z' depends on the specific U(1)_X choice and will be reported elsewhere <cit.>.
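Before moving on, the Boltzmann suppression invoked above can be checked with a short numerical sketch (ours; natural units, an illustration rather than part of the actual scans). It compares the equilibrium energy density of a Z' of mass M_Z' (g=3 boson) to that of a single massless neutrino species at T_γ ≃ 2 MeV, showing how the ratio collapses once M_Z' ≳ 30 MeV:

```python
import numpy as np
from scipy.integrate import quad

def rho_boson(m, T, g=3):
    """Equilibrium energy density of a massive boson (Bose-Einstein) in MeV^4."""
    f = lambda p: p**2*np.hypot(p, m)/(np.exp(np.hypot(p, m)/T) - 1.0)
    return g/(2*np.pi**2)*quad(f, 0.0, 40.0*T + 20.0*m)[0]

T = 2.0                                   # photon temperature near nu_L decoupling [MeV]
rho_nu1 = 2*(7/8)*(np.pi**2/30)*T**4      # one massless nu + nubar species
for M in (1.0, 10.0, 30.0, 50.0):
    print(f"M_Z' = {M:5.1f} MeV:  rho_Z'/rho_nu = {rho_boson(M, T)/rho_nu1:.3e}")
```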
For completeness, we show the numerical results for some specific U(1)_X models along with the experimental constraints in the next section.

§ SPECIFIC U(1)_X SYMMETRIES: U(1)_B_3-3L_J

Let us now look at the type of U(1)_X symmetry that is flavour dependent in both the quark and the lepton sector, namely the U(1)_B_i-3L_j gauge symmetries <cit.>. The possible gauged symmetries of this type are U(1)_B_1-3L_1, U(1)_B_1-3L_2, U(1)_B_1-3L_3, U(1)_B_2-3L_1, U(1)_B_2-3L_2, U(1)_B_2-3L_3 and the three U(1)_B_3-3L_j cases; here we discuss the last three:

* U(1)_B_3-3L_e
* U(1)_B_3-3L_μ
* U(1)_B_3-3L_τ

The charges of three such symmetries are listed in Table <ref>. The charges of the other U(1)_B_i-3L_j symmetries are analogous and can be written down without difficulty. For all three cases, the charges of ν_R_i can be fixed by the anomaly cancellation conditions. A convenient anomaly-free charge assignment for ν_R_i is ν_R_i = -3, ν_R_j = 0 for j ≠ i in the U(1)_B_i-3L_j symmetry; e.g., for the U(1)_B_3-3L_2 gauge symmetry the charges can be ν_R∼ (0, -3, 0) <cit.>. The charges of the σ field depend on the details of the model and we will not go into the details of model building unless needed [see for example ref.<cit.>]. As we mentioned earlier in sec.<ref>, we assume ν_R and σ to be heavy enough that they are irrelevant for the analysis of N_ eff. Before exploring the phenomenology of the specific U(1)_X models, we outline an overview of the exclusion bounds on the mass of the light gauge boson (M_Z') and the corresponding gauge coupling (g_X) within the mass range M_Z'∼𝒪(10  MeV), which is our range of interest. Here, we briefly review the various types of low-energy experimental observations that can be used to constrain scenarios involving any U(1)_X. In the following subsections, we will present the exclusion bound identified by each low-energy experiment for the specific U(1)_B_3-3L_i scenarios (i=e, μ, τ), along with our cosmological findings.

Elastic electron-neutrino scattering (EνES): The elastic scattering of neutrinos with electrons (ν_α  e →ν_α e, α=e,μ,τ) in laboratory experiments serves as one of the probes of non-standard interactions of neutrinos and electrons with the light gauge boson (Z^'). In the SM, e-ν_e scattering involves both charged-current (CC) and neutral-current (NC) weak interactions, whereas e-ν_μ,τ scattering is solely governed by the NC interaction <cit.>. Elastic e-ν_e,μ,τ scattering can be altered in the presence of an additional NC interaction mediated by the U(1)_X gauge boson Z^', which can be probed in low-energy scattering experiments. Note that the interference terms in the matrix amplitude between the SM (W- and Z-mediated) and BSM (Z^'-mediated) contributions play a critical role in altering the scattering rate. The interference term in the differential cross section is given by <cit.>

[dσ_i-e/dE_R]_ Interference = √(2)G_F m_e/π ((g_X X_1)(g_X X_i)/(2 E_R m_e+M_Z'^2)) ×[(g_L^i+g_R^i)(1-m_e E_R/2 E_ν^2) -g_R^i (E_R/E_ν)(2-E_R/E_ν)],

where E_R and E_ν denote the electron recoil energy and the incoming neutrino energy, respectively. g_L,R are defined in appendix <ref>. For the cases where X_1=0, the term (g_X X_1) is replaced by ϵ e in eq.(<ref>).
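As a quick numerical illustration of eq.(<ref>) (our own sketch, with purely illustrative benchmark numbers; g_L and g_R follow the definitions of appendix <ref>):

```python
import numpy as np

GF  = 1.1663787e-11   # Fermi constant [MeV^-2]
me  = 0.511           # electron mass [MeV]
sw2 = 0.223           # sin^2(theta_W)

def dsigma_dER_interference(ER, Enu, MZp, gX, X1, Xi, flavor="e"):
    """SM-BSM interference term of d(sigma)/dE_R for nu_i - e scattering [MeV^-3].
    ER, Enu, MZp in MeV; X1, Xi are U(1)_X charges (for X1 = 0, pass eps*e/gX as X1)."""
    gL = 0.5 + sw2 if flavor == "e" else -0.5 + sw2   # extra CC piece for nu_e only
    gR = sw2
    prop = (gX*X1)*(gX*Xi)/(2.0*ER*me + MZp**2)       # light-Z' propagator factor
    kin  = (gL + gR)*(1.0 - me*ER/(2.0*Enu**2)) - gR*(ER/Enu)*(2.0 - ER/Enu)
    return np.sqrt(2.0)*GF*me/np.pi*prop*kin

# Illustrative point: 7Be solar-neutrino line, 100 keV recoil, 10 MeV Z'.
print(dsigma_dER_interference(ER=0.1, Enu=0.862, MZp=10.0, gX=1e-7, X1=1, Xi=1))
```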
Thus, observing the electron recoil rate imposes constraints on the M_Z^'-g_X plane for a model-specific scenario. The coupling strength of the light gauge boson with leptons varies over gauge extensions; as a result, the constraints will vary from model to model. Borexino <cit.> and dark matter experiments like XENON1T <cit.>, dedicated to measuring the electron recoil rate, are relevant in the context of e-ν_α elastic scattering <cit.>. The Borexino experiment is designed to study the low-energy solar neutrinos (ν_e)[EνES event rates for atmospheric neutrinos are negligible compared to solar ones <cit.>.] produced via the decay of ^7Be, by observing the electron recoil rate through the neutrino-electron elastic scattering process <cit.>. These solar neutrinos undergo flavour change as they travel from the Sun to the detector. This flavour-changing phenomenon, ν_e →ν_α (α=e,μ,τ), can be accounted for using the transition probability P_eα <cit.>. Therefore the number of events for solar neutrinos (ν_e) interacting with the electrons in Borexino is N_ν_e∝∑_α=e,μ,τ P_e α σ_e-ν_α <cit.>. Note that in the solar neutrino flux P_ee (∼ 50%) > P_eμ = P_eτ, and the dominant contribution to the interference term is due to ν_e, as it has both CC and NC interactions with the electron <cit.>. Hence for X_1≠ 0 one can approximate the differential recoil rate <cit.>

[dR/dE_R]_ BSM ∝ P_ee [dσ_ee/dE_R]_ Interference.

Note that when X_1=0, i.e. in the absence of a tree-level coupling of Z' with the electron, one has to consider the elastic scattering (via the loop-induced Z' coupling) of an electron with ν_μ/τ, with the appropriate transition probabilities.

Coherent elastic neutrino-nucleus scattering (CEνNS): The COHERENT experiment investigates coherent elastic scattering between neutrinos and nuclei in a CsI target (ν N →ν N) <cit.>. This mode of interaction opens up new opportunities for studying neutrino interactions, including, in this case, the introduction of a new light gauge boson. In the SM, elastic neutrino-nucleus scattering (ν-N) takes place via NC interactions between the neutrinos (ν_e,μ,τ) and the nucleons, or more precisely with the first generation of quarks (q={u,d}). The additional NC interaction introduced by the light gauge boson Z^' of U(1)_X can contribute to the CEνNS process. The modification of the SM CEνNS rate due to the Z^' can be utilized to constrain the parameter space in the M_Z^' vs g_X plane. Similar to EνES, the exclusion limit also depends on the specific gauged scenario, because the coupling strength of the quarks and neutrinos with the light gauge boson is dictated by the specific gauge choice. However, in contrast to the EνES experiment, CEνNS is lepton-flavour independent and one has to consider the interaction with all three ν. For a detailed discussion see ref.<cit.>.

Supernova 1987A (SN1987A): Non-standard neutrino interactions with a light gauge boson can also affect the cooling of core-collapse supernovae (SN), which are powerful sources of neutrinos (ν_e) <cit.>. The non-standard neutrino interaction gives rise to the production of light gauge bosons in the core of the supernova, ν_α ν̄_α→ Z^', resulting in energy loss from the SN core <cit.>. After production, the late decay of these Z^' into neutrinos (Z^'→ν_i  ν̄_i) has the potential to modify the neutrino flux emitted by the SN <cit.>. Additionally, the neutrino self-interactions mediated by the light Z' may also affect the energy loss in the SN core <cit.>. Thus the observation of the SN 1987A neutrino burst imposes constraints on these interactions <cit.>.
However, there can be additional processes that might place stronger constraints on light Z^' models from SN 1987A <cit.>, and an explicit derivation of these is beyond the scope of this work.

Neutrino trident: The neutrino trident production mode, in which neutrinos interact with a target nucleus to produce a pair of charged leptons without changing the neutrino flavor (ν N →ν N μ^+ μ^-), is a powerful tool for probing new physics <cit.>. The gauge extension introduces new interactions between leptons and the light gauge boson, increasing the rate of neutrino trident production beyond the SM prediction. Therefore, any gauge-extended scenario associated with the muon (X_2 ≠ 0) faces constraints from the existing neutrino trident production measurements <cit.>.

After this generic discussion of the constraints relevant to our analysis, in the following two subsections we present our numerical results for specific U(1)_X gauge choices. We portray the bound from N_ eff on these models along with the constraints discussed above. In the same plane, we also indicate the parameter space that can relax the H_0 tension.

§.§ Gauged U(1)_B_3-3 L_e symmetry

In this subsection we show the values of N_ eff for a light Z' in the U(1)_B_3-3 L_e gauge extension. We show our numerical results in the M_Z' vs. g_X plane in Fig.<ref>. In the same plane, we also portray the other relevant astrophysical and experimental constraints. The grey dashed line in the plot signifies the 2σ upper limit on N_ eff from Planck 2018 data <cit.>; it excludes the parameter space to its left, shown by the grey shaded region. One can easily infer from the figure that for a fixed g_X, the value of N_ eff decreases with increasing M_Z', as explained in detail in the previous section. Note that for U(1)_B_3-3 L_e, Z' has a tree-level coupling with e^± (X_1=-3) and hence it belongs to the group of U(1)_X models with |X_1|≠0, following our discussion in the previous section <ref>. Accordingly, we notice that the behavior of N_ eff with g_X is similar to what we observed in the context of Fig.<ref>. However, for the U(1)_B_3-3 L_e model |X_1|=3, and hence for fixed g_X and M_Z' the value of N_ eff is higher compared to the case considered in Fig.<ref>, for the reason already elaborated before. As a result, the N_ eff lines in Fig. <ref> have moved rightwards compared to Fig.<ref>. In the same plane in Fig.<ref> we showcase the relevant phenomenological constraints mentioned earlier. Since Z' has a tree-level coupling with the electron in the U(1)_B_3-3 L_e model, it attracts strong constraints from EνES <cit.>. The exclusion region from EνES with the XENON1T experiment is shown by the region shaded with dark cyan diagonal lines. In a similar fashion, Borexino also constrains the parameter space, shown by the magenta shaded region <cit.>. In the U(1)_B_3-3 L_e model the light Z' has no tree-level coupling with the first two generations of quarks, (u,d) and (c,s), which are the relevant ones for CEνNS. However, Z' can acquire an induced coupling with first-generation quarks via CKM mixing <cit.> and can thereby contribute to CEνNS.
In the SM, the CKM matrix V_ CKM is constructed from the charged-current interaction between quarks <cit.>,

J^μ_W = (q̄_u)_L  𝒰_L  γ^μ 𝕀_3× 3 𝒟_L^† (q_d)_L,

where q_u≡ (u  c  t)^† and q_d≡ (d  s  b)^†. 𝒰_L and 𝒟_L are the rotation matrices that diagonalize the mass matrices of q_u and q_d, respectively. The CKM matrix can be constructed as V_ CKM= 𝒰_L 𝒟_L^† <cit.>. For our estimate we take 𝒰_L =𝕀 and hence 𝒟_L=V_ CKM^† <cit.>. In our scenario, as Z' couples only to the third generation of quarks (with ℒ_int⊃ g_X J^μ_Z'Z'_μ), it may induce flavor-changing neutral currents (FCNC), and the Noether current (J^μ_Z') is given by

J^μ_Z' = (q̄_d)_L  𝒟_L  γ^μ [ 0 0 0; 0 0 0; 0 0 1 ] 𝒟_L^† (q_d)_L.

After inserting the values of the elements of the 𝒟_L matrix <cit.> in eq.(<ref>), the effective Z' coupling with the d quark takes the following form:

ℒ ⊃ (0.006 g_X) Z'_μ d̄_L γ^μ d_L.

Using this, we translate the bound from the CEνNS experiment to our scenario and find that it excludes the parameter space for g_X≳ 10^-2. This limit is weaker than the one from EνES and hence is not shown in Fig.<ref>. As Z' couples to the b quark, it generates B_s-B̄_s mixing and hence attracts the constraint indicated by the orange shaded region <cit.>. Finally, the bound from supernova cooling is shown by the region shaded with grey vertical lines <cit.>.

§.§ Gauged U(1)_B_3-3 L_μ symmetry

In this subsection, we present our N_ eff analysis for a light Z' in the gauged U(1)_B_3-3 L_μ extension. Analogously to the previous subsection, we show our numerical results in the M_Z' vs. g_X plane in Fig.<ref>, along with all the relevant astrophysical and experimental constraints. The grey dashed line in the plot signifies the 2σ upper limit on N_ eff from Planck 2018 data <cit.>, and, as before, the grey shaded region depicts the excluded parameter space to the left of the grey dashed line. However, unlike the previous scenario, for U(1)_B_3-3 L_μ the Z' has no tree-level coupling with e^± (X_1=0) and hence it belongs to the other class of U(1)_X models, with |X_1|=0, as discussed in the previous subsection <ref>. We present our numerical results for the two values of the effective coupling, ϵ=0 and ϵ=-g_X/70, in Fig.<ref> and Fig.<ref>, respectively. For ϵ=0 we observe no change in N_ eff with g_X in Fig.<ref>, as argued in the previous sections in the context of Fig.<ref>. On the other hand, for ϵ=-g_X/70 we observe the non-trivial dependence of N_ eff on g_X in Fig.<ref>, due to the interplay between scattering and decays, as elaborated in detail in the context of Fig.<ref>. In the same plane, for both Fig.<ref> and Fig.<ref>, we portray the constraints arising from the EνES experiment with XENON1T <cit.>, shown by the regions shaded with dark cyan diagonal lines. Similarly, we showcase the exclusion region from Borexino, depicted by the magenta shaded region. Note that these bounds are weaker for U(1)_B_3-3 L_μ compared to Fig.<ref>, as in this case Z' has no tree-level coupling with e^± and the recoil rate is smaller here (see eq.(<ref>)). Because Z' couples directly to μ in this scenario, the neutrino trident measurement significantly constrains the parameter space, shown by the dark blue dash-dotted line <cit.>; the parameter space above that line is excluded. In this model too, Z' can generate B_s-B̄_s mixing and attracts the constraint depicted by the orange shaded region <cit.>. As in the previous subsection, the bound from the CEνNS experiment is weaker than EνES and hence is not shown in Fig.<ref>.
The bound from supernova cooling is shown by the region shaded with grey vertical lines <cit.>. We do not show the results for U(1)_B_3-3 L_τ explicitly, though the imprint on N_ eff for this model will be similar to the one for U(1)_B_3-3 L_μ, as both these models have |X_1|=0.

§ THE NEUTRINO PORTAL SOLUTION TO THE HUBBLE TENSION

The change in N_ eff due to the presence of a light Z' gauge boson can also lead to an interesting consequence, namely a simple and elegant resolution of the cosmological Hubble constant problem. Recall that the Hubble constant problem is the discrepancy between the values of the Hubble constant obtained using early- and late-time probes <cit.>. As mentioned in the introduction, the discrepancy in the H_0 value can be ameliorated by increasing N_ eff <cit.>. Several different resolutions have been proposed to address this discrepancy; one of the simplest is the presence of light degrees of freedom changing N_ eff <cit.>. There are already a few model-specific works successfully addressing this problem for particular U(1)_X extensions, such as U(1)_L_μ-L_τ <cit.> and U(1)_B-L <cit.>. The general formalism developed in this work can also be used to show that U(1)_X symmetries with light Z' gauge bosons can, in general, address the Hubble constant problem by reducing the tension. According to the Planck 2015 TT data <cit.>, the H_0 tension can be relaxed up to 1.8σ for N_ eff values between 3.2 and 3.5. However, the Planck 2018 polarization measurements provide a more stringent bound on N_ eff <cit.>, and it is difficult to reach H_0>70   Km s^-1Mpc^-1 <cit.> within ΛCDM. Hence there still exists a disagreement of 3.6σ between the H_0 values predicted from the CMB and from local measurements. In our discussion, we simply highlight the parameter space in the M_Z^'-g_X plane where N_ eff lies between 3.2 and 3.5, which may relax the H_0 tension, following the analysis in ref. <cit.>. In sec.<ref> we portray this parameter space for the generic U(1)_X models in Fig.<ref> (for |X_1|≠ 0) and Fig.<ref> (|X_1|=0), where the blue and red dashed lines indicate the values N_ eff=3.2 and 3.5, respectively. Similarly, in sec.<ref> we showcase the same constant-N_ eff contours, N_ eff=3.2 (blue dashed line) and 3.5 (red dashed line), for the specific models U(1)_B_3-3L_e (in Fig.<ref>) and U(1)_B_3-3L_μ (in Fig. <ref>). The region between these two contour lines may ameliorate the H_0 tension. However, claiming a definitive resolution of the H_0 problem with quantitative effectiveness would itself require a dedicated analysis, which is beyond the scope of this work <cit.>.
§ SUMMARY AND CONCLUSION

The Planck experiment has provided a very accurate measurement of the CMB, which has established stringent limits on Δ N_ eff, the deviation of the effective number of relativistic degrees of freedom in the early universe from its SM value <cit.>. Δ N_ eff is thus sensitive to extra relativistic energy injected by the particle content of physics beyond the standard model (BSM). This sensitivity makes Δ N_ eff a useful probe for investigating various BSM scenarios that affect neutrino decoupling at the time of the CMB. In this work, we have studied the impact of a light Z' particle, arising from a generic U(1)_X model, on N_ eff.
In the presence of this light Z', N_ eff receives two-fold contributions: from the decays Z^'→ν_L ν̄_L, e^+e^- and from the scattering of light SM leptons (ν_L, e) mediated by this Z'. At first, we considered a generic U(1)_X model with arbitrary charge assignments. Apart from the light Z' gauge boson, the model also contains a BSM scalar σ and RHNs ν_R, which are necessary for the model construction. To isolate the effect of the light Z' on the early-universe temperature evolution, we assume the other BSM particles are sufficiently heavy (≳ 100 MeV) that they have decoupled by the time of neutrino decoupling. In this work, we only consider the scenario where Z' was initially (T≳ M_Z') in the thermal bath, and leave the scenario with a non-thermal Z' for future work <cit.>. We adopt the formalism developed in ref.<cit.> and solve the temperature equations to evaluate N_ eff in sec.<ref>. We enumerate our findings below.

* Firstly, we noticed that the U(1)_X charges carried by e and ν_e,μ,τ play a crucial role in N_ eff for our proposed generic U(1)_X model in sec.<ref>, as these are the only SM particles relevant for ν decoupling. In a similar vein, we found that the mass of the BSM gauge boson Z' should be ≲ 30 MeV to affect neutrino decoupling.

* After a careful analysis in sec.<ref>, we observed that the value of N_ eff depends differently on the coupling g_X in two distinct scenarios, |X_1|≠0 and |X_1|=0. Based on this fact, in sec.<ref> we categorise U(1)_X models into two classes: (a) the electron having a tree-level coupling with Z' (|X_1|≠0) and (b) no tree-level coupling of the electron with Z' (|X_1|=0). However, in the second scenario the electron may have some induced coupling with Z' even if |X_1|=0, and we explored that scenario too. For both classes we present the contours of the upper limit on N_ eff from Planck 2018 <cit.> in the M_Z' vs. g_X plane, as shown in Fig.<ref> and Fig.<ref>. In the same plane, we also highlight the parameter space favoured to relax the H_0 tension in earlier analyses <cit.>.

* As example models, in sec.<ref> we considered specific U(1)_X realizations and discussed their cosmological implications in the context of N_ eff. We present the numerical results for the U(1)_B_3-3 L_e (|X_1|=3) and U(1)_B_3-3 L_μ (|X_1|=0) models in sub-sec.<ref> and sub-sec.<ref>, respectively. Depending on the coupling of the electron with Z', these two models also lead to distinguishable N_ eff contours in the M_Z' vs. g_X plane, as portrayed in Fig.<ref> and Fig.<ref>.

* For both the U(1)_B_3-3 L_e and U(1)_B_3-3 L_μ models we also showcase the relevant astrophysical and laboratory constraints in Fig.<ref> and Fig.<ref>, respectively. We displayed the constraints from EνES with XENON1T <cit.> and Borexino <cit.>, neutrino trident production <cit.>, B_s-B̄_s mixing <cit.>, as well as from SN1987A <cit.>. We checked that the constraints from CEνNS <cit.> are weaker than the others, as our chosen U(1)_X models do not have a tree-level Z' coupling with first-generation quarks. For the U(1)_B_3-3 L_μ model the electron has no BSM gauge coupling and hence the bounds from EνES are weaker than for the other model. In the same plane, we also indicate the parameter space that can relax the H_0 tension <cit.>. For both the U(1)_B_3-3 L_e and U(1)_B_3-3 L_μ models we have shown that the bound on N_ eff from Planck 2018 data <cit.> can be more stringent on the parameter space than the laboratory searches. Future-generation experiments like CMB-S4 (Δ N_ eff = 0.06 at 2σ) <cit.> can probe even more of the parameter space of such models.
Our analysis also shows that there remains parameter space, allowed by all kinds of constraints, that can relax the H_0 tension <cit.>. On the other hand, as U(1)_X models are well-motivated BSM scenarios that are widely studied in several respects, this analysis may also enhance the insights needed to explore their connection with the H_0 problem. Our generalised prescription for the N_ eff analysis of generic U(1)_X models is extremely helpful for putting stringent constraints on the parameter space from cosmology, complementary to the bounds obtained from ground-based experiments. In this analysis we considered the thermal Z' scenario only, and hence the bounds from N_ eff for extremely small couplings are not shown. In the future, we will explore the non-thermal Z' scenario as well, to constrain the parameter space from N_ eff even for smaller Z' masses and couplings <cit.>.

SJ thanks Sougata Ganguly for the helpful discussion and Miguel Escudero for the email conversations. The work of SJ is funded by CSIR, Government of India, under the NET JRF fellowship scheme with Award file No. 09/080(1172)/2020-EMR-I. PG would like to acknowledge the Indian Association for the Cultivation of Science, Kolkata for the financial support.

§ NEUTRINO DECOUPLING IN SM SCENARIO

In this section, we recapitulate the formalism for ν_L decoupling in the SM scenario, already developed in ref.<cit.>. For better understanding, and for a smooth transition to the extra-U(1)_X scenario, we note down the key points for the SM case here. As mentioned in sec.<ref>, the only particles relevant at the temperature scale of ν_L decoupling are ν_i (i=e,μ,τ), e^± and γ. So in the SM scenario, the temperature evolution of only these three species is needed to evaluate T_ν/T_γ. Initially all ν_i (i=e,μ,τ) were coupled to the γ and e^± bath at high temperature via weak interaction processes. To track the exact temperatures of the photon bath (T_γ) and the neutrino bath (T_ν) we start from the Liouville equation <cit.>:

∂ f(p,t)/∂ t - H p ∂ f(p,t)/∂ p = 𝒞[f],

where f(p,t) is the distribution function in a homogeneous and isotropic universe, and p, t signify momentum and time, respectively. 𝒞[f] is the collision term and describes the total change in f(p,t) with time. For SM ν_L decoupling the involved processes are the 2→2 scatterings between ν and e shown in Table <ref>. So, we write the collision term for a particle χ undergoing a 2→2 process, say χ+i→ j+k, for which 𝒞[f] can be written as (assuming CP invariance)

𝒞[f_χ]≡ -1/(2 E_χ) ∫ dΠ_i dΠ_j dΠ_k (2π)^4 δ^4 (p_χ + p_i - p_j - p_k) × |ℳ|^2_χ+i→ j+k [f_χ f_i (1 ± f_j)(1 ± f_k) - f_j f_k (1 ± f_χ)(1 ± f_i)],

where f_ℓ, E_ℓ and p_ℓ denote the distribution function, energy and 4-momentum of particle ℓ, and |ℳ|^2_χ+i→ j+k is the squared amplitude of the above-mentioned process. Multiplying eq.(<ref>) by g_χ E d^3p_χ/(2π)^3 and integrating over momenta gives the following energy density equation:

dρ_χ/dt + 3 H (ρ_χ + P_χ) = ∫ g_χ E d^3p_χ/(2π)^3 𝒞[f_χ] = δρ_χ→ j/δ t.

δρ_χ→ j/δ t is the energy density transfer rate from χ to j (or k), and P_χ is the pressure density. So we can rewrite the energy density equation, eq.(<ref>), for χ as

dT_χ/dt = ( - 3 H (ρ_χ+P_χ) + δρ_χ/δ t)(∂ρ_χ/∂ T_χ)^-1.
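The thermodynamic factors entering eq.(<ref>), namely ρ_χ, P_χ and ∂ρ_χ/∂ T_χ, reduce to one-dimensional momentum integrals. A minimal sketch for the e^± bath (our own illustration; natural units, temperatures and masses in MeV):

```python
import numpy as np
from scipy.integrate import quad

me = 0.511  # electron mass [MeV]

def rho_e(T):
    """Energy density of e+ and e- (Fermi-Dirac, g = 4) in MeV^4."""
    f = lambda p: p**2*np.hypot(p, me)/(np.exp(min(np.hypot(p, me)/T, 700.0)) + 1.0)
    return 4.0/(2*np.pi**2)*quad(f, 0.0, 40.0*T + 20.0*me)[0]

def P_e(T):
    """Pressure of e+- in MeV^4."""
    f = lambda p: p**4/(3.0*np.hypot(p, me))/(np.exp(min(np.hypot(p, me)/T, 700.0)) + 1.0)
    return 4.0/(2*np.pi**2)*quad(f, 0.0, 40.0*T + 20.0*me)[0]

def drho_e_dT(T, h=1e-3):
    """Numerical derivative d(rho_e)/dT, needed in eq. above."""
    return (rho_e(T*(1 + h)) - rho_e(T*(1 - h)))/(2.0*h*T)

# Sanity check: for T >> m_e, rho_e -> (7/8) g (pi^2/30) T^4 with g = 4.
T = 10.0
print(rho_e(T)/((7/8)*4*np.pi**2/30*T**4))  # should be close to 1
```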
Now, for SM ν_L decoupling, we consider the energy transfers between ν_i (i=e,μ,τ) and the e^±, γ photon bath. For the photon bath containing γ and e^±, we consider a common temperature T_γ, add their energy density equations, and rearrange accordingly:

dρ_γ/dt + dρ_e/dt = - 4 H ρ_γ - 3 H (ρ_e+P_e) + ∑_i=e,μ,τ δρ_e→ν_i/δ t,

dT_γ/dt = (- 4 H ρ_γ - 3 H (ρ_e+P_e) - ∑_i=e,μ,τ δρ_ν_i→ e/δ t)(∂ρ_γ/∂T_γ + ∂ρ_e/∂T_γ)^-1,

where, in the last step, we have used δρ_χ→ j/δ t = -δρ_j→χ/δ t. Similarly, following eq.(<ref>), we can write the respective temperature (T_ν_i) equations for each type of ν:

dT_ν_i/dt = (- 4 H ρ_ν_i + δρ_ν_i/δ t)(∂ρ_ν_i/∂ T_ν_i)^-1.

Once we have the temperature equations, the remaining step is to evaluate the energy transfer rates, i.e. the collision terms. The relevant processes in the SM scenario are given in Table <ref>. All the processes are either elastic scatterings between ν_i and e or annihilations, and all are mediated by W^± and Z. As we are focusing on the MeV scale, we can integrate out the mediators and write everything in terms of the Fermi constant G_F. In Table <ref>, p_1,2 and p_3,4 denote the momenta of the incoming and outgoing particles. Here, g_L_e = 1/2 + s_W^2, g_R_e = s_W^2, g_L_μ,τ = -1/2 + s_W^2, g_R_μ,τ = s_W^2, with s_W = sinθ_W, where θ_W is the Weinberg angle. The reason behind the difference of 1 between g_L_e and g_L_μ,τ is the additional charged-current process available for ν_e.

Collision terms for ν, e^± scattering: The first two processes are responsible for energy transfer from ν_e to the e sector (photon bath) and the rest are responsible for energy transfer from ν_e to the ν_μ/τ sector. Finally, we enumerate below all the energy transfer rates for the SM scenario, following the formalism in ref.<cit.>:

(δρ_ν_e→ e/δ t)_ SM = G_F^2/π^5 {(1+4s_W^2+8s_W^4) F_ MB(T_γ,T_ν_e)},
(δρ_ν_μ/τ→ e/δ t)_ SM = G_F^2/π^5 {(1-4s_W^2+8s_W^4) F_ MB(T_γ,T_ν_μ)},
(δρ_ν_e→ν_μ/τ/δ t)_ SM = G_F^2/π^5 F_ MB(T_ν_μ,T_ν_e),

where

F_ MB(T_1,T_2) = 32(T_1^9-T_2^9) + 56 T_1^4 T_2^4 (T_1-T_2).

Plugging these collision terms into eq.(<ref>) and eq.(<ref>), we can track the temperature evolution.

A. Computing δρ_ν_e/δ t. Summing over all the collision terms:

(δρ_ν_e/δ t)_ SM = 2×{16 G_F^2/π^5 (1+4s_W^2+8s_W^4)[T_γ^9-T_ν^9] + 28 G_F^2/π^5 (1+4s_W^2+8s_W^4) T_γ^4 T_ν^4 [T_γ-T_ν]} + 2×{32 G_F^2/π^5 (T_ν_μ/τ^9-T_ν_e^9) + (48+8) G_F^2/π^5 T_ν_e^4 T_ν_μ/τ^4 (T_ν_μ/τ-T_ν_e)} = G_F^2/π^5 {(1+4s_W^2+8s_W^4) F_ MB(T_γ,T_ν_e) + 2 F_ MB(T_ν_μ,T_ν_e)},

where F_ MB(T_1,T_2) = 32(T_1^9-T_2^9) + 56 T_1^4 T_2^4 (T_1-T_2). In the above eq.(<ref>), the factor of 2 in the last term accounts for the transfer to both ν_μ and ν_τ, while the factor of 2 in the first term accounts for both e^±.

B. Computing δρ_ν_μ/δ t ≡ δρ_ν_τ/δ t. For ν_μ, g_L^2+g_R^2 = 1/4(1-4s_W^2+8s_W^4), and

(δρ_ν_μ/δ t)_ SM = G_F^2/π^5 {(1-4s_W^2+8s_W^4) F_ MB(T_γ,T_ν_μ) - F_ MB(T_ν_μ,T_ν_e)}.

On the R.H.S., the first term accounts for the energy transfer to the γ sector and the second term for the energy transfer to the ν_e sector. For simplicity one can assume T_ν_e=T_ν_μ=T_ν_τ. The total energy transfer rate is then

δρ_ν/δ t = δρ_ν_e/δ t + 2 δρ_ν_μ/δ t.
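A direct transcription of these SM rates (our own sketch; temperatures in MeV, rates in MeV^5, common neutrino temperature assumed):

```python
import numpy as np

GF  = 1.1663787e-11  # Fermi constant [MeV^-2]
sw2 = 0.223          # sin^2(theta_W)

def F_MB(T1, T2):
    """F_MB(T1,T2) = 32 (T1^9 - T2^9) + 56 T1^4 T2^4 (T1 - T2)."""
    return 32.0*(T1**9 - T2**9) + 56.0*T1**4*T2**4*(T1 - T2)

def drho_nu_dt_SM(Tg, Tnu):
    """Total SM energy transfer rate into the (common-temperature) nu bath."""
    ce = 1.0 + 4.0*sw2 + 8.0*sw2**2   # nu_e coefficient (CC + NC)
    cm = 1.0 - 4.0*sw2 + 8.0*sw2**2   # nu_mu, nu_tau coefficient (NC only)
    return GF**2/np.pi**5*(ce + 2.0*cm)*F_MB(Tg, Tnu)

print(drho_nu_dt_SM(2.0, 1.9))  # positive: energy flows from the hotter e-gamma bath
```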
§ NEUTRINO DECOUPLING IN PRESENCE OF A U(1)_X GAUGE BOSON

The relevant Lagrangian for the calculation of N_ eff in a U(1)_X model has the following generic form,

ℒ_ int ⊃ Z^'_α J^α_X,

where J^α_X is the Noether current associated with U(1)_X, given by

J^α_X = g_X(X_3 τ̄γ^ατ + X_3 ν̄_τγ^α P_L ν_τ + X_2 μ̄γ^αμ + X_2 ν̄_μγ^α P_L ν_μ) + g_X(X_1 ēγ^α e + X_1 ν̄_eγ^α P_L ν_e).

We have not written the corresponding interactions with the quark sector, since around ∼ 2 MeV the quarks are already confined <cit.>. The μ and τ also do not take part, as they have suppressed energy densities around MeV temperatures. In the presence of this Z', the following effects modify neutrino decoupling (see Fig.<ref>):

* Depending on the charge assignment (X_1≠ 0), Z' can decay to e^+e^-, and the inverse decay can also happen up to T_γ=M_Z'. So there will be energy transfer from Z' to the e^± sector (γ bath) and vice versa. Similarly, Z' can transfer energy to the ν bath through decays and inverse decays to ν_iν̄_i.

* Through the Z' portal, the processes ν_i+ν̄_i⟷ e^+ e^- and ν_i+e^±⟷ν_i+ e^± (i=e,μ,τ) can occur (depending on X_1,2,3), leading to an additional energy transfer between the ν_i sector and e^±.

We discuss the above points in the following subsections. As discussed in Sec.<ref>, in this work we will only focus on the scenarios where Z' was in thermal equilibrium with e, ν_i at T>T^ν_ dec.

§.§ Z' decay to e^+e^-

We consider the process Z'(k)↔ e^-(p_1)+e^+(p_2), where k, p_1, p_2 denote the four-momenta of the respective particles. Following eq.(<ref>) and eq.(<ref>), we calculate the energy transfer rate from Z' to the e bath:

∫ g_Z' E_Z' d^3k/((2π)^3 2E_Z') 𝒞[f] = -∫ g_Z' d^3k/((2π)^3 2E_Z') g_1 d^3p_1/((2π)^3 2E_1) g_2 d^3p_2/((2π)^3 2E_2) E_Z' (1/(g_Z' g_1 g_2)) |M_e^+e^-→ Z'|^2 (f^(eq)_Z'(k) - f^(eq)_e(p_1) f^(eq)_e(p_2)) (2π)^4 δ^4(p_1+p_2-k).

Here the degrees of freedom and energy of the corresponding particles are denoted g_ℓ and E_ℓ (ℓ≡ Z',1,2), and f^(eq)_i (i≡ Z',e) signifies the equilibrium distribution function of particle i. The decay width of Z' to e^+e^- is given by

Γ_Z'→ e^+e^- = 1/(2M_Z') ∫ d^3p_1/((2π)^3 2E_1) d^3p_2/((2π)^3 2E_2) (1/g_Z') |M_Z'→ e^+e^-|^2 (2π)^4 δ^4(p_1+p_2-k) = X_1^2 g_X^2 M_Z'/(12π) (1+2 m_e^2/M^2_Z')√(1-4 m_e^2/M^2_Z'),

where m_e is the electron mass. Using eq.(<ref>) we can simplify eq.(<ref>) as

∫ g_Z' E_k d^3k/((2π)^3 2E_k) 𝒞[f] = 3 M_Z'^3 Γ_Z'→ e^+e^-/(2π^2) [T_γ K_2(M_Z'/T_γ) - T_Z' K_2(M_Z'/T_Z')].

Therefore, from eq.(<ref>) we write

δρ_Z'→ e/δ t = 3 M_Z'^3/(2π^2) [T_γ K_2(M_Z'/T_γ) - T_Z' K_2(M_Z'/T_Z')] Γ_Z'→ e^+e^-,

where T_Z' denotes the temperature of the Z' bath.

§.§ Z' decay to ν_i ν̄_i

Here we consider the processes Z'(k)↔ν_i(p_1)+ν̄_i(p_2), where i=e,μ,τ. The corresponding decay width of Z' to the i-th type of ν is

Γ_Z'→ν_i ν̄_i = X_i^2 g_X^2 M_Z'/(24π).

Similarly to the previous subsection, we can write the energy density transfer rate from Z' to each generation of ν as

δρ_Z'→ν_i/δ t = 3 M_Z'^3/(2π^2) [T_ν_i K_2(M_Z'/T_ν_i) - T_Z' K_2(M_Z'/T_Z')] Γ_Z'→ν_i ν̄_i.
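The decay widths and the Bessel-function form of the decay-induced transfer rates above are straightforward to code up; a short sketch (ours; natural units, MeV):

```python
import numpy as np
from scipy.special import kn  # modified Bessel function K_n of the second kind

def Gamma_ee(MZp, gX, X1, me=0.511):
    """Z' -> e+ e- width: X1^2 gX^2 MZp/(12 pi) (1 + 2 me^2/M^2) sqrt(1 - 4 me^2/M^2)."""
    r = (me/MZp)**2
    return X1**2*gX**2*MZp/(12*np.pi)*(1.0 + 2.0*r)*np.sqrt(1.0 - 4.0*r)

def Gamma_nunu(MZp, gX, Xi):
    """Z' -> nu_i nubar_i width: Xi^2 gX^2 MZp/(24 pi)."""
    return Xi**2*gX**2*MZp/(24*np.pi)

def drho_decay(MZp, Gamma, T_bath, T_Zp):
    """Decay-induced transfer rate: 3 M^3/(2 pi^2) [T K2(M/T) - T' K2(M/T')] Gamma."""
    return 3*MZp**3/(2*np.pi**2)*(T_bath*kn(2, MZp/T_bath) - T_Zp*kn(2, MZp/T_Zp))*Gamma

MZp, gX = 10.0, 1e-7
print(drho_decay(MZp, Gamma_nunu(MZp, gX, Xi=1), T_bath=2.0, T_Zp=2.1))
```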
§.§ Energy transfer from ν_i to e^+e^- mediated by the BSM Z'

In the presence of the light Z' there will be additional energy transfer between e^± and ν_i, apart from the SM weak processes (mediated by the SM W and Z). The two relevant processes are

ν(p_1)+ν̄(p_2) ↔ e^-(p_3)+e^+(p_4),
ν(p_1)+e^±(p_2) ↔ ν(p_3)+e^±(p_4).

We denote the amplitudes for the SM and BSM processes by ℳ_ SM and ℳ_ BSM, respectively. The total amplitude ℳ_ tot can then be written as

|ℳ_ tot|^2 = |ℳ_ SM|^2 + 2 Re(ℳ_ SMℳ^†_ BSM) + |ℳ_ BSM|^2.

Plugging this into eq.(<ref>) we get the total energy transfer rate (ν↔ e). Three sources contribute to the total energy transfer rate: pure SM, SM-BSM interference, and pure BSM. For simplicity, we calculate the energy transfer rates for each of them separately. The amplitudes for the pure SM contributions can be found in sec.<ref>, and the corresponding energy transfer rates are given in eqs.(<ref>-<ref>) <cit.>. The amplitude for the BSM processes (with the proper momentum assignment) is, for s≪ M^2_Z',

ℳ_BSM = -(X_i g_X)(X_1 g_X)/M_Z'^2 [ν̄_iγ^μ P_L ν_i][ēγ_μ e].

The interference amplitudes in eq.(<ref>) are tabulated in Tab. <ref>. Following the formalism developed in ref.<cit.>, we write the collision terms for the interference contributions:

∫ d^3p/(2π)^3 p 𝒞[f]_ν_i+e↔ν_i+e = (X_i g_X)(X_1 g_X) 7√(2) G_F/(π^5 M_Z'^2) (g_L+g_R) T_γ^4 T_ν_i^4 [T_γ-T_ν_i],

∫ d^3p/(2π)^3 p 𝒞[f]_ν_iν̄_i↔ e e = (X_i g_X)(X_1 g_X) 4√(2) G_F/(π^5 M_Z'^2) (g_L+g_R) [T^9_γ-T^9_ν_i].

The first of the above two collision terms accounts for the elastic scattering of ν_i with both e^±; here we have multiplied by an additional factor of 2 to account for ν̄_i as well. The second collision term in eq.(<ref>) is due to annihilations. So the total energy transfer rate from ν to the e bath due to the interference with the U(1)_X gauge boson is

(δρ_ν_i→ e/δ t)_ int = (X_i g_X)(X_1 g_X) √(2)G_F/(8π^5 M_Z'^2) {(g_L+g_R) F_ MB(T_γ,T_ν_i)}.

In a similar fashion, we now calculate the collision terms due to the pure BSM amplitudes (third term in eq.(<ref>)). The pure BSM amplitudes in eq.(<ref>) are tabulated in Tab. <ref>. The collision terms are:

∫ d^3p/(2π)^3 p 𝒞[f]_ν_i+e↔ν_i+e = (X_i g_X)^2(X_1 g_X)^2 14/(π^5 M_Z'^4) T_γ^4 T_ν_i^4 [T_γ-T_ν_i],

∫ d^3p/(2π)^3 p 𝒞[f]_ν_iν̄_i↔ e e = (X_i g_X)^2(X_1 g_X)^2 8/(π^5 M_Z'^4) [T^9_γ-T^9_ν_i].

So the total energy transfer rate from ν to the e bath due to the pure BSM matrix element of the U(1)_X gauge boson is

(δρ_ν_i→ e/δ t)_ BSM = (X_i g_X)^2(X_1 g_X)^2 1/(4π^5 M_Z'^4) F_ MB(T_γ,T_ν_i),

and the total energy transfer rate from ν_i to the e bath is

(δρ_ν_i→ e/δ t)_ tot = (δρ_ν_i→ e/δ t)_ SM + (δρ_ν_i→ e/δ t)_ int + (δρ_ν_i→ e/δ t)_ BSM.

X_1=0: Here we would like to highlight the scenarios where X_1=0. In such a case it apparently seems that the BSM contributions in eq.(<ref>), eq.(<ref>) and eq.(<ref>) vanish. However, even though Z' does not have any tree-level coupling with e^±, it can couple to e^± through induced couplings, which may be generated from kinetic mixing, fermion loops or CKM mixing. If the loop-induced coupling is denoted by ϵ, then in such cases we just have to replace the term (X_1 g_X) in eq.(<ref>), eq.(<ref>) and eq.(<ref>) with ϵ e:

(X_1 g_X) ⟶ (ϵ e)    (for X_1=0).

So, even if X_1=0, there can be energy transfer from Z' to the e bath, as well as from ν_i to the e bath via Z' mediation. Note that eq.(<ref>) differs from ref.<cit.>, where the authors ignored the SM-BSM interference term in the calculation of the energy transfer rates. However, our BSM contribution term (δρ_ν_i→ e/δ t)_ BSM in eq.(<ref>) agrees with ref.<cit.> for a specific choice of U(1)_X (U(1)_L_μ-L_τ).

§.§ Temperature evolution

Now, with all the collision terms at hand, we can formulate the energy density evolution equations. Following the prescription in appendix <ref>, we can write:

dρ_ν_i/dt = -(4 H ρ_ν_i - (δρ_ν_i→ e/δ t)_ tot - ∑_j≠ i(δρ_ν_i→ν_j/δ t)_ tot + δρ_Z'→ν_i/δ t),

dρ_Z'/dt = -(3 H (ρ_Z'+P_Z') - ∑_i δρ_Z'→ν_i/δ t - δρ_Z'→ e/δ t),

dρ_γ/dt = -(4 H ρ_γ + 3 H (ρ_e+P_e) + ∑_i (δρ_ν_i→ e/δ t)_ tot + δρ_Z'→ e/δ t),

where ∑_i indicates the summation of the energy density transfer rates over all three generations of ν (e,μ,τ). (δρ_ν_i→ν_j/δ t)_ tot has a similar form as in the SM case; however, in the presence of the light Z', it gets an additional contribution from Z' mediation, so that G_F in eq.(<ref>) is simply replaced by G_F→ (G_F+(X_i g_X)(X_j g_X)/M_Z'^2).
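Collecting the SM, interference and pure-BSM pieces above, the total ν_i → e rate entering these evolution equations can be sketched as follows (ours; the X_1=0 substitution (X_1 g_X) → ϵe is handled by the caller):

```python
import numpy as np

GF, sw2 = 1.1663787e-11, 0.223  # Fermi constant [MeV^-2], sin^2(theta_W)

def F_MB(T1, T2):
    return 32.0*(T1**9 - T2**9) + 56.0*T1**4*T2**4*(T1 - T2)

def drho_nu_e_tot(Tg, Tnu, MZp, gXXi, gXX1, flavor="e"):
    """(delta rho_{nu_i -> e}/dt)_tot = SM + interference + pure BSM [MeV^5], per flavor.
    gXXi = g_X X_i; gXX1 = g_X X_1 (pass eps*e instead when X_1 = 0)."""
    gL = 0.5 + sw2 if flavor == "e" else -0.5 + sw2
    gR = sw2
    sm   = GF**2/np.pi**5*4.0*(gL**2 + gR**2)*F_MB(Tg, Tnu)            # = (1 +/- 4 sw2 + 8 sw2^2)
    intf = gXXi*gXX1*np.sqrt(2.0)*GF/(8.0*np.pi**5*MZp**2)*(gL + gR)*F_MB(Tg, Tnu)
    bsm  = gXXi**2*gXX1**2/(4.0*np.pi**5*MZp**4)*F_MB(Tg, Tnu)
    return sm + intf + bsm

print(drho_nu_e_tot(2.0, 1.9, MZp=10.0, gXXi=1e-7, gXX1=1e-7))
```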
Similarly to the previous section, we can write the temperature evolution equations using the partial derivatives, and we get

dT_ν_i/dt = -(4 H ρ_ν_i - (δρ_ν_i→ e/δ t)_ tot - ∑_j≠ i(δρ_ν_i→ν_j/δ t)_ tot + δρ_Z'→ν_i/δ t)(∂ρ_ν_i/∂ T_ν_i)^-1,

dT_Z'/dt = -(3 H (ρ_Z'+P_Z') - ∑_i δρ_Z'→ν_i/δ t - δρ_Z'→ e/δ t)(∂ρ_Z'/∂ T_Z')^-1,

dT_γ/dt = -(4 H ρ_γ + 3 H (ρ_e+P_e) + ∑_i (δρ_ν_i→ e/δ t)_ tot + δρ_Z'→ e/δ t)(∂ρ_γ/∂ T_γ + ∂ρ_e/∂ T_γ)^-1.

Hence, solving the aforementioned five Boltzmann equations, one can track the evolution of ν_L decoupling. By exploiting the neutrino oscillations, which are active around ∼MeV temperatures <cit.>, one can further simplify the scenario by assuming that all three ν_i equilibrate with each other and acquire a common temperature, i.e. T_ν_e=T_ν_μ=T_ν_τ≡ T_ν. The benefit of this assumption is that we no longer have to keep track of the energy transfer rates within the different ν sectors, i.e. (δρ_ν_i→ν_j/δ t)_ tot=0 in eq.(<ref>). In this case the three T_ν_i equations (eq.(<ref>)) reduce to a single temperature (T_ν) evolution equation:

dT_ν/dt = -(4 H ρ_ν - (δρ_ν→ e/δ t)_ tot + δρ_Z'→ν/δ t)(∂ρ_ν/∂ T_ν)^-1,

where ρ_ν=∑_i=e,μ,τρ_ν_i, (δρ_ν→ e/δ t)_ tot=∑_i=e,μ,τ(δρ_ν_i→ e/δ t)_ tot and δρ_Z'→ν/δ t=∑_i=e,μ,τδρ_Z'→ν_i/δ t.

§ EVALUATION OF N_ EFF WITH DIFFERENT T_ν_i

Throughout the paper we evaluated N_ eff assuming all three ν_i (i≡ e,μ,τ) share the same temperature. However, the values of N_ eff change if we allow the ν_i to develop different temperatures, as prescribed in the context of eqs.(<ref>-<ref>). We show our results for g_X=10^-8, M_Z'=10 MeV for three different U(1)_X charge combinations in Fig.<ref>, assuming only tree-level couplings (i.e. ϵ=0). We consider |X_1|=1,|X_2,3|=0; |X_1|=0,|X_2,3|=1; and |X_3|=-1,|X_1,2|=0 in Fig.<ref>, Fig.<ref> and Fig.<ref>, respectively. The red, blue and green lines indicate the temperature ratio T_γ/T_ν_i for ν_e, ν_μ and ν_τ, respectively. Note that for |X_1|=1,|X_2,3|=0 only ν_e has BSM interactions, which leads to an enhanced T_ν_e, whereas T_ν_μ/τ reproduces the value predicted in the SM scenario (where (T_γ/T_ν_i)_ SM∼1.4 <cit.>), as portrayed in Fig.<ref>. It is interesting to note that this value of N_ eff=3.09 is less (by ∼ 8%) than the one predicted for the same charge configuration in sec.<ref>. This is due to the fact that we overestimated the neutrino energy increment in the presence of Z' while assuming all three ν_i share the same temperature. For Fig.<ref> and Fig.<ref> we also notice that the calculated values of N_ eff are slightly less than what was predicted in sec.<ref>. For |X_1|=0,|X_2,3|=1, Z' promptly decays to ν_μ and ν_τ and not to electrons. This causes a sudden dip (before T_γ≈ 0.5 MeV) in the respective temperature-ratio curves, shown by the blue and green lines in Fig.<ref>. For the same reason we notice a similar dip in the temperature-ratio curve (green line) for ν_τ in Fig.<ref>. The value of N_ eff is slightly higher for |X_1|=0,|X_2,3|=1 in Fig.<ref> compared to the one obtained for Fig.<ref>, as the former contains two types of ν_i with BSM interactions. It is also worth pointing out that the value of N_ eff obtained for |X_1|=1,|X_2,3|=0 is smaller than the other two, since here Z' decays to both ν_e and e (i.e. the photon bath), whereas in the other two cases Z' decays only to the ν_i bath.
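To make the pipeline of this appendix concrete, the following self-contained sketch (ours; Maxwell-Boltzmann-approximated collision terms, no QED corrections, Z' terms switched off so that it reduces to the SM limit) integrates the common-T_ν system and evaluates N_eff = (8/7)(11/4)^{4/3} ρ_ν/ρ_γ; it should land close to the standard value N_eff ≃ 3.04, and the Z' decay and scattering rates derived above enter additively in the same right-hand sides:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

Mpl = 1.22091e22      # Planck mass [MeV]
GF  = 1.1663787e-11   # Fermi constant [MeV^-2]
me, sw2 = 0.511, 0.223

def F_MB(T1, T2):
    return 32.0*(T1**9 - T2**9) + 56.0*T1**4*T2**4*(T1 - T2)

def rho_e(T):
    f = lambda p: p**2*np.hypot(p, me)/(np.exp(min(np.hypot(p, me)/T, 700.0)) + 1.0)
    return 4.0/(2*np.pi**2)*quad(f, 0.0, 40.0*T + 20.0*me)[0]

def P_e(T):
    f = lambda p: p**4/(3.0*np.hypot(p, me))/(np.exp(min(np.hypot(p, me)/T, 700.0)) + 1.0)
    return 4.0/(2*np.pi**2)*quad(f, 0.0, 40.0*T + 20.0*me)[0]

def drho_e_dT(T, h=1e-3):
    return (rho_e(T*(1 + h)) - rho_e(T*(1 - h)))/(2.0*h*T)

def rhs(t, y):
    Tnu, Tg = y
    rho_nu = 3.0*(7.0/8.0)*(np.pi**2/15.0)*Tnu**4     # 3 species of nu + nubar
    rho_g  = (np.pi**2/15.0)*Tg**4
    re, pe = rho_e(Tg), P_e(Tg)
    H = np.sqrt(8.0*np.pi*(rho_nu + rho_g + re)/3.0)/Mpl
    ce, cm = 1.0 + 4.0*sw2 + 8.0*sw2**2, 1.0 - 4.0*sw2 + 8.0*sw2**2
    dr = GF**2/np.pi**5*(ce + 2.0*cm)*F_MB(Tg, Tnu)   # e-gamma bath -> nu bath
    dTnu = (-4.0*H*rho_nu + dr)/(4.0*rho_nu/Tnu)      # eq. for dT_nu/dt (Z' terms omitted)
    dTg  = (-4.0*H*rho_g - 3.0*H*(re + pe) - dr)/(4.0*rho_g/Tg + drho_e_dT(Tg))
    return [dTnu, dTg]

T0 = 10.0  # MeV, deep in equilibrium
H0 = np.sqrt(8.0*np.pi*((3.0*(7/8) + 1.0)*(np.pi**2/15.0)*T0**4 + rho_e(T0))/3.0)/Mpl
t0 = 1.0/(2.0*H0)  # radiation domination: t ~ 1/(2H)
sol = solve_ivp(rhs, (t0, 4e5*t0), [T0, T0], method="LSODA", rtol=1e-6)
Tnu, Tg = sol.y[:, -1]
print(f"T_gamma/T_nu = {Tg/Tnu:.3f},  N_eff = {3.0*(11.0/4.0)**(4.0/3.0)*(Tnu/Tg)**4:.3f}")
```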
Theory and Design of Space-Time Metallic Metasurfaces for Wireless Communications

Salvador Moreno-Rodríguez^1, Antonio Alex-Amor^2, Pablo Padilla^1, Juan F. Valenzuela-Valdés^1, Carlos Molero^1

^1 Department of Signal Theory, Telematics and Communications, Research Centre for Information and Communication Technologies (CITIC-UGR), Universidad de Granada, 18071 Granada, Spain.
^2 Department of Information Technologies, Universidad San Pablo-CEU, CEU Universities, Campus Montepríncipe, 28668 Boadilla del Monte (Madrid), Spain.

January 14, 2024

This paper details a class of metal-based space-time metasurfaces for application in wireless communications scenarios. Concretely, we describe space-time metasurfaces that periodically alternate their properties in time between three spatial states: "air", "conductor" and "grating". We analyze the physics of these metastructures via a computationally-efficient analytical technique based on the use of Floquet-Bloch series, integral equations and circuit models. By doing so, we reveal important features of these spatiotemporal metasurfaces: scattering parameters, field profiles, diffraction angles and the nature of the space-time harmonics. The results, corroborated with a self-implemented numerical FDTD approach, show the potential application of these space-time metasurfaces as beamformers acting in reflection, in transmission or both. The amplitude and direction of the diffracted orders can be electronically controlled with the parameters of the metasurface. Moreover, the intrinsic ability of time-modulated diffractive metasurfaces to mix and multiply frequencies is tested. We show how two different modulations can lead to the same diffraction angle but with different mixed output frequencies.

Space-time, time-modulation, metasurfaces, analytical method, wireless communications, beamforming, frequency mixing.

§ INTRODUCTION

Metamaterials have been conceived as artificially engineered devices with the property of manipulating electromagnetic waves. The term metamaterial originally referred to structures exhibiting negative permittivity and permeability simultaneously <cit.>. However, in subsequent years this conception has undergone different evolutions, most notably the emergence of thinner, planar versions called metasurfaces <cit.>. A conventional metasurface can be physically described as a thin screen with planar geometry exhibiting spatial modulation <cit.>. Thanks to this modulation, metasurfaces become excellent terminals for a wide range of applications, such as microwave absorbers <cit.>, beam shaping <cit.>, or lenses <cit.>, among others. Currently, researchers are paying close attention to a new evolution of metasurfaces, the so-called space-time metasurfaces, in which time is employed as an additional modulation on top of the existing purely spatial one <cit.>. This introduces additional degrees of freedom as well as richer physics in the system <cit.>.
Likewise, time modulation also enhances the capabilities of the terminals, leading to promising concepts such as intelligent programmable metasurfaces <cit.> that could be combined and integrated into wireless communications and radar systems <cit.>. The recent development and fabrication of magnetic-free non-reciprocal devices <cit.> has been a key factor in the interest in space-time modulated devices <cit.>. Thus, the number of theoretical studies of space-time varying structures in the microwave and optics ranges <cit.> has increased in the last few years, favoring the emergence of novel theoretical applications, such as the use of temporal photonic crystals to achieve amplification <cit.>, filtering and isolation <cit.>, the utilization of travelling-wave modulations for power combining <cit.>, the use of a grounded slab for efficient phase conjugation <cit.>, the production of temporal chirping and lensing <cit.>, or frequency mixing and multiplying <cit.>. Specifically, in the field of telecommunications, radar and wireless systems, frequency mixing and multiplying is of capital importance <cit.>. Traditionally, frequency mixing has been achieved via nonlinear circuit components such as Schottky diodes, GaAs FETs and CMOS transistors. Interestingly, frequency mixing is an intrinsic property of time-modulated diffractive systems, as the frequency of the diffracted waves is directly related to the frequency of the incident wave <cit.>. Thus, spatiotemporal metasurfaces can complement the use of nonlinear components and even replace them in frequency ranges or scenarios where they are no longer functional. Time-modulated antennas have opened new alternatives in this regard as well <cit.>. Furthermore, the versatility to simultaneously modify the momentum and frequency of waves has made space-time-modulated devices strong candidates for beamforming and beamsteering <cit.>. Recently, it has been possible to test the beamsteering capabilities of these devices by modifying the phase constant associated with each unit cell via different temporal sequences <cit.>. This demonstrates the potential of these metasurfaces as candidates for future intelligent communications <cit.>. Many of these applications are usually simulated using numerical techniques such as the finite-difference time-domain (FDTD) method <cit.>. However, the high computational costs and long simulation times that such codes entail have prompted the development of adaptive mesh-based solutions to enhance computational efficiency, such as the Discontinuous Galerkin Time-Domain (DGTD) method <cit.>. Other numerical solutions based on generalized sheet transition conditions (GSTCs) <cit.>, modal techniques <cit.> and integral-equation methods <cit.> show high performance when simulating time-varying metasurfaces. Nonetheless, researchers often also seek approaches that can provide deep physical insight into the problems studied. Following this line, some alternatives based on analytical and semi-analytical techniques have been reported in the literature <cit.>. Fully-analytical circuits for 1D-spatial, 2D-spatial, 3D-spatial <cit.> and 1D-temporal <cit.> modulations have been previously reported by the authors. Furthermore, we have recently exploited the use of Lorentz transformations to reduce a (1+1)-D spatiotemporal problem to a time-invariant (1D-spatial) system <cit.>. Now, in this work, we extend the circuital approach to a different set of (1+1)-D space-time scenarios.
The scattering parameters, as well as information on the diffraction spectrum, are extracted in significantly shorter computation times compared to general numerical methods. In fact, the main advantage of this method is the possibility of simulating a wide band of frequencies in a few seconds. The paper is organized as follows. Section II presents the space-time metasurface and depicts the analytical framework. We discuss the physics of the diffraction spectrum, field profiles, nature of the Floquet harmonics and scattering parameters via the associated equivalent circuit. Section III focuses on describing the feasibility of the spatiotemporal metasurface for use in wireless applications. Concretely, we focus on its beamforming and frequency-mixing capabilities, both studied via the analytical method and a supportive self-implemented FDTD approach. Finally, general conclusions are drawn in Section IV.

§ ANALYTICAL FRAMEWORK

§.§ Space-Time Metasurface

fig1 depicts the spatio-temporal system studied in this manuscript. The system is formed by a space-time-modulated diffractive metasurface that manipulates the incident transverse electric (TE) or transverse magnetic (TM) plane waves. The diffractive metasurface is based on different slabs, painted in red and blue, that alternate their electromagnetic properties in time. Each of the slabs can either become electromagnetically transparent or behave like a solid metal, as assumed in <cit.>. Depending on the electromagnetic behavior of each of the slabs, the space-time-modulated diffractive metasurface can operate in three different states: "air" (A), "conductor" (C) and "grating" (G). In the air state, all slabs appear to vanish from an electromagnetic perspective, so full transmission of the incident wave is expected. In the conductor state, all slabs behave as a solid metal, forming a solid metallic wall, so full reflection of the incident wave is expected. In the grating state, the red slabs behave as solid metals and the blue slabs become electromagnetically transparent, causing the space-time metasurface to operate as a spatially-modulated metallic diffraction grating. The geometrical parameters of the metasurface are the following. The space-time diffractive metasurface is placed along the XY plane, as fig1 illustrates. In the time intervals in which the metasurface is in the grating state, it turns into a periodic arrangement of perfect electric conductor (PEC) strips (period P and slit width W) that are infinitely extended along the x axis. The spatial periodicity of the grating state is along the y axis. In the conductor and air states, the metasurface can be modeled as infinitely-extended uniform thin PEC and air screens, respectively. The air state assumes electrical parameters ε_r = μ_r = 1. In order to construct a proper analytical framework, we only need to enforce the temporal variation of the system to follow a time-periodic scheme, of period T_M. By wisely combining the three states in time (A-G, C-G or A-C), a rich diffraction phenomenology can be created; an illustrative parametrization of the switching scheme is sketched below. From an engineering perspective, the diffraction pattern can be electrically tuned by setting the space-time parameters of the system. This enables an efficient beamforming platform that could potentially be applied in a wide range of wireless communications scenarios. Remarkably, the space-time metasurface can act as a beamformer in reflection, transmission or both simultaneously by simply tuning its electrical parameters. This will be illustrated in detail in the next sections.
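As an illustration of the modulation scheme (our own parametrization; the duty cycle and state pair are free design choices, not values fixed in this work), a periodic state function over the macroperiod T_M can be written as:

```python
def state(t, T_M, pair=("A", "G"), duty=0.5):
    """Metasurface state at time t for a two-state periodic modulation.
    pair selects the alternation (A-G, C-G or A-C); duty is the fraction of T_M
    spent in the first state. 'A' = air, 'C' = conductor, 'G' = grating."""
    tau = (t % T_M)/T_M            # normalized position inside the macroperiod
    return pair[0] if tau < duty else pair[1]

T_M = 1e-9  # 1 ns macroperiod -> modulation frequency f_M = 1 GHz (illustrative)
print([state(n*T_M/8, T_M, pair=("C", "G")) for n in range(8)])
```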
This will be illustrated in detail in the next sections. The three states of the space-time metasurface can be realized by alternating between total transparency and total reflectivity, as recently achieved in <cit.>. Nonetheless, the implementation of the metasurface could also be inspired by several innovative prototypes in the microwave range <cit.>. Many of these devices are based on adjustable biased PIN diodes <cit.> and varactors <cit.>, traditionally implemented in reflectarray and transmitarray systems. Moreover, novel theoretical designs based on the use of electronically-reconfigurable materials such as graphene <cit.> or transparent conductive oxides <cit.> open new alternatives for the millimeter-wave and terahertz regimes.

§.§ Floquet-Bloch Expansion

Since the diffractive metasurface behaves as a space-time-periodic medium, the transverse electric and magnetic fields in regions (1) and (2) can be expressed in terms of Floquet-Bloch series. Both media are electromagnetically defined by ε_r^(i) and μ_r^(i). Thus, let us consider a space-time metasurface with spatial period P along the y-direction and temporal macroperiod T_M <cit.>. It receives a plane wave impinging obliquely with angular frequency ω_0 and TE polarization. The fields at both sides of the metasurface can be expressed in terms of a Floquet expansion of harmonics. In region (1) (z < 0):

𝐄_t^TE,(1)(y, z, t) = [e^j(ω_0 t - k_0 y - β_00^(1) z) + R e^j(ω_0 t - k_0 y + β_00^(1) z) + ∑_∀ mn ≠ 00 E_mn^(1) e^j(ω_n t - k_m y + β_mn^(1) z)] 𝐱̂

𝐇_t^TE,(1)(y, z, t) = [Y_00^TE,(1) e^j(ω_0 t - k_0 y - β_00^(1) z) - R Y_00^TE,(1) e^j(ω_0 t - k_0 y + β_00^(1) z) - ∑_∀ mn ≠ 00 Y_mn^TE,(1) E_mn^(1) e^j(ω_n t - k_m y + β_mn^(1) z)] 𝐲̂.

In the former expressions, E_mn^(1) is the amplitude of the (m,n)-th space-time harmonic in region (1), R is the reflection coefficient associated with the incident wave (00-th harmonic), and ω_n is the angular frequency associated with the temporal n-th harmonic,

ω_n = ω_0 + n 2π/T_M,

and k_m is the transverse wavenumber linked to the spatial m-th harmonic:

k_m = k_t + m 2π/P, with k_t = √(ε_r^(1)μ_r^(1)) (ω_0/c) sin(θ_0),

θ_0 being the incidence angle. In a similar way, we define the tangential electromagnetic fields via a Floquet expansion in region (2) (z > 0),

𝐄_t^TE,(2)(y, z, t) = [T e^j(ω_0 t - k_0 y - β_00^(2) z) + ∑_∀ mn ≠ 00 E_mn^(2) e^j(ω_n t - k_m y - β_mn^(2) z)] 𝐱̂

𝐇_t^TE,(2)(y, z, t) = [T Y_00^TE,(2) e^j(ω_0 t - k_0 y - β_00^(2) z) + ∑_∀ mn ≠ 00 Y_mn^TE,(2) E_mn^(2) e^j(ω_n t - k_m y - β_mn^(2) z)] 𝐲̂,

where T denotes the transmission coefficient and E_mn^(2) the amplitude associated with the (m,n)-th Floquet harmonic. In the magnetic-field expansions in (<ref>) and (<ref>), Y_mn^(i) denotes the admittance of the (m,n)-th Floquet harmonic in the i-th medium, defined for TE incidence as

Y_mn^TE,(i) = β_mn^(i) / (μ_r^(i) μ_0 ω_n),

with

β_mn^(i) = √(ε_r^(i)μ_r^(i) [ω_n/c]^2 - [k_m]^2)

being the propagation constant of the (m,n)-th harmonic in region (i). The previous derivation can be easily adapted to TM incidence. In the TM case, the electric- and magnetic-field orientations are the opposite of the TE case; thus

𝐄_t^TM,(i)(y, z, t) = [𝐱̂·𝐄_t^TE,(i)(y, z, t)] 𝐲̂
𝐇_t^TM,(i)(y, z, t) = [𝐲̂·𝐇_t^TE,(i)(y, z, t)] 𝐱̂.

In addition, the admittance expressions must now be replaced by those related to TM harmonics,

Y_mn^TM,(i) = ε_r^(i) ε_0 ω_n / β_mn^(i).

§.§ Nature of the Space-Time Harmonics

Real (imaginary) values of β_mn^(i) imply that the nature of the (m,n)-th Floquet harmonic is propagative (evanescent). By using eq.
(<ref>), it can be shown that all evanescent space-time harmonics fulfill the general conditions

(√(ε_r μ_r)/c)(ω_0 sin(θ_0) - |ω_0 + n 2π/T_M|) + m 2π/P > 0
(√(ε_r μ_r)/c)(ω_0 sin(θ_0) + |ω_0 + n 2π/T_M|) + m 2π/P < 0.

In a simpler scenario where the surrounding media are air (ε_r = μ_r = 1) and the waves impinge normally (θ_0 = 0) on the space-time metasurface, eq. (<ref>) reduces to:

-|1 + n T_0/T_M| + m λ_0/P > 0
+|1 + n T_0/T_M| + m λ_0/P < 0.

The former expressions show that, for a fixed temporal macroperiod T_M, there will be a greater number of evanescent space-time harmonics as the spatial periodicity P reduces. Likewise, for a fixed P, there will be a greater number of evanescent harmonics as T_M increases. Moreover, all the (0,n)-th harmonics (m=0, ∀n) are propagative under normal incidence. This is consistent with the results extracted in our previous work <cit.>, as the (0,n)-th Floquet harmonics are only driven by the time modulation. Evanescentes summarizes all the aforementioned phenomenology related to the space-time harmonics. In the figure, the vertical axis represents the spatial (m-indexed) harmonics and the horizontal axis represents the temporal (n-indexed) harmonics. The evanescent harmonics are marked in white and the propagative ones in black. Upon analyzing Evanescentes, the existence of a certain symmetry in both indexes is evident. In the case of normal incidence, this symmetry appears around the null of the propagation constant in both the vertical and horizontal planes. Therefore, seeking the roots of eq. (<ref>) under normal incidence, the (m,n)-th harmonic with β_mn = 0 emerges for m = 0 and n = -T_M/T_0. It corresponds to a DC harmonic (ω_n = 0) <cit.>, which is present at (m=0, n=-2) in Evanescentes(a)-(b) and at (m=0, n=-3) in Evanescentes(c). The phenomenology of the propagative and evanescent harmonics is much richer here than in the cases considered in our previous works <cit.>. In this work, we consider a space-time modulation, whereas in <cit.> we considered a time-only modulation. Time-only modulations led to scenarios where most of the harmonics are propagative and only a few are evanescent. In fact, it was shown in <cit.> that all time harmonics are propagative under normal-incidence conditions in the time-modulated metasurface. The situation is rather different, and more interesting from a physics and engineering perspective, when spatial and temporal modulations intervene together <cit.>.

§.§ Diffraction Angles and Beamforming

The diffraction angle of the (m,n)-th Floquet harmonic can be computed using:

θ_mn^(i) = asin( k_m / (√(ε_r^(i)μ_r^(i)) [ω_n/c]) ) = atan( k_m / β_mn^(i) )

θ_mn^(i) = asin( (√(ε_r^(1)μ_r^(1)) sin(θ_0) + m λ_0/P) / (√(ε_r^(i)μ_r^(i)) [1 + n T_0/T_M]) ).

At first sight, one can see in eq. (<ref>) that propagation below the XZ plane, i.e., negative angles θ_mn^(i), is allowed. A negative transverse wavenumber k_m (with real-valued β_mn^(i)) is easily achievable for some negative indexes m, which translates into a negative diffraction angle θ_mn^(i). This constitutes a major difference with respect to our previous works on time-only metasurfaces, where negative diffraction angles were forbidden <cit.>. Additionally, eq. (<ref>) reveals that most of the diffraction harmonics tend to approach the normal (θ_mn→0) when the temporal period T_M decreases or, analogously, when the time-modulation frequency ω_M increases, with the exception of the (m,0)-th harmonics, which are not affected by the time modulation. Therefore, the diffraction pattern produced here is of a much richer nature, and the beamforming capabilities of the present space-time metasurface are expected to significantly exceed those of the previous space-only and time-only configurations.
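To make the harmonic classification and the diffraction angles above concrete, the following minimal Python sketch evaluates β_mn^(i) and prints the angle of each propagative order. All numerical values (f_0, P, T_M, the index ranges) are placeholder assumptions for illustration, not the exact parameters behind the figures.

```python
import numpy as np

# Placeholder parameters: air on both sides, normal incidence.
c = 3e8                      # speed of light [m/s]
f0 = 30e9                    # incident frequency [Hz]
w0 = 2 * np.pi * f0
T0 = 1.0 / f0
P = 10e-3                    # spatial period [m]
T_M = 4 * T0                 # temporal macroperiod [s]
theta0 = 0.0                 # incidence angle [rad]
kt = (w0 / c) * np.sin(theta0)

def beta_mn(m, n):
    """Propagation constant of the (m,n)-th space-time harmonic."""
    wn = w0 + n * 2 * np.pi / T_M        # temporal harmonic frequency
    km = kt + m * 2 * np.pi / P          # transverse wavenumber
    return np.sqrt(complex((wn / c) ** 2 - km ** 2))

for m in range(-2, 3):
    for n in range(-5, 6):
        b = beta_mn(m, n)
        if abs(b.imag) > 1e-9:
            print(f"(m={m:+d}, n={n:+d}): evanescent")
        else:
            km = kt + m * 2 * np.pi / P
            theta = np.degrees(np.arctan2(km, b.real))  # diffraction angle
            print(f"(m={m:+d}, n={n:+d}): propagative, theta = {theta:6.2f} deg")
```

With these placeholder values the sketch reproduces the qualitative trends discussed above: all (0,n)-th orders come out propagative under normal incidence, and the propagative set shrinks as P decreases or T_M grows.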
§.§ Reflection/Transmission Coefficients

The derivation of the scattering parameters, i.e., the reflection and transmission coefficients, requires prior knowledge of the electric-field profile at the space-time discontinuity. This is an insightful technique that has been successfully applied in the past to the analysis of 1D-spatial <cit.>, 1D-time <cit.>, 2D-spatial <cit.> and 3D-spatial <cit.> periodic systems. Formally, the space-time profile, also referred to as the basis function (bf), can be described by the function 𝐄_bf(y,t), which depends on space and time. The application of the continuity of the electric fields through the space-time interface (z=0),

𝐄_t^TE/TM,(1)(y, 0, t) = 𝐄_t^TE/TM,(2)(y, 0, t) = 𝐄_bf(y,t),

leads to

1 + R = T
1 + R = E_bf(k_0, ω_0)/(P T_M)
E_mn^(1) = E_mn^(2) = E_bf(k_m, ω_n)/(P T_M),

with

E_bf(k_m, ω_n) = ∫_-P/2^P/2 ∫_0^T_M E_bf(y,t) e^{-j(ω_n t + k_m y)} dy dt.

Note that E_bf(y,t) is simply the instantaneous amplitude of the vector 𝐄_bf(y,t) representing the basis function, and E_bf(k_m, ω_n) is its Fourier transform. From eqs. (<ref>) and (<ref>), we may also obtain that

E_mn^(1) = E_mn^(2) = (1 + R) N(k_m, ω_n),

where

N(k_m, ω_n) = E_bf(k_m, ω_n)/E_bf(k_0, ω_0)

is a term that accounts for the coupling between the fundamental 00-th harmonic and the corresponding (m,n)-th space-time harmonic. Now, the continuity of the instantaneous Poynting vector is imposed at the space-time interface (z=0). The power passing through the interface is evaluated over a temporal period T_M and over a spatial period P. This leads to

∫_-P/2^P/2 ∫_0^T_M 𝐄_bf(y,t) × 𝐇_t^TE/TM,(1)(y, 0, t) dy dt = ∫_-P/2^P/2 ∫_0^T_M 𝐄_bf(y,t) × 𝐇_t^TE/TM,(2)(y, 0, t) dy dt.

By inserting the values of eqs. (<ref>), (<ref>) (TE incidence) or eq. (<ref>) (TM incidence) and eq. (<ref>) into eq. (<ref>) and operating, we obtain:

(1 - R) Y_00^TE/TM,(1) - (1 + R) ∑_mn ≠ 00 Y_mn^TE/TM,(1) |N(k_m,ω_n)|^2 = (1 + R) Y_00^TE/TM,(2) + (1 + R) ∑_mn ≠ 00 Y_mn^TE/TM,(2) |N(k_m,ω_n)|^2.

By manipulating (<ref>), the reflection coefficient R can be expressed as

R = (Y_00^TE/TM,(1) - Y_00^TE/TM,(2) - Y_eq) / (Y_00^TE/TM,(1) + Y_00^TE/TM,(2) + Y_eq).

The term Y_eq, which groups the double sums in eq. (<ref>), is the equivalent admittance that models the space-time metasurface. It is computed as

Y_eq = ∑_∀ mn ≠ 00 |N(k_m,ω_n)|^2 (Y_mn^TE/TM,(1) + Y_mn^TE/TM,(2)).

§.§ Circuit Topology

Eqs. (<ref>) and (<ref>) are circuitally interpreted by the topology shown in Admitancias. In general, eq. (<ref>) reveals that the circuit model associated with the analytical framework consists of two semi-infinite transmission lines modeling the input (1) and output (2) media, and an equivalent admittance Y_eq that models the space-time metasurface [see Admitancias(b)]. A deeper look into eq. (<ref>) shows that the equivalent admittance is internally formed by a sum of the admittances (parallel connections of semi-infinite transmission lines) associated with each space-time Floquet harmonic. Each of the parameters N(k_m,ω_n) taking part in eq. (<ref>) is interpreted as a complex transformer that accounts for the coupling between the (m,n)-th harmonic and the fundamental (00-th) one. Each admittance of each Floquet harmonic is loaded with a complex transformer, except for the fundamental harmonic, whose value is N(k_0,ω_0) = 1. This is sketched in Admitancias(a). Note that, for the sake of visualization, only a few harmonics of the double infinite sum are represented in Admitancias(a), and that the transmission lines of the fundamental harmonic are marked in blue.
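The reflection coefficient of eq. (<ref>) can then be assembled directly from these admittances. The sketch below reuses the globals (c, w0, kt, P, T_M) of the previous snippet and is only schematic: N_coef is a hypothetical stand-in for the Fourier-transform routine of the basis function, and both media are assumed to be air.

```python
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi

def Y_TE(m, n, eps_r=1.0, mu_r=1.0):
    """TE admittance of the (m,n)-th harmonic, eqs. (6)-(7)."""
    wn = w0 + n * 2 * np.pi / T_M
    km = kt + m * 2 * np.pi / P
    beta = np.sqrt(complex(eps_r * mu_r * (wn / c) ** 2 - km ** 2))
    return beta / (mu_r * mu0 * wn)

def reflection(N_coef, orders):
    """R from eqs. (17)-(18). `orders` must exclude (0,0) and the DC
    harmonic (omega_n = 0); `N_coef(m, n)` plays the role of N(k_m, w_n)."""
    Y1 = Y2 = Y_TE(0, 0)              # identical input/output media assumed
    Y_eq = sum(abs(N_coef(m, n)) ** 2 * 2 * Y_TE(m, n)   # Y^(1)+Y^(2)=2Y here
               for m, n in orders)
    R = (Y1 - Y2 - Y_eq) / (Y1 + Y2 + Y_eq)
    return R, 1 + R                   # T = 1 + R from field continuity
```

Because the double sum is evaluated term by term, a wideband |S_11| curve amounts to calling reflection once per frequency point, which is the origin of the runtime advantage over the FDTD code discussed later.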
At first sight, the circuit topologies shown in Figs. <ref>(a) and <ref>(b) coincide with those shown in previous works on space-only and time-only metamaterial configurations <cit.>. However, the values of the forming admittances and complex transformers, which govern the diffraction behavior of the metasurface, are completely different. In (three-dimensional) space-only systems, the complex transformers are a function of the transverse wavenumbers k_m, k_n and k_l. In time-only problems, they are a function of the angular frequency ω_n. In spatiotemporal systems, the complex transformers are a function of the transverse wavenumbers plus the angular frequency. It was shown that higher-order harmonics in space-only and time-only metasurfaces described by the present topology are of evanescent and propagative nature, respectively. As a result, higher-order harmonics contribute capacitive and/or inductive terms in space-only (patch-based and aperture-based) metasurfaces <cit.>, while they contribute a purely resistive term in time-only configurations <cit.>. The scenario is rather more complex in the present space-time metasurface. Higher-order harmonics fulfilling |n|≫|m| are predominantly temporal. Their propagation constant is frequency-independent and real-valued. It can be approximated as

β_mn^(i)|_{|n|≫|m|} ≈ |2πn/(c T_M)| √(ε_r^(i)μ_r^(i)).

By using eqs. (<ref>) and (<ref>), the admittances of the (m,n)-th harmonic read in this case:

Y_mn^TM/TE,(i)|_{|n|≫|m|} ≈ Y_00 √(ε_r^(i)/μ_r^(i)).

Therefore, all the space-time harmonics with |n|≫|m| contribute together with a purely resistive term. On the other hand, higher-order harmonics fulfilling |m|≫|n| are predominantly spatial. Their propagation constant is frequency-independent and imaginary:

β_mn^(i)|_{|m|≫|n|} ≈ -j (2π/P)|m|.

The associated admittances read:

Y_mn^TM,(i)|_{|m|≫|n|} ≈ j(ω_0 + nω_M) ε_r^(i) ε_0 P/(2π|m|)
Y_mn^TE,(i)|_{|m|≫|n|} ≈ 2π|m| / (j(ω_0 + nω_M) μ_r^(i) μ_0 P),

where ω_M = 2π/T_M is simply the angular frequency of the time modulation. From the former expressions, it can be identified that an (m,n)-th higher-order TM harmonic fulfilling |m|≫|n| contributes two parallel capacitors. The first capacitor depends on the frequency of the incident wave ω_0, and the second one on the time modulation of the space-time metasurface, ω_M. When considering higher-order terms (|n|≫1), the latter capacitor is dominant, making the time modulation of the metasurface truly relevant in this case. Similarly, an (m,n)-th higher-order TE harmonic fulfilling |m|≫|n| contributes two series inductors, the one associated with the frequency ω_M being dominant. Therefore, space-time harmonics with |m|≫|n| contribute jointly a purely capacitive or inductive term, depending on the type of wave incidence. The propagation constant of higher-order harmonics that fulfill |m|∼|n| (|m|, |n| ≫ 1) is still frequency-independent, but does not lead to a simple classification of the harmonic contributions into purely resistive or purely capacitive/inductive. Depending on the surrounding media (ε_r and μ_r) and the electrical parameters of the space-time modulation (P and T_M), the propagation constant β_mn^(i) will be real or imaginary, and the (m,n)-th higher-order harmonic fulfilling |m|∼|n| will contribute either a resistive or a capacitive/inductive term.
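A quick numerical check of the predominantly-temporal limit is possible with the Y_TE helper above: as |n| grows with m fixed, the TE admittance should tend to the free-space wave admittance, i.e., a pure resistance. The parameter values are again the placeholder assumptions of the earlier snippets.

```python
Y00 = np.sqrt(eps0 / mu0)              # free-space wave admittance
for n in (5, 20, 100):                 # m = 1 fixed, so |n| >> |m|
    print(n, abs(Y_TE(1, n)) / Y00)    # ratio approaches 1, cf. eq. (21)
```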
Conversely, low-order harmonics have an associated propagation constant that is frequency-dependent and a function of the incidence angle θ_0, among other parameters. As a consequence, low-order terms cannot be decoupled into purely spatial or purely temporal contributions. Thus, they cannot be directly associated with resistors, capacitors or inductors. To maintain the wideband behavior of the analytical approach, low-order harmonics should be analyzed directly with their complex admittances Y_mn^TE/TM,(i). Therefore, the whole equivalent admittance Y_eq in eq. (<ref>) can be divided into three main parallel contributions: a complex and frequency-dependent admittance Y_eq^(lo) describing the effect of low-order harmonics, a resistor R^(hi) modeling predominantly-temporal higher-order TM/TE harmonics, and either a capacitor C^(hi) (TM incidence) or an inductor L^(hi) (TE incidence) modeling predominantly-spatial higher-order harmonics. If needed, further approximations can be made in the quasi-static regime (low frequencies: ω_0 ≪ 2π/T_M and ω_0/c ≪ 2π/P) in order to neutralize the frequency dependence of the complex low-order admittance Y_eq^(lo). However, this comes at the expense of reducing the operation bandwidth of the wideband analytical model. In the quasi-static regime, the time modulation truly dictates the behavior of the space-time harmonics, i.e., ω_n ≈ nω_M and k_m ≈ 2πm/P. Thus, the admittance of the (m,n)-th harmonic under a quasi-static (qs) approximation is

Y_mn^TM,(i),qs ≈ ε_r^(i) ε_0 n ω_M / √(ε_r^(i)μ_r^(i) (n ω_M)^2 - (2πm/P)^2)
Y_mn^TE,(i),qs ≈ √(ε_r^(i)μ_r^(i) (n ω_M)^2 - (2πm/P)^2) / (μ_r^(i) μ_0 n ω_M),

which can only be purely real or reactive, depending on the selected (m,n)-th harmonic. As a consequence, the equivalent admittance in eq. (<ref>) can be reduced in the quasi-static case to a parallel connection of a resistor R^(hi),qs and a capacitor C^(hi),qs (TM incidence), or of the resistor R^(hi),qs and an inductor L^(hi),qs (TE incidence).

§.§ Basis Functions

As detailed in Section <ref>, knowledge of the tangential electric-field profile 𝐄_bf(y,t) is crucial for the analysis of the space-time metasurface via the analytical method. In this subsection, we describe the analytical expressions for the space-time basis functions involving the three states under consideration: air (A), conductor (C) and grating (G).

§.§.§ Air state (A)

In the air state, the space-time metasurface vanishes from an electromagnetic perspective. Therefore, the basis function that models the air state coincides with the electric profile of the incident plane wave; namely,

E_bf(y,t) = A sin(ω_0 t), ∀ y ∈ [-P/2, P/2], t ∈ t_A,

where t_A is the time in which the space-time metasurface is in the air state.

§.§.§ Conductor state (C)

In the conductor state, the space-time metasurface turns into an infinitesimally-thin metallic sheet. If losses are neglected, the metallic sheet can be modeled with PEC conditions, so the tangential electric field must be zero at its surface.
This leads to the basis function

E_bf(y,t) = 0, ∀ y ∈ [-P/2, P/2], t ∈ t_C,

where t_C is the time in which the metasurface is in the conductor state.

§.§.§ Grating state (G)

The grating state can be modeled with the following basis function

E_bf(y,t) = A sin(ω_0 t) [1-(2y/W)^2]^{1/2} (TE polarization), A sin(ω_0 t) [1-(2y/W)^2]^{-1/2} (TM polarization),

defined in the slit region (-W/2 ≤ y ≤ W/2) and in the time interval t ∈ t_G, the time in which the space-time metasurface is in the grating state. This basis function models the field excitation and charge distribution in a conventional one-dimensional diffraction grating illuminated by a sinusoidal plane wave of frequency ω_0. As previously introduced, a wise combination (A-G, C-G) of the states in a time-periodic scheme can lead to a rich diffraction phenomenology from which beamformers can benefit. The top panel of Fig4(a) sketches the scenario where the space-time metasurface alternates between the conductor and grating states (C-G case). As shown, the scheme is periodic in space and time, with periods P and T_M, respectively. Moreover, the terms t_C and t_G indicate the times during which the metasurface is in the conductor and grating states. Note that T_M = t_C + t_G in this case to retain time periodicity. The basis function that models the C-G scenario is plotted at the bottom of Fig4(a) for TE and TM incident polarizations. Null regions indicate that the space-time metasurface is in the conductor state [eq. (<ref>)], while ∩- and ∪-shaped regions [eq. (<ref>)] are associated with the grating state under TE and TM incidence, respectively. A similar rationale applies to the air-grating (A-G) case, sketched on the left side of Fig4(b). The A-G configuration is periodic in space and time, with periods P and T_M, respectively. In this case, the time period is constituted by two addends, T_M = t_A + t_G, related to the times during which the metasurface is in the air and grating states. The basis function of the A-G case is illustrated on the right side of Fig4(b). ∩/∪-shaped regions indicate that the space-time metasurface is in the grating state, while sinusoidal regions indicate that the metasurface is in the air state [eq. (<ref>)]. Transients between the air, conductor and grating states are neglected throughout the document. In general, the electric field at the space-time metasurface takes a nonzero time to reach the steady state associated with the considered metasurface state. However, transient phenomena have no appreciable impact on the diffraction pattern as long as the considered operating frequencies are sufficiently low. In our particular case, we must ensure operation far below the plasma frequency ω_p of the metal under consideration, and time periods (T_0 and T_M) much greater than the relaxation time τ of the free charges. Metals that are good conductors, such as copper, aluminum, silver or gold, have plasma frequencies of the order of ω_p ∼ 10^15 rad/s and relaxation times τ ∼ 10^-14 s. Thus, transient times are several orders of magnitude shorter than the periods of the incident wave and time modulation at radio, microwave and millimeter-wave frequencies. In practice, the operation of the analytical approach is limited to frequencies up to the low-THz regime. The different states of the structure can also be invoked by considering reconfigurable metasurfaces, which in this case would consist of a reconfigurable grating. Reconfiguration is realized via active elements such as diodes or varactors <cit.>, or via electronic materials such as graphene <cit.> or Ge_2Sb_2Te_2 <cit.>. The electronic effect of these elements on the grating is tuned to vary periodically, invoking periodic cycles in which the states of full transparency (A), full reflectivity (C), or the natural grating (G) interchange in the sense described above. This is possible thanks to the excitation of resonant states in the grating through these elements. The existence of a transient regime when passing from one state to another is neglected, since we assume the transient time to be sufficiently short in comparison with the rest of the times (t_C, t_A, t_G) taking part in a periodic cycle. Small transient times have negligible effects on the Fourier transform of the electric-field profile at the discontinuity <cit.>.
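Returning to the basis functions, the profile E_bf(y,t) and its Fourier coefficients in eq. (<ref>) are easy to evaluate numerically. The sketch below (reusing the globals of the previous snippets) is a brute-force version assuming a C-G schedule with duty cycle D; production code would exploit the closed-form transforms of the ∩/∪ profiles instead of numerical quadrature.

```python
import numpy as np

def E_bf(y, t, state, A=1.0, W=5e-3, pol="TE"):
    """Field profile at z = 0 for the three states (eqs. (24)-(26))."""
    if state == "C":                  # conductor: PEC, zero tangential field
        return 0.0
    if state == "A":                  # air: the incident sinusoid
        return A * np.sin(w0 * t)
    if abs(y) > W / 2:                # grating: field only inside the slit
        return 0.0
    shape = 1.0 - (2.0 * y / W) ** 2
    edge = np.sqrt(shape) if pol == "TE" else 1.0 / np.sqrt(max(shape, 1e-9))
    return A * np.sin(w0 * t) * edge  # TM edge singularity is regularized

def E_bf_mn(m, n, schedule, Ny=201, Nt=400):
    """Numerical version of eq. (27); `schedule(t)` returns 'A', 'C' or 'G'."""
    ys = np.linspace(-P / 2, P / 2, Ny)
    ts = np.linspace(0.0, T_M, Nt, endpoint=False)
    km = kt + m * 2 * np.pi / P
    wn = w0 + n * 2 * np.pi / T_M
    vals = np.array([[E_bf(y, t, schedule(t)) * np.exp(-1j * (wn * t + km * y))
                      for y in ys] for t in ts])
    return np.trapz(np.trapz(vals, ys, axis=1), ts)

D = 0.5                                    # duty cycle of the C-G modulation
cg_schedule = lambda t: "C" if (t % T_M) < D * T_M else "G"
```

Dividing E_bf_mn(m, n, cg_schedule) by E_bf_mn(0, 0, cg_schedule) yields the coupling transformer N(k_m, ω_n) used by the equivalent circuit.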
The considered basis functions also limit the frequency range of the analytical approach. Specifically, the most constraining basis function is the one that models the grating state. It was reported in previous works on spatially-periodic metasurfaces that the maximum operating frequency is limited, from a practical perspective, to f_max ∼ 1.5c/P <cit.>. Thus, the proposed analytical approach works in a wide frequency range, even within the grating-lobe regime. In the case of oblique incidence, the analytical approach is expected to work well for both TM and TE polarization states, but the maximum operating frequency may reduce to f_max ∼ c/P for large incidence angles. Moreover, the slit width should be less than W ≲ 0.7P to obtain accurate results. That is, the analytical approach is wideband, robust to large incidence angles and not strictly constrained to the subwavelength regime. Taking into account the aforementioned limitations, from an engineering perspective, we will be able to excite many propagative harmonics if we go into the diffraction regime with P ≫ W. This scenario leads to a richer diffraction pattern.

§ WIRELESS APPLICATIONS

In this section, we test the capabilities of the space-time metasurface for wireless applications. We show how the electrical tuning of the different space-time parameters can lead to a rich diffraction phenomenology, mixing frequencies, targeting specific angular regions and favoring transmission or reflection. Furthermore, we validate the present analytical approach by comparing the results with full-wave simulations in a self-implemented finite-difference time-domain (FDTD) code.

§.§ Harmonic Distribution

We start by considering the case in which the space-time metasurface alternates between the conductor and grating states. The basis function that models the C-G case is mathematically described by a combination of eqs. (<ref>) and (<ref>). C-G_Tm depicts the obtained normalized amplitudes of the Floquet harmonics under normal TE and TM incidence. In all these simulations, the incidence of a monochromatic wave with a frequency of 30 GHz has been assumed. The results are obtained with the proposed analytical model and then compared with our FDTD code. The agreement between the analytical and FDTD results is good, thus validating the present approach. The harmonic pattern shows similarities for TE and TM incidence. Note that two main lobes appear in C-G_Tm for n=0 and n=-16. These indexes correspond to the frequencies ω_n = ±ω_0, respectively. However, in the TM case, the amplitude of the spatial (m-indexed) harmonics shows a slower decay; namely, a larger number of spatial modes is needed to accurately reconstruct the basis function. This is due to the singularity of the spatial profile, present in the TM case but not in the TE case.
On the other hand, the harmonic pattern under TE incidence is mainly governed by the temporal (n-indexed) harmonics. C-G_Tm(b) presents the Floquet harmonic pattern in the case of reducing the time period from T_M=8T_0 to T_M=2T_0. The peaks are observed for the temporal harmonics n=0 and n=-4. As illustrated, the two main lobes approach each other when the time period is reduced (i.e., when the switching frequency of the metasurface increases). The spatial and temporal harmonic dependencies are decoupled, as eqs. (<ref>)-(<ref>) impose and C-G_Tm confirms. Thus, the behaviour of the spatial modes for both incidences is not affected by modifying the time period. Another temporal parameter of interest is the duty cycle D <cit.>. The duty cycle is a dimensionless parameter that relates the amount of time that the space-time metasurface spends in each of the states. As will be discussed later, the duty cycle plays an important role in controlling the amplitude of the diffracted, reflected and transmitted waves. For the present C-G case, the metasurface remains in the conductor state for a time DT_M, with T_M being the temporal macroperiod. Otherwise, it remains in the grating state for (1-D)T_M seconds. The first and second rows of C-G_D_W(a) show the harmonic pattern when the duty cycle is D=0.25 and D=0.75, respectively. Increasing the duty cycle in this case implies a greater presence of the grating state. Therefore, the width of the main lobes narrows along the temporal (n-indexed) axis, as the spatiotemporal field profile (basis function) becomes more similar to that of a pure sine. The last spatial parameter of interest is the relation between the slit width W and the spatial period P; namely, the dimensionless ratio W/P. Logically, decreasing the slit width in a grating implies an increase in the reflection from the structure. From a harmonic perspective [see the harmonic pattern in C-G_D_W(b)], decreasing the ratio W/P causes the width of the main lobes to widen. This is especially noticeable in the TM case, as the basis function includes two singularities at the edges of the slits (positions y = ±W/2 within the unit cell) in a ∪-shaped profile. The two singularities get closer as W/P reduces. As a consequence, more spatial modes are needed to reconstruct the tangential electric-field profile, due to the rapid spatial variation of the basis function along y.

§.§ Air-Grating (A-G)

For completeness, we now study the scenario where the space-time metasurface alternates between the air and grating states. The basis function that models the A-G case is described by a combination of eqs. (<ref>) and (<ref>) in a time-periodic scheme. The results are shown in A-G_Emn for both TE and TM incidence. A good agreement is also observed in this case between the analytical and FDTD results. While maintaining the grating state, replacing the conductor state by the air state has physical implications for the formation of the harmonic patterns. The air state is ruled by a temporal sinusoidal basis function, as Fig4 illustrates. Therefore, in a metasurface switching between the air and grating states, the majority of the power is carried by the fundamental harmonics. In other words, the width of the main lobes is narrower compared to the previous C-G scenario. This is shown in A-G_Emn.
Nonetheless, the different effects of TE/TM incidence on the spatial modes are still perceptible.

§.§ Diffracted Waves

To gain a more comprehensive understanding of the diffraction patterns of these metasurfaces, R_vs_T(a) and R_vs_T(b) illustrate the electric-field distributions of the A-G and C-G cases, respectively, extracted with the self-implemented FDTD code. The same duty cycle D=0.5 is maintained in both configurations for a fair comparison. In general, higher transmission is observed for TM polarization than for TE polarization. This is in line with the results obtained in previous works on spatially-modulated surfaces <cit.>. However, when the metasurface alternates between the air and grating states [A-G configuration in R_vs_T(a)], a high degree of transmission is generally seen. In contrast, when the metasurface switches between the conductor and grating states [C-G configuration in R_vs_T(b)], high reflection is produced. This is a somewhat expected but interesting feature from an engineering perspective, since the air and conductor states tend to favor transmission and reflection, respectively. This feature of the space-time metasurface, combined with the tuning of the duty-cycle parameter, can be exploited in practice for the efficient design of reflection-based, transmission-based or mixed reflection-transmission-based beamformers. From a physical perspective, it is noteworthy that the incorporation of spatial modulation by the grating leads to the emergence of spatial modes in the upper (y≥0) and lower (y<0) half-spaces. This constitutes a major difference with respect to our previous works on time-only modulated metamaterials, where only diffraction in the upper half-space was permitted <cit.>. Thus, the present space-time metasurface creates a richer diffraction pattern compared to time-only configurations.

§.§ Frequency Mixing

Frequency mixing is vital for physics and engineering applications such as radio astronomy, telecommunications and radar systems. Traditionally, nonlinear components such as Schottky diodes, GaAs FETs and CMOS transistors are used for this purpose. In this regard, temporal and spatiotemporal metamaterials possess the inherent ability to mix frequencies <cit.>. This is essentially due to "temporal" diffraction and the production of higher-order Floquet harmonics ruled by the condition ω_n = ω_0 + 2πn/T_M [see eq. (<ref>)]. Thus, the proposed space-time metasurface can produce new frequencies, ω_n, from the frequency of the incident wave, ω_0. Interestingly, eq. (<ref>) suggests that two different space-time modulations can lead to the same diffraction angle, but with different output frequencies, by simply tuning their space-time parameters. In order to show this frequency-mixing effect, let us consider the following two modulations. Modulation #1 is defined by the (unprimed) space-time parameters {m, n, P, T_M}. Similarly, modulation #2 is defined by the (primed) parameters {m', n', P', T'_M}. This leads to the diffraction angles θ_mn and θ_m'n' (associated with modulations #1 and #2, respectively), given by

θ_mn = asin( (k_t + 2πm/P) / ( (2π/(c T_0)) √(ε_r^(1)μ_r^(1)) (1 + n T_0/T_M) ) )
θ_m'n' = asin( (k_t + 2πm'/P') / ( (2π/(c T_0)) √(ε_r^(1)μ_r^(1)) (1 + n' T_0/T'_M) ) ).

Nonetheless, both configurations are expected to have the same spatial period (P' = P), since this parameter cannot be electronically reconfigured in practice, unlike the duty cycle D or the time period T_M. Under this assumption, by equating eqs.
(<ref>) and (<ref>), we have

(k_t + 2πm/P) / ( (2π/(c T_0)) √(ε_r^(1)μ_r^(1)) (1 + n T_0/T_M) ) = (k_t + 2πm'/P) / ( (2π/(c T_0)) √(ε_r^(1)μ_r^(1)) (1 + n' T_0/T'_M) ).

The former equation reduces under normal incidence (k_t = 0) to

m/m' = (1 + n T_0/T_M) / (1 + n' T_0/T'_M).

Expressions (<ref>) and (<ref>) give us the design equations to produce frequency mixing between harmonics of different orders ({m,n} and {m',n'}) while maintaining the same output diffraction angle. This can be done by simply adjusting the temporal periods of the two modulations, T_M and T'_M. mixer shows a practical example of frequency mixing. For simplicity, we have selected a scenario ruled by normally-incident waves. However, the previous discussion can be straightforwardly extended to the design of spatiotemporal frequency mixers that operate under oblique-incidence conditions by using eq. (<ref>). In the figure, a monochromatic plane wave of frequency ω_0 impinges normally on a C-G space-time metasurface (located at z/λ_0=0). As seen, the metasurface creates a rich diffraction pattern, where only the transmitted diffracted fields are plotted for a better visualization of the results. With the design information given by eq. (<ref>), the temporal periods of mixer(a), <ref>(b) and <ref>(c) have been carefully chosen. The underlying idea is to keep the same diffraction angle while varying the output frequencies in the three configurations shown. For instance, it can be seen that the diffraction orders (m=2, n=2) and (m'=1, n'=1) in mixer(a) and <ref>(b) share the same diffraction angle (θ_22 = θ_1'1' = 33.72^o) while their output frequencies are different (ω_2 = 3ω_0, ω_1' = 1.5ω_0). The aforementioned diffraction orders are marked with a blue line in mixer(a) and <ref>(b). Similarly, the diffraction orders (m=3, n=3) and (m'=1, n'=1), marked with a black line in mixer(a) and <ref>(c), have the same diffraction angle (θ_33 = θ_1'1' = 38.65^o), while their output frequencies are ω_3 = 4ω_0 and ω_1' = 1.33ω_0, respectively. As this example seeks to illustrate, the capabilities of the present metal-based metasurface are promising for the production of frequency mixers and frequency multipliers based on space-time architectures that could be integrated into wireless communication systems.
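The design rule above is simple to apply numerically. The sketch below solves eq. (<ref>) for T'_M under normal incidence; with T_M = T_0 it reproduces the output-frequency pairs quoted for mixer (3ω_0 vs. 1.5ω_0, and 4ω_0 vs. 1.33ω_0), although the exact figure parameters (e.g., the value of P) are not asserted here.

```python
def t_prime(m, n, T_M, mp, npr, T0):
    """Solve m/m' = (1 + n*T0/T_M) / (1 + n'*T0/T'_M) for T'_M
    (frequency-mixing design rule under normal incidence)."""
    rhs = (mp / m) * (1 + n * T0 / T_M)
    return npr * T0 / (rhs - 1.0)

T0 = 1.0                                # work in units of T0
T_M1 = 1.0 * T0                         # modulation #1: omega_2 = 3*omega_0
T_M2 = t_prime(2, 2, T_M1, 1, 1, T0)    # -> 2*T0, so omega_1' = 1.5*omega_0
T_M3 = t_prime(3, 3, T_M1, 1, 1, T0)    # -> 3*T0, so omega_1' = 1.33*omega_0
```

In other words, only the electronically-tunable temporal period needs to change to steer a new output frequency into a previously used direction, which is precisely what makes the time parameters the natural design knob for mixing.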
§.§ Transmissive Beamformer

The main purpose of this section is to exploit the performance of these structures as transmissive beamformers. Thus, we test the A-G configuration by changing the spatial and temporal periods. For a better visualization of the simulations, only the transmissive part of the diagrams is plotted. The A-G metasurface is placed at the beginning of the simulation space (z/λ_0=0). The most important diffraction orders have been marked with an arrow to facilitate their visualization, and their space and time indexes (m,n) have been included to indicate their nature. Thus, A-G_beam(a) depicts the diffraction pattern of a monochromatic wave that impinges on the spatiotemporal metasurface under oblique TM incidence (θ_0=40^o). Note how, in the upper part of the transmission plane (y/λ_0>0), the diffraction angles must be similar to the time-only case, since the (m=0,n)-th harmonics are implied. However, the beamforming capabilities are now increased due to the emergence of new space-time harmonics in the lower part of the transmission diagram (y/λ_0<0). In these examples, the emergence of the (-1,n)-th harmonics is noticeable. A-G_beam(b) shows the same scenario but with a slight increase of the spatial period (from P=0.7λ_0 to P=λ_0). The rest of the parameters, T_M=4T_0, W=0.5P and D=0.25, are kept fixed. A-G_beam(b) illustrates how the (m=-1,n)-th harmonics approach the normal as the spatial period P and the temporal index n increase. This is well predicted by eq. (<ref>). Moreover, notice how the temporal (m=0,n)-th harmonics are not affected by a change in the spatial period, since the temporal period is the same in both A-G_beam(a) and <ref>(b). In contrast, A-G_beam(c)-(d) depict the phenomenology when the temporal period is modified. For simplicity, TE normal incidence is assumed and the slit width is increased to W = 0.6P in order to achieve higher transmission. Furthermore, a spatial period greater than the wavelength of the incident wave is imposed to enrich the diffraction diagram, since the first (spatial) grating lobes are excited. A-G_beam(c) shows the diffraction pattern for the same time period as considered throughout the previous section of the manuscript (T_M=4T_0). In this case, a symmetric pattern is observed where the temporal (0,n)-th harmonics share the same direction (θ_0n=0^o). Moreover, all the (0,n)-th harmonics are propagative due to their temporal nature and the normal-incidence conditions. However, since spatial modulation has been included, new spatiotemporal harmonics emerge symmetrically in both the upper and lower parts of the diagram. Finally, in A-G_beam(d), it is observed how the higher-order (m,n)-th harmonics approach the normal when the temporal period is reduced (from T_M=4T_0 to T_M=T_0), as predicted in Section <ref>. Table <ref> provides a comparison of the diffraction angles for the cases shown in A-G_beam. They have been extracted with the analytical approach [by means of eq. (<ref>)] and the FDTD method. A good agreement is observed between the analytical and numerical results. Frequency-mixing phenomena can be appreciated in A-G_beam(c) and <ref>(d) as well. As discussed in Section <ref>, two different space-time modulations can lead to different output frequencies while maintaining the same diffraction angle. This is the case for the diffraction orders (m=1, n=1) in A-G_beam(c) and (m'=4, n'=4) in A-G_beam(d). Both diffraction orders have the same diffraction angle (θ_11 = θ_4'4' = 41.77^o) but different output frequencies (ω_n=1 = 1.25ω_0, ω_n'=4 = 5ω_0). In the general case, eqs. (<ref>) and (<ref>) can be applied to the design of frequency mixers under oblique and normal incidence, respectively, by simply tuning the space-time parameters of the modulation. The key design parameter is the time period, as the spatial period cannot be easily reconfigured electronically in the current metastructure. The present discussion opens the way to the design of spatiotemporal frequency mixers based on aperiodic metallic structures, which are expected to provide more degrees of freedom in design and ease of control than purely-periodic metallic ones <cit.>.

§.§ Reflective Beamformer

In this subsection we test the tunability of the reflection coefficient of the C-G beamformer. C-G_beam depicts the analysis of the reflection coefficient as a function of the frequency of the incident wave when the temporal (T_M=4T_0, f_0 = 4f_M) and spatial (P=10 mm) periods are fixed. Note that, in the present scenario, the frequency of the modulation increases as the frequency of the incident wave does.
Thus, the ratio f_0/f_M is fixed to a value of four. The main advantage of the analytical approach is the fast extraction of the scattering parameters at different frequency points. Note that the simulations carried out by means of the FDTD method [assuming a monochromatic impinging wave] take several minutes, depending on the mesh and the simulation space. In contrast, the circuital approach takes only a few seconds to simulate the whole frequency range. C-G_beam(a) shows the curves obtained by the proposed analytical method for the reflection coefficient |S_11| when varying the duty cycle D. In this case, DT_M represents the time in which the C-G metasurface is in the conductor state. FDTD simulations of the electric-field distributions are also included as insets. The black, blue and red solid lines are associated with the duty cycles D=0.75, D=0.5 and D=0.25, respectively. It can be seen how the amplitude of the reflection coefficient decreases as the space-time metasurface spends less time in the conductor (fully reflective) state; namely, as D decreases. This is in line with the former theoretical discussion and with the FDTD results inserted in C-G_beam(a). Moreover, the oblique incidence angle (θ_0=20^o) leads to the emergence of the grating lobe at approximately 23 GHz. This phenomenology is well captured by the circuital approach. It can also be appreciated in the FDTD results, as the grating lobes appear at 30 GHz (see the lower half-space) but not at 20 GHz, below the grating-lobe regime. Finally, C-G_beam(b) shows the amplitude of the reflection coefficient when varying the spatial ratio W/P with the duty cycle fixed. In this case, less reflection is observed as the spatial ratio (slit width W) increases. As C-G_beam(a) and <ref>(b) illustrate, a crucial difference between these two parameters should be noted. The modification of D and W is useful to adjust the reflection coefficient of the system, as explained in this section. However, as discussed in Section <ref>, the harmonic distribution of the diffraction pattern is also affected. Nevertheless, the majority of the (m,n)-th harmonics that can attain significant amplitudes when varying W are of evanescent nature. This is not an issue from the beamforming perspective, since this parameter remains fixed in the grating state. For this reason, this parameter is the best choice for setting the desired reflection coefficient of the system. On the other hand, when D is modified, many of the (m,n)-th harmonics that can attain significant amplitudes will be propagative. Thus, this property of the temporal parameters turns them into the best option for adjusting the beamsteering of the system.

§ CONCLUSIONS

In this manuscript, we have introduced a novel metal-based spatiotemporal metasurface for application in wireless communications systems. The metasurface can alternate its properties in time among three different states: "air" (fully transparent), "conductor" (fully reflective) and "grating" (partially transparent and reflective). By combining the three states in a time-periodic scheme, a rich diffraction pattern is created, which can be electronically reconfigured by setting the space-time parameters of the system. The physics of the space-time metasurface is described by means of an analytical technique based on equivalent circuits, Floquet-Bloch expansions and integral equations. We have considered scenarios where the metasurface is illuminated by oblique TE and TM plane waves.
The analytical technique has proven to be computationally efficient compared to general numerical approaches and the self-implemented FDTD code. Moreover, it provides physical insight into the diffraction spectrum, the nature (evanescent/propagative) of the space-time Floquet harmonics, and the scattering parameters. The results of this work show that the present space-time metasurface offers clear advantages compared to our previous time-only configuration. Here, we have exploited the inherent ability of the space-time metasurface to mix and multiply frequencies. We have shown that two or more output frequencies can be engineered so that their diffraction angles are the same. Finally, we have shown that efficient beamforming can be realized by tuning the space-time parameters of the system. Combinations of the conductor and grating states tend to reflect most of the diffracted power. Conversely, combinations of the air and grating states can enable efficient beamforming in transmission.

§ ACKNOWLEDGMENTS

This work has been supported by grant PID2020-112545RB-C54 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. It has also been supported by grants PDC2022-133900-I00, TED2021-129938B-I00 and TED2021-131699B-I00, and by Ministerio de Universidades and the European Union NextGenerationEU, under Programa Margarita Salas, by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR, and by IJC2020-043599-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
Dynamical polarization function, plasmons, their damping and collective effects in semi-Dirac bands Gabrielle Ross-Harvey^1, Andrii Iurov^1[E-mail contact: [email protected], [email protected]], Liubov Zhemchuzhna^1, Godfrey Gumbs^2,3, and Danhong Huang^4 January 14, 2024 ==============================================================================================================================

§ INTRODUCTION

The objective of multi-agent pathfinding (MAPF) is to find paths for multiple agents such that each agent reaches its goal without conflicting with other agents. Agents' paths conflict if their shapes overlap at any time. MAPF has applications in warehouses <cit.>, package delivery <cit.>, games <cit.>, firefighting <cit.>, search and rescue <cit.> and intersection management <cit.>. A significant amount of prior work has focused on "classic" MAPF with a state space represented by a grid or planar graph with unit-cost edges <cit.>. Therefore, in classic MAPF, all actions take one time step and agents always occupy exactly one vertex in a time step. These limitations simplify the problem, but cannot be applied to domains which may exhibit variable-size agents and continuous-time, variable-duration motion and wait actions. We seek optimal solutions to the continuous-time MAPF problem, denoted MAPFR for real-valued action durations and costs <cit.>, on general graphs (e.g., planar, non-planar, unit-cost and non-unit-cost graphs). Continuous-Time Conflict-Based Search (CCBS) <cit.> is a solver for MAPFR. CCBS re-formulates the Conflict-Based Search (CBS) algorithm <cit.> to allow variable-duration wait actions and constraints which account for continuous-time execution. CCBS was shown to be effective on several settings that are inspired by real-world applications. Additional enhancements, such as heuristics <cit.>, disjoint splitting (DS) <cit.> and conflict prioritization <cit.>, were added to CCBS <cit.>. These enhancements improve the runtime of CCBS. In contrast to other prior optimal continuous-time approaches <cit.>, which assume only fixed-duration wait actions, CCBS plans optimal, arbitrary-duration wait times. CCBS represents a significant advancement for MAPFR. However, conflict symmetries continue to pose a problem for this algorithm. A conflict symmetry <cit.> occurs when two or more agents are situated such that one or both agents must increase their path cost to avoid conflict. For optimal algorithms like CCBS, this means that many lower-cost alternate paths must be explored before proving that the cost increase is necessary. Conflict symmetries can cause an exponential amount of work to resolve <cit.>. In this paper, we address conflict symmetries by adapting and building new symmetry-breaking enhancements for CCBS: * We adapt the bypass (BP) enhancement <cit.>, originally formulated for CBS, to be used with CCBS. * We adapt biclique constraints (BC) <cit.>, originally formulated for CBS, to be used with CCBS. * We combine disjoint splitting (DS) <cit.>, as formulated for CCBS <cit.>, with BC: disjoint bicliques (DB). * We newly re-formulate BC for k-partite cliques and combine with DS: disjoint k-partite cliques (DK). This paper is organized as follows: We first provide a definition of the MAPFR problem. Next, we describe the CCBS algorithm and related work. This is followed by a description of the new symmetry-breaking enhancements.
Finally, we present a comprehensive ablation study of all the enhancements.

§ PROBLEM DEFINITION

MAPF was originally defined for a "classic" setting <cit.> where the movements of agents are coordinated on a grid, which is a 4-neighbor planar graph. Edges have a unit cost/unit time duration and agents occupy a point in space. Thus, two agents can only conflict when at the same vertex at the same time, or when traversing the same edge in opposite directions. MAPFR <cit.>, an extension of MAPF for real-valued action durations, uses a weighted graph G=(V,E) which may be non-planar. Every vertex v∈V is associated with coordinates in a metric space and every edge e∈E is associated with a positive real-valued edge weight w(e)∈ℝ_+. For the purposes of this paper, weights represent the times it takes to traverse edges. However, time duration and cost can be treated separately. There are k agents, A={1,..,k}. Each agent has a start and a goal vertex, V_s={start_1,..,start_k}⊆V and V_g={goal_1,..,goal_k}⊆V, such that start_i≠start_j and goal_i≠goal_j for all i≠j. A solution to a MAPFR instance is Π={π_1,..,π_k}, a set of single-agent paths which are sequences of states. A state s=(v,t) is a pair composed of a vertex v∈V and a time t∈ℝ_+. A path for agent i is a sequence of d+1 states π_i=[s_i^0,..,s_i^d], where s_i^0=(start_i,0) and s_i^d=(goal_i,t_g), where t_g is the time the agent arrives at its goal, and all vertices in the path follow edges in E. Agents have a shape which is situated relative to an agent-specific reference point <cit.>. Agents' shapes may vary, but this paper uses circular agents with a radius of √(2)/4 units. Agents move along edges along a straight vector in the metric space. Traversing an edge is called an action, a=(s,s'), where s and s' are a pair of neighboring states. Wait actions of any positive, real-valued duration are allowed at any vertex. Variable velocity and acceleration are allowed. For simplicity, this paper assumes fixed-velocity motion with no acceleration. A conflict happens when two agents perform actions ⟨a_i, a_j⟩ such that their shapes overlap at the same time <cit.>. A feasible solution has no conflicts between any pair of its constituent paths. The objective is to minimize the sum-of-costs c(Π)=∑_π∈Π c(π), where c(π) is the sum of edge weights of all edges traversed in π. We seek Π^*, a solution with minimal cost among all feasible solutions. Optimization of the classic MAPF problem is NP-hard <cit.>; hence, optimization of the MAPFR problem is also NP-hard.
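The definitions above translate into very small data structures. The sketch below is illustrative only; the names are not taken from any particular CCBS implementation, and it equates path cost with elapsed time (which coincides with summed edge weights when wait time is also charged, a common MAPFR convention).

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class State:
    vertex: int      # index into V
    time: float      # arrival time t in R+

Path = List[State]   # pi_i = [s^0, ..., s^d]

def path_cost(path: Path) -> float:
    """c(pi) under the elapsed-time convention described above."""
    return path[-1].time - path[0].time

def sum_of_costs(solution: List[Path]) -> float:
    """c(Pi): the optimization objective, summed over all agents."""
    return sum(path_cost(p) for p in solution)
```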
§ BACKGROUND

We now describe CCBS and other prior work.

§.§ Conflict-Based Search

Continuous-time Conflict-Based Search (CCBS) <cit.> is based on the classic Conflict-Based Search (CBS) <cit.> algorithm, so we describe CBS first. CBS performs search on two levels. The high level searches a constraint tree (CT). Each node N in the CT contains a solution N.Π and a set of constraints N.C. A constraint blocks an agent from performing the action(s) that caused a conflict and is defined as a tuple ⟨i,v,t⟩, where i is the agent, v is the vertex and t is the time. Each path π_i∈N.Π of agent i in the root node is constructed using a low-level search without taking any other agents into account. Next, CBS checks for conflicts between any pairs of paths π_i and π_j in N.Π. If N.Π contains no conflict, then N is a goal node and CBS terminates. If N.Π contains a conflict between some π_i and π_j, then CBS performs a split, meaning that it generates two child nodes N_i and N_j of N and adds constraints c_i and c_j to N_i.C and N_j.C, respectively. Next, CBS re-plans π_i∈N_i.Π and π_j∈N_j.Π with the constraints c_i and c_j and the other constraints inherited from ancestor nodes, so that the current conflict and previously detected conflicts are avoided. CBS searches the tree in a best-first fashion, prioritized by the sum-of-costs. CBS terminates when a feasible solution is found. A significant number of improvements to CBS have been proposed, such as adding high-level heuristics <cit.>, conflict prioritization <cit.>, allowing multiple constraints per split <cit.>, disjoint splitting <cit.> and conflict symmetry resolution <cit.>. Some enhancements were also proposed for MAPFR, such as kinodynamic constraints <cit.>, biclique constraints (BC) <cit.> (which will be described later) and CCBS itself <cit.>, which we describe next.

§.§ Safe Interval Path Planning

Safe Interval Path Planning (SIPP) <cit.> is an algorithm for planning for a single agent on a graph shared with moving obstacles. In the case of CCBS, the moving obstacles are other agents. The usage of SIPP with CCBS will be discussed next. SIPP uses an A*-based algorithm with a specialized successor-generation routine. During successor generation, SIPP computes the actions available to the agent at each graph vertex by removing actions which would result in conflicts and/or adding wait actions of a specific duration to avoid conflicts. SIPP does this by computing a set of safe intervals for each vertex, that is, time intervals in which an agent may occupy a vertex without conflicting with moving obstacles. In this way, SIPP guarantees conflict-free shortest paths.

§.§ Continuous-time Conflict-Based Search

CCBS <cit.> modifies CBS by allowing continuous-time actions. This is accomplished by adding functionality to CBS: * CCBS uses continuous-time-and-space collision detection. It also computes exact wait times instead of using fixed-duration wait actions <cit.>. * CCBS handles durative conflicts (where agents' shapes overlap for a period of time) by utilizing time-range constraints <cit.>. * CCBS uses SIPP <cit.> at the low level. In CCBS, SIPP is adapted to interpret time-range constraints <cit.> (i.e., unsafe intervals) as safe intervals. Safe intervals are used to generate exact-duration wait actions to avoid conflicts. We now describe CCBS in detail. An example for CCBS is illustrated in Figure <ref>. Figure <ref>(b) shows a problem instance on the simple planning graph in Figure <ref>(a) in which three agents exist, one at each vertex. Each of the agents needs to rotate one edge in the clockwise direction (or two edges in the counter-clockwise direction) in order to reach its goal. Assuming an agent radius of √(2)/4, actions A→B and C→A conflict if taken simultaneously. After CCBS detects the conflict, time-range constraints are constructed for the agents. Time-range constraints block agents from performing actions inside a given time range. This is done by computing the delay time necessary for the action C→A to avoid conflict with the action A→B (and vice-versa). This delay time is used to create an unsafe interval, the interval in which, if the action is taken, it will conflict. In this case, the unsafe interval for action C→A is [0,0.36).
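The 0.36 above can be reproduced with a collision test for circular agents plus a search over delays. The sketch below is deliberately simplified: it treats each action as a single constant-velocity motion, ignores what the agents do before and after the action, and assumes the conflict disappears monotonically as the delay grows so that bisection applies (the actual CCBS routine, algebraic or binary-search based, handles the general case).

```python
import numpy as np

R = np.sqrt(2) / 4  # agent radius used throughout the paper

def collide(p1, v1, s1, e1, p2, v2, s2, e2, r=R):
    """True if two constant-velocity discs of radius r overlap at any time.
    Agent k is at p_k + v_k * (t - s_k) for t in [s_k, e_k]."""
    lo, hi = max(s1, s2), min(e1, e2)
    if lo >= hi:
        return False
    dp = (p1 - v1 * s1) - (p2 - v2 * s2)   # relative offset at t = 0
    dv = v1 - v2
    a, b, c = dv @ dv, 2 * (dp @ dv), dp @ dp - (2 * r) ** 2
    if a == 0:
        return c < 0
    disc = b * b - 4 * a * c
    if disc <= 0:                          # paths never come within 2r
        return False
    t1, t2 = (-b - disc ** 0.5) / (2 * a), (-b + disc ** 0.5) / (2 * a)
    return t1 < hi and t2 > lo             # overlap intersects the window

def unsafe_delay(pA, pB, pC, T=1.0, eps=1e-6):
    """Bisection for the smallest delay of action C->A avoiding conflict
    with action A->B started at t = 0 (cf. the 0.36 in the example)."""
    vx, vz = (pB - pA) / T, (pA - pC) / T
    lo, hi = 0.0, T    # assume one full action duration always suffices
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if collide(pA, vx, 0.0, T, pC, vz, mid, mid + T):
            lo = mid
        else:
            hi = mid
    return hi          # the unsafe interval is approximately [0, hi)
```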
Hence, if action C→A is delayed by 0.36, it can be executed without conflicting with action A→B. After CCBS constructs the time-range constraint for agent z, the low-level SIPP solver is called. CCBS re-interprets the unsafe interval [0,0.36) as the safe interval [0.36,∞) for SIPP. Thus, when SIPP is called for agent z, it will be forced to wait 0.36 time units before executing action C→A, and the conflict with the action A→B is avoided. The wait action is shown by the self-loop on agent z in the solution, Figure <ref>(c). This problem instance is unsolvable without a non-unit-cost wait action. If, for example, agent z were to wait one full time step, it would cause agent y to wait one full time step as well, causing it to conflict with agent x. This is one reason why arbitrary-duration wait actions are important for MAPFR. In addition, lower-cost solutions are possible with arbitrary-duration wait actions, since agents are allowed to wait for fractional times instead of whole time steps. Several enhancements have been proposed for CCBS. The CCBS authors introduced a high-level heuristic based on the max-weight independent set problem. It was formerly formulated as an integer linear program (ILP) for classic MAPF <cit.>, but reformulated for continuous time as a linear program (LP) <cit.>. Finally, a special formulation of disjoint splitting <cit.> was added to CCBS. Disjoint splitting is now explained in further detail.

§.§ Disjoint Splitting

Disjoint splitting (DS) <cit.> is a technique for CBS which helps avoid resolving the same conflict multiple times in different sub-trees of the CT. The procedure for DS is as follows: when performing a split, one child node uses a negative constraint, defined as a tuple ⟨i,v,t⟩, which causes the agent i to avoid the vertex v at time t. The other child node uses a positive constraint, ⟨i,v,t⟩, which forces the agent i to pass through the vertex v at time t. A positive constraint for agent i also acts as a negative constraint for all other agents, so that they avoid conflicting with agent i. Positive constraints are enforced at the low level by adding time-specific sub-goals or landmarks to the search. The constraints in both child nodes are for the same agent, but there is a choice of which conflicting agent to split on. It was shown that disjoint splitting helps CBS do less work in general <cit.>. The implementation of DS for CCBS differs from that for CBS in two ways <cit.>: (1) Constraints for agent i in CCBS include a time range ⟨i,v,t_0,t_2⟩ <cit.>. Since arrival at a landmark location is allowed at multiple times, special logic is required to determine which exact time is optimal and feasible with respect to all other constraints. (2) Unlike DS for classic MAPF, positive constraints do not act as a negative constraint for all other agents. Instead, a single negative constraint is added for agent j to help it avoid the landmark location for agent i. This second difference is a limitation that must be addressed in CCBS using bipartite analysis, covered later in this paper. Despite this weakness, DS yields consistent improvements in runtime performance versus the original splitting method <cit.>.
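A schematic of the CCBS-style DS split is given below. The constraint classes and node interface are illustrative placeholders, not the API of any public CCBS implementation.

```python
def disjoint_split(node, conflict):
    """One child receives a negative time-range constraint, the other a
    positive (landmark) one, both on the same agent i. Per point (2)
    above, only a single extra negative constraint is added for agent j."""
    i, j = conflict.agent_i, conflict.agent_j
    neg_i = NegativeConstraint(i, conflict.action_i, conflict.t0, conflict.t2)
    pos_i = PositiveConstraint(i, conflict.action_i, conflict.t0, conflict.t2)
    neg_j = NegativeConstraint(j, conflict.action_j, conflict.t0, conflict.t2)
    child_a = node.extend(constraints=[neg_i])          # i avoids the action
    child_b = node.extend(constraints=[pos_i, neg_j])   # i commits, j yields
    return child_a, child_b
```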
For BC to work, CCBS must be combined with multi-constraint CBS (MCBS) <cit.>, which allows each agent to receive sets of one or more new constraints, C_i and C_j, respectively. Completeness is ensured only when the actions blocked by C_i are mutually conflicting with all actions blocked by C_j. This is known as the mutually disjunctive property <cit.> of constraint sets. Constraint sets are valid iff no solution exists in which agents i and j simultaneously violate any constraints c_i∈C_i and c_j∈C_j, respectively. One approach to discovering valid sets of constraints for MAPFR settings involves analysis of bipartite conflict graphs <cit.>. See Figure <ref>. Figure <ref>(a) shows an example problem where two agents must cross paths. Figure <ref>(b) shows an enumeration of all actions available to two agents at overlapping timeframes (wait actions omitted). Figure <ref>(d) shows the bipartite conflict graph for the enumerated actions.

A bipartite conflict graph (BCG) <cit.> is constructed by creating two sets of nodes for the actions available to the two agents during overlapping timeframes. The nodes for one agent are arranged on the left, and the nodes for the other agent are arranged on the right. Then edges are added between nodes for pairs of actions that conflict. The graph is bipartite because no node on the left is connected to any other node on the left (and similarly for nodes on the right), but nodes on the left may be connected to nodes on the right. In order to choose a mutually disjunctive set of actions to use as constraints, the nodes chosen must form a bipartite clique, or biclique, in the BCG. That is, every node chosen for the set on the left must be connected to every node chosen for the set on the right of the BCG. A biclique is shown with thick lines in Figure <ref>(d), where the sets of nodes in the biclique are {2,3} and {4,5}, respectively. In practice, one can find a max-vertex biclique (a biclique with a maximal number of vertices) in polynomial time <cit.>.

Biclique nodes in a BCG can be annotated with unsafe intervals for SIPP. This is done by computing the unsafe times (i.e., the amount of time one agent should wait to avoid conflict) for each neighboring node in the biclique, as shown for action 2 in Figure <ref>(c). Unsafe interval computation can be done using a binary search approach <cit.> or, for circular agents, using an algebraic approach <cit.>. In this example, action 2 cannot be performed in the time range [0,0.58) in order to avoid conflict with action 4, and [0,0.71) to avoid conflict with action 5. The intersection of those intervals is used to annotate the action in the biclique, as shown for action 2 in Figure <ref>(d). Avoiding execution of the action in the intersected unsafe interval ensures the mutually-conflicting property <cit.>.

In the example, blocking action 2 in the intersected interval ([0,0.58)∩[0,0.71)=[0,0.58)) ensures that the interval for which action 2 is blocked conflicts with all other actions in the biclique, namely actions 4 and 5. If the interval [0,0.71) were erroneously chosen instead, action 2 would be blocked for part of the timeframe ([0.58,0.71)) in which it does not conflict with action 4, potentially blocking a feasible and/or optimal solution that uses action 4 in that timeframe. This technique is necessary for creating valid sets of time-range constraints required by the SIPP routine of CCBS.

§ NEW ENHANCEMENTS
We now discuss new enhancements and their implementation.
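Since several of the enhancements below build directly on the biclique machinery just reviewed, we first give a small Python sketch of it (an illustration under our own naming; the conflict test is left abstract, and only the bookkeeping mirrors the construction described above):

```python
from itertools import product

def build_bcg(actions_i, actions_j, unsafe_interval):
    """Bipartite conflict graph: nodes are the two agents' actions during
    overlapping timeframes; an edge (a, b) exists iff the actions conflict
    and is annotated with the unsafe interval of a with respect to b.
    `unsafe_interval(a, b)` returns (lo, hi) or None if no conflict."""
    edges = {}
    for a, b in product(actions_i, actions_j):
        iv = unsafe_interval(a, b)
        if iv is not None:
            edges[(a, b)] = iv
    return edges

def annotate_with_intersection(biclique_left, biclique_right, edges):
    """For each left-side action of a biclique, intersect its unsafe
    intervals toward every right-side neighbour, as for action 2 in the
    running example: [0,0.58) ∩ [0,0.71) = [0,0.58). Requires that every
    left/right pair is an edge, i.e. the node sets form a biclique."""
    annot = {}
    for a in biclique_left:
        lo = max(edges[(a, b)][0] for b in biclique_right)
        hi = min(edges[(a, b)][1] for b in biclique_right)
        assert lo < hi, "biclique edges must overlap in time"
        annot[a] = (lo, hi)
    return annot
```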
§.§ Bypass
The bypass enhancement (BP) <cit.> is a symmetry-breaking technique used to avoid some splits in the CT. BP was never implemented for CCBS, and to our knowledge, no study has been performed to determine its effectiveness there. Our intent in including BP in this paper is to experiment with its effectiveness in continuous-time domains. Our findings show that it is very effective in problem instances with similarities to “classic” MAPF, but much less effective in certain continuous-time settings. These details will be discussed in the empirical results section.

The implementation of BP for CCBS is straightforward and follows the original formulation, with some adaptations for continuous time. BP (for both CBS and CCBS) inspects the paths of the two agents involved in a conflict. If a new path is available for one of the conflicting agents such that (1) the new path does not have an increased cost, (2) the new path respects the new constraints required to avoid the conflict that caused the split, and (3) the new path has fewer conflicts with all agents than the respective path in the parent node, this new path is called a bypass. If a bypass is found, child nodes are not generated; instead, the current node is updated with the bypass path for one of the agents and re-inserted into the OPEN list. This enhancement improves performance by avoiding splits in the tree which would otherwise result in two new sub-trees.

§.§ New Biclique Constraints for CCBS
In this paper, we test BC with CCBS for the first time. Although the use of max-vertex bicliques for BC was shown to be very effective in continuous-time domains with fixed wait actions <cit.>, when applied to CCBS, which computes arbitrary wait actions, we found that using max-vertex bicliques to generate biclique constraints was usually detrimental to performance. As noted earlier, taking the intersection of the unsafe intervals of adjacent edges usually causes the interval to be shortened. Because of this shortened interval, the resulting safe intervals used with SIPP will not cause wait actions to be generated with a long enough duration to avoid conflict. This can cause the conflict between the two actions that caused the split (which we will call the core action pair) to recur at a slightly later time, resulting in another split in the sub-tree for the same two actions.

In order to remedy this, we compute the biclique so that it only includes pairs of actions whose unsafe interval is a superset of that of the core action pair. For example, if the core action pair were actions 2 and 4, the biclique would include both actions 4 and 5, because the unsafe interval toward action 5 is a superset: [0,0.71)⊇[0,0.58). On the other hand, if the core action pair were actions 2 and 5, action 4 could not be included in the set. In this way, the correct wait time is generated by SIPP and the same conflict is always avoided in the sub-tree.
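A minimal sketch of this superset filter, reusing the `edges` annotation from the previous sketch (the names and the final connectivity check are our own assumptions, not the authors' code):

```python
def superset_biclique(core, edges):
    """Keep only action pairs whose unsafe interval contains (is a
    superset of) the unsafe interval of the core action pair, so that the
    intersected interval is never shorter than the core's interval and
    SIPP generates a wait long enough to clear the core conflict.
    `core` is the (left_action, right_action) pair that caused the split."""
    core_lo, core_hi = edges[core]
    left, right = {core[0]}, {core[1]}
    for (a, b), (lo, hi) in edges.items():
        if lo <= core_lo and hi >= core_hi:   # [lo,hi) ⊇ [core_lo,core_hi)
            left.add(a)
            right.add(b)
    # A further pass would verify full bipartite connectivity among the
    # selected nodes; we omit it here for brevity.
    return left, right
```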
This approach of computing an interval-superset biclique often results in a smaller biclique, and thus a smaller set of biclique constraints, than the max-vertex biclique approach; nevertheless, it yields performance gains versus CCBS with the original splitting method in many settings.

Biclique constraints in CBS are not usually applicable to “classic” planar graphs (assuming agents' size is sufficiently small), since agents would only conflict with one other action at a time (e.g., agents crossing the same edge in opposite directions), resulting in a 1x1 biclique which is equivalent to classic constraints. However, with CCBS, biclique constraints are useful in planar graphs, since multiple wait actions are possible for the same agent at a vertex at a specific point in time, which makes multiple conflicts possible.

§.§ Disjoint Splitting with Biclique Constraints
Recall that with disjoint splitting, one CT child node contains a positive constraint for agent i, and the other contains a negative constraint for agent i. Additionally (specifically in the formulation for CCBS), a single negative constraint for agent j is added to the node with the positive constraint. Doing this alone is quite effective <cit.>. However, it is still possible for a conflict with the positively constrained action to recur in the sub-tree of the CT. We propose that when performing a disjoint split between agent i and agent j, along with the positive constraint for agent i, a set of negative constraints be added for agent j to the same CT node, so that while agent i is forced to take an action, agent j is forced to avoid all actions which conflict with it.

To comprehensively enforce that agent j avoids the positively constrained action for agent i (or vice versa), we perform bipartite analysis. The procedure is outlined below:
* Construct a BCG for a core action pair ⟨ a_i, a_j⟩ as follows:
  * Create vertices for all of agent j's actions that conflict with a_i on the left.
  * Create vertices for all of agent i's actions that conflict with a_j on the right.
  * Add an edge from vertex a_i to all vertices on the right.
  * Add an edge from vertex a_j to all vertices on the left.
* For each edge, compute the unsafe interval and annotate the edge with it.
* Select either agent i or agent j to receive a positive constraint.
* Select all vertices connected to the constrained agent's core action vertex to create negative constraints.
* Perform interval intersection for the positive constraint only, based on edges adjacent to the constrained agent's core action vertex.

This procedure for computing time-annotated biclique constraints is simpler than the one outlined in the previous section because it is not necessary to test for edges between all pairs of vertices in the BCG; therefore, the interval intersection and shortening steps are not needed for the negative constraints. This is because the negative constraints must avoid the positive constraint's action for the entire duration. In step (3), our implementation chooses the negatively constrained agent to be the one with the most nodes. However, alternative approaches could be taken, such as choosing the set with the largest cumulative unsafe interval, the largest mean unsafe interval, etc. Step (5) is necessary to obtain the proper unsafe interval for SIPP. (A code sketch of this constraint generation is given after the figure walkthrough below.)

The placement of these constraints is illustrated in Figure <ref>. Node A is a node in the CT with a conflict between actions 1 and 5 shown in bold; this is the core action pair. Nodes B and C are child nodes.
Node B shows the positive constraint for the red agent in bold for action 1, with negative constraints for the blue agent's conflicting actions shown with `x's. The other actions for the red agent in node B are dashed, meaning that they are no longer reachable because of the positive constraint. Finally, node C shows a single negative constraint that mirrors the positive constraint in node B.

In summary, while regular disjoint splitting would only add a positive constraint for agent i with a single negative constraint for agent j to child node 1, and a negative constraint for agent i to child node 2, disjoint splitting with bicliques, or disjoint bicliques (DB), adds multiple negative constraints for agent j to child node 1 as well. This approach effectively eliminates further conflicts between agent i and agent j at the positively constrained action in the CT sub-tree. The effect of adding the extra constraints for agent j is that agent j avoids the positively constrained action for agent i from multiple paths, preemptively avoiding conflicts further down in the CT. Ultimately, a potentially exponential number of child nodes are pruned from the CT.

We now show that this approach is complete (if a solution to the problem instance exists) and that it also ensures optimality.

Lemma. Using biclique constraints with disjoint splitting (disjoint bicliques) never blocks a feasible solution from being found by CCBS.

Proof. Let N be a CT node. Let a_i and a_j be a core action pair: a conflicting pair of actions from π_i∈ N.Π and π_j∈ N.Π, respectively. Let N̄_i be a child of N which contains a single negative constraint blocking action a_i. Let N̂_i be a second child of N with a single positive constraint forcing agent i to perform action a_i, and multiple negative constraints C̄_j blocking agent j from conflicting with a_i. Let Π^* be the only feasible solution to the MAPF problem instance. There are three possible cases:
* Π^* contains a_i
* Π^* contains a_j
* Π^* contains neither a_i nor a_j

If case 1 is true, Π^* is guaranteed to be found in the sub-tree of N̂_i because a_i is enforced by the positive constraint. If case 2 is true, Π^* is guaranteed to be found in the sub-tree of N̄_i because a_j is not blocked. If case 3 is true, Π^* is guaranteed to be found in the sub-tree of N̄_i because a_i is blocked and a_j is not enforced. Because the BCG only includes actions which conflict with a_i, it is not possible to block any action that does not conflict with a_i. Since in case 1, Π^* cannot contain any actions that conflict with a_i, blocking all actions for agent j which conflict with a_i cannot preclude Π^*. Thus case 1 still holds with disjoint bicliques.

Theorem. CCBS with disjoint bicliques ensures optimality.

Proof. Per Lemma <ref>, using disjoint bicliques can never preclude CCBS from finding a solution (if any exist). Assuming the OPEN list of CCBS is ordered by lowest cost, CCBS is guaranteed to find a lowest-cost feasible solution first before terminating.

§.§ Disjoint K-Partite Cliques
So far, we have discussed the approach for combining bicliques with disjoint splitting to resolve a single conflict. It is often the case that multiple agents conflict with the positively constrained action. It is possible (and helpful) to additionally constrain these agents using negative constraints. This is done by performing step (1) outlined for DB for all agents that have a conflict with the positively constrained action, forming two k-partite conflict graphs (KCGs), one for agent i and another for agent j.
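As a sketch of the constraint-generation step for disjoint bicliques (all names are illustrative assumptions, not the authors' implementation), the following function builds the two child nodes of a split. Iterating `conflicts_with` over all agents, as described next for DK, yields the k-partite variant:

```python
def disjoint_split_children(core_i, core_j, core_interval, conflicts_with):
    """Build the two child CT nodes of a disjoint split on the core action
    pair (core_i for agent i, core_j for agent j) with unsafe interval
    `core_interval`. `conflicts_with(action)` yields triples
    (agent, action, unsafe_interval) over other agents' conflicting
    actions, e.g. looked up via a conflict count table."""
    # Child 1: positive constraint on core_i, plus negative constraints
    # blocking every conflicting action of every other agent for its full
    # unsafe interval (no intersection/shortening needed for negatives).
    child_pos = [('pos', core_i)] + [
        ('neg', agent, act, interval)
        for agent, act, interval in conflicts_with(core_i)
    ]
    # Child 2: a single negative constraint that mirrors the positive one.
    i_agent = core_i[0]  # assume actions are encoded as (agent, move) pairs
    child_neg = [('neg', i_agent, core_i, core_interval)]
    return child_pos, child_neg
```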
This construction can be described more simply as enumerating all actions by all agents which conflict with a_i and a_j, respectively, and computing the unsafe intervals. All other steps are straightforward. In step (3), our implementation chooses the agent with the largest KCG to receive the positive constraint.

In order to avoid performing extra conflict checks to discover other agents that conflict with the positively constrained action, we make use of a conflict count table (CCT) <cit.>, which is adapted for MAPFR to perform bookkeeping on all conflicts. The CCT is set up so that looking up conflicts in the table is indexed on a per-agent, per-action basis to expedite KCG creation. In the vast majority of cases, the number of partitions in the KCGs is much smaller than the number of agents.

We show that DK is correct by a simple extension of Lemma <ref>.

Lemma. The use of disjoint k-partite cliques with disjoint splitting never blocks a feasible solution in CCBS.

Proof. Following the definitions from Lemma <ref>, DK now adds negative constraints for multiple agents to N̂_i. Since the constraints in N̄_i are unchanged, cases 2 and 3 still hold. Case 1 still holds because Π^* cannot contain any action by any agent which conflicts with a_i; therefore, just as the disjoint bicliques procedure ensures that only actions which conflict with a_i are blocked for agent j, DK ensures that only actions which conflict with a_i are blocked for all agents j≠ i (or for a subset of those agents). Thus, DK can never block a feasible solution in any of the three cases.

Finally, substituting Lemma <ref> into Theorem <ref>, we see that DK is also optimal. In summary, we now have a powerful capability for multi-agent, symmetry-breaking constraints, capable of eliminating even more nodes in the CT.

§ EMPIRICAL RESULTS
We now analyze the enhancements described in the previous section: bypass (BP), which is applied to CCBS for the first time in this paper; biclique constraints (BC), which are newly formulated in this paper; and disjoint k-partite cliques (DK), which are new. All tests in this section were performed single-threaded, on cloud compute instances that report an Intel Xeon 2.5GHz processor. In addition to the three roadmaps (“sparse”, “dense” and “super-dense”) and the four grid maps focused on by the original CCBS authors, we test our enhancements on all 44 of the MAPF grid benchmarks <cit.> with 2^k neighborhood <cit.> connectivities, namely 4-, 8-, 16- and 32-neighborhoods. Agents are circular, with a radius of √(2)/4. All tests were run by starting with 5 agents and incrementing the number of agents by 2 until the problem instance became unsolvable in under 30 seconds. We thank the original CCBS authors for making their code publicly available. Our implementation is based on theirs and is also freely available[https://github.com/thaynewalker/CCBS]. We updated some of the memory management, which made it up to 60% faster than the original. All of our results compare against our enhanced code. For this reason, some of our baseline results differ from previously published ones.

We will describe each of our experiments in turn and then discuss them all together. Figure <ref> shows the success rates, i.e., the percentage of problem instances solvable in under 30 seconds for increasing numbers of agents. The rates are computed over 25 problem instances for each map. All plots, except for the super-dense roadmap, are on 32-neighbor grids.
We remind the reader that results are optimal in terms of cost (i.e., shortest non-conflicting paths), and that adding a single agent to a problem instance represents an exponential increase in the problem instance's computational complexity (it is exponential in the number of agents <cit.>). Hence, even a small gain (e.g., a few percentage points) can mean that a multiplicative reduction in work is actually realized.

Tables <ref>, <ref> and <ref> show the sum total of the maximum number of agents solvable over 25 problem instances in under 30 seconds per map. The best result is underlined, and results within the 95th percentile of the best are shown in bold. The label “CCBS” in Figure <ref> denotes CCBS with all enhancements from the original authors except DS. The label “Base” denotes CCBS with all enhancements, including DS, the previous state of the art. In all of the tables, the column labeled “Base” is also the previous state of the art.

Table <ref> shows totals for all grid maps for 4-neighbor grids, totals for each class of maps, and an overall total. 4-neighbor grids in this context are similar to those for “classic” MAPF, except that instead of fixed wait actions, agents may wait an arbitrary amount of time. Table <ref> shows aggregate group results for the same groupings as Table <ref>, and the grand total for all remaining connectivity settings, namely 8-, 16- and 32-connected grids. Table <ref> shows the results for all settings on probabilistic roadmaps <cit.>: “sparse” contains 158 nodes and 349 edges with a mean vertex degree of 4.2, “dense” contains 878 nodes and 7,341 edges with a mean degree of 16.7, and “super-dense” contains 11,342 vertices and 263,533 edges with a mean degree of 100.4.

§.§ Discussion
In Figure <ref>, the success rate of the BP enhancement (teal circle) is not significantly better or worse than that of CCBS alone (brown square). It never performs worse than CCBS in any of the 32-neighbor maps. On the other hand, in Table <ref>, which is on 4-neighbor maps, we see that adding BP to Base (the previous state of the art) yields statistically significant gains in nearly every map. Table 2 shows a similar trend for all 8-neighbor grids, but as the connectivity is increased to 16- and 32-neighbor grids, the improvement from BP becomes less significant. The cost symmetries in these settings decrease as the connectivity increases. Ultimately, in the roadmaps, which have no cost symmetries, we see no improvement with BP, nor any significant decline in performance. From this we learn that (1) BP offers significant performance improvements when there are many cost symmetries in the graph, and the performance benefits are directly proportional to the amount of cost symmetry; (2) the cost of BP is not significant, even with no symmetries in the map; and (3) BP is complementary to DS and BC.

The trend in Figure <ref> shows that BC provides a consistent improvement over CCBS in 32-neighbor grids. It is complementary to DS, as evidenced by the fact that BP+DK performs better than both BP+DS and BP+BC. The trend in all tables shows that the performance improvement of DK is correlated with the mean vertex degree of the graph. The amount of improvement is especially significant in the dense and super-dense roadmaps, where BP offers no benefit. In these roadmaps, the branching factor is high, with many edge crossings, a situation which is conducive to large bicliques. Still, the cost of DK is not significant in planar graphs.
From this we learn that (1) DK offers significant performance improvements when many edges cross in the graph, and (2) BP and DK are complementary, as evidenced by BP being stronger in settings with many cost symmetries and DK being stronger in settings with few cost symmetries.

With CCBS, because an agent may have multiple different wait actions at a location, BCGs larger than 1x1 are possible; thus a small benefit over Base is shown with BC in Table <ref>. Because the BP enhancement is never significantly detrimental to performance, and provides a benefit even when there are few cost symmetries, it can be used generally. The DK enhancement tends to work best in many cases where BP does not, and it also has no significant execution cost, hence it can also be used generally. Finally, combining BP with DK consistently beats the state of the art by statistically significant margins. Compared to the previous state of the art, our enhancements allow solutions for up to 10% more agents in grid maps and up to 20% more agents in super-dense roadmaps.

§ CONCLUSION AND FUTURE WORK
We have formulated novel enhancements for CCBS and tested their performance on roadmaps and on the full set of MAPF benchmarks under various connectivity settings. We found that disjoint splitting, bypass and biclique constraints are complementary, and that using them together yields a statistically significant improvement over the state of the art across all MAPF benchmarks and from sparse to super-dense graphs. Bypassing is most effective in graphs with cost symmetries, and biclique constraints are most effective in graphs with few cost symmetries and densely crossing edges.

§ ACKNOWLEDGMENTS
The research at the University of Denver was supported by the National Science Foundation (NSF) grant number 1815660 and Lockheed Martin Corp. Research at the University of Alberta was funded by the Canada CIFAR AI Chairs Program. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). Research at Ben Gurion University was supported by BSF grant number 2017692.
http://arxiv.org/abs/2312.16106v1
{ "authors": [ "Thayne T. Walker", "Nathan R. Sturtevant", "Ariel Felner" ], "categories": [ "cs.AI", "cs.MA", "cs.RO" ], "primary_category": "cs.AI", "published": "20231226162115", "title": "Clique Analysis and Bypassing in Continuous-Time Conflict-Based Search" }
Anqi Li ([email protected]); Congying Han ([email protected], corresponding author); Tiande Guo ([email protected]); Haoran Li ([email protected]); Bonan Li ([email protected]). School of Mathematical Sciences, University of Chinese Academy of Sciences, Shijingshan District, Beijing, China.

Existing methods provide varying algorithms for different types of Boolean satisfiability problems (SAT), lacking a general solution framework. Accordingly, this study proposes a unified framework, DCSAT, based on integer programming and a reinforcement learning (RL) algorithm to solve different types of SAT problems such as MaxSAT, Weighted MaxSAT, PMS and WPMS. Specifically, we first construct a consolidated integer programming representation for the four types of SAT problems by adjusting the objective function coefficients. Secondly, we construct an appropriate reinforcement learning model based on the 0-1 integer programming formulation of SAT problems. Based on the binary tree search structure, we apply the Monte Carlo tree search (MCTS) method to SAT problems. Finally, we prove that this method can find all optimal Boolean assignments based on the Wiener-Khinchin law of large numbers. We experimentally verify that this paradigm can prune unnecessary search space to find the optimal Boolean assignments for the problem. Furthermore, the proposed method can provide diverse labels for supervised learning methods for SAT problems.

Keywords: Boolean satisfiability problems; reinforcement learning

§ INTRODUCTION
The Boolean satisfiability problem (SAT) is a very classical combinatorial optimization problem. There are many variants depending on the number of variables and the weights of clauses, such as MaxSAT, Weighted MaxSAT, Partial MaxSAT (PMS) and Weighted Partial MaxSAT (WPMS). In recent years, many scholars have devoted themselves to this problem <cit.>. However, most of the methods are applicable only to a specific type of problem; a general and efficient unified solution framework is still lacking.

Motivated by these observations, we propose to analyze and solve the Boolean satisfiability problem with Monte Carlo tree search (MCTS), further pushing forward the frontier of the Boolean satisfiability problem in a general way. This study focuses on four core aspects: (1) transforming the problem into a general binary integer programming framework, (2) proposing an appropriate reinforcement learning (RL) model, (3) giving a unified MCTS algorithm, and (4) providing thorough theory for optimality and complexity, as shown in Figure <ref>.

Constructing a unified representation for different SAT problems is the prerequisite for proposing a unified solution framework. Given that Boolean variables take true or false values, we naturally consider binary linear programming (BLP). First, we regard Boolean variables as binary variables. On this basis, we introduce a binary variable for each clause to measure whether the clause is satisfied. The optimization objective is to maximize the weighted sum of satisfied clauses. When a clause is not satisfied, the corresponding binary variable can only take 0; conversely, under the maximization goal, the binary variable corresponding to a satisfied clause can take 1. We generalize the weighted and partial SAT problems into a unified framework through different weights.

Subsequently, we transform the binary linear programming framework of the SAT problem into a reinforcement learning model.
Following the simplex tableaux structure of linear programming, we propose SAT tableaux as the state representation of our reinforcement learning model. In addition, we set the action space to all feasible Boolean variable assignments in the current state, and after each action selection, we immediately update the action space to the set of unselected actions. Because the optimal solution of the BLP problem is equivalent to the optimal Boolean assignment of the four classes of SAT problems, we set the reward as the objective function of the BLP. Furthermore, based on the unified BLP framework and RL models for different SAT problems, we naturally consider the application of reinforcement learning methods. Considering the binary tree structure of SAT problems, we adopt the Monte Carlo tree search method. We conduct exploratory estimation over all feasible action spaces and select the most appropriate assignment according to the evaluation results. In this way, we ensure the optimality of the Boolean variable assignment.

Consequently, we prove the optimality and completeness of the MCTS-based assignment method. The MCTS framework can acquire all optimal assignments according to the Wiener-Khinchin law of large numbers. In particular, the MCTS method can find the optimal assignment when the number of explorations becomes infinite. Additionally, it can find all the distinct optimal assignments when the number of executions approaches infinity.

Given the four aims above, we develop a highly efficient reinforcement learning framework which provides all optimal assignments for general SAT problems. Moreover, we can label massive numbers of instances with little overhead. Substantial experiments on the Max-SAT-2016 and SATLIB benchmarks demonstrate the effectiveness. Notably, our method can solve more instances than sat4j and other advanced methods. Our main contributions are summarized as follows:
* Construct a general framework for solving various SAT problems.
* Propose a method to determine all the optimal Boolean assignments.
* Give comprehensive theory for the optimality and complexity of the MCTS-based assignment method.

§ RELATED WORK
SAT Solving Based on Machine Learning. In recent years, machine learning methods have been widely used in combinatorial optimization problems <cit.>, such as knapsack <cit.>, TSP <cit.>, and the P-median problem <cit.>. As the first problem proven NP-complete, the SAT problem has also aroused the interest of many researchers <cit.>. Some studies have proposed using neural networks to approximate optimal heuristics in SAT solvers <cit.>. However, the performance is easily limited by the heuristic algorithm, and it is difficult to ensure optimality <cit.>. Selsam et al. <cit.> proposed NeuroSAT, which transformed SAT solving into a binary classification problem. Based on this, Selsam et al. <cit.> proposed NeuroCore, which uses the same architecture to guide variable branching in high-performance solvers (e.g., MiniSAT, Glucose). Amizadeh et al. <cit.> considered solving the Circuit-SAT problem. Kurin et al. <cit.> applied DQN to the SAT problem and trained a GNN model to predict the branching heuristic in the MiniSAT solver. Similarly, a reinforcement learning framework has been used to train GNN models to learn a local-search heuristic <cit.>. Vaezipor et al. <cit.> improved #SAT solving based on the Evolution Strategy. However, these methods are often only applicable to a specific type of SAT problem, and lack a general framework <cit.>.
Combinatorial Optimization Methods Based on MCTS. Since the appearance of the AlphaGo series <cit.>, reinforcement learning represented by MCTS has been widely used in many scenarios <cit.>. One perspective is to design a unified framework applicable to different kinds of combinatorial optimization problems <cit.>. In addition, one can also design suitable algorithms for specific combinatorial optimization problems, such as the traveling salesman problem <cit.> and the Boolean satisfiability problem <cit.>. In this paper, we adopt the latter idea to study the SAT problem.

§ NEW MATHEMATICAL FORMULATION FOR FOUR TYPES OF SAT PROBLEMS
§.§ Definition and Classification
In Boolean satisfiability problems, Boolean expressions consist of conjunctions of clauses, and clauses are composed of disjunctions of literals. Given a set of Boolean variables {x_1,x_2,...,x_n}, a literal is either a Boolean variable itself {x_i} or its negation {¬ x_i}, where n represents the number of Boolean variables. A clause consists of a disjunction of literals, i.e., c_j=l_j1∨ l_j2∨...∨ l_jn_j, where n_j denotes the number of literals in clause c_j. A Conjunctive Normal Form (CNF) formula ℱ is the conjunction of several clauses, i.e., ℱ=c_1∧ c_2∧...∧ c_m, where m represents the number of clauses. A truth assignment assigns a True or False value to each variable in a Boolean expression. A clause is satisfied when at least one literal in the clause is true. A CNF formula is satisfied if and only if all of its clauses are true.

Given a CNF formula, SAT is a decision problem: determine whether there exists an assignment that satisfies ℱ. MaxSAT <cit.> is an extension of SAT that aims to find an assignment that satisfies the maximum number of clauses. Given a weight for each clause, the optimization goal of Weighted MaxSAT is to maximize the total weight of satisfied clauses. Another way to extend SAT is to divide the clauses into hard and soft: a hard clause is mandatory, while a soft clause is not. PMS aims to satisfy as many soft clauses as possible. WPMS is a generalization of PMS that assigns a non-negative weight to each soft clause; WPMS aims to maximize the total weight of satisfied soft clauses. In the following, we propose a general framework that applies to the four types of problems described above.

§.§ Description in BLP
The four SAT problems mentioned above can be reduced to the form of binary integer linear programming. Each Boolean variable in {x_1,x_2,...,x_n} corresponds to a binary variable in {y_1,y_2,...,y_n}, respectively. In the same way, each clause in {c_1,c_2,...,c_m} corresponds to a binary variable in {z_1,z_2,...,z_m}. For each literal, the variable x_i corresponds to y_i, and its negation ¬x_i corresponds to 1-y_i. A clause consists of a disjunction of literals, which corresponds to the sum of the transformed binary variables. A clause is satisfied when at least one of its corresponding literals is true, and vice versa, because the binary variable corresponding to a clause can be true only if at least one literal is true (i.e., the clause is satisfied), as Figure <ref> shows. The objective function is the weighted sum of satisfied clauses. Considering that the wcnf file format requires the weight of a hard clause to always be greater than the sum of the weights of violated soft clauses in a solution, we set the following objective coefficients for the above four kinds of problems. (1) The weights of MaxSAT clauses are set to 1.
(2) Weights of clauses are given by the input instance for the Weighted MaxSAT problem. (3) Weights of soft clauses are set to 1 and weights of hard clauses are given by the input instance for PMS problems. (4) Weights of soft clauses and hard clauses in WPMS are given by the input instance. This sets up a unified BLP framework for the four kinds of SAT problems while ensuring that the optimal solution of the problem remains unchanged. In addition, following the simplex tableaux of linear programming, we set up a similar structure for the BLP of the SAT problem, which we call the SAT tableaux. SAT tableaux can save the complete information of the solution process, and they are also the best representation of the solution state. The binary variables z correspond to whether each clause is satisfied. The binary variables y correspond to the assignment of each variable, with 1 representing True and 0 representing False. w represents the objective coefficients of the BLP. Each component of b is the sum of the constants in the constraints. A_y is the coefficient matrix of the binary variables {y_1,y_2,...,y_n} in the BLP. Note that b can be derived from the negations of the Boolean variables; at the same time, the satisfied clauses z can be deduced when variable assignments are substituted in. Therefore, the redundant information b and z is omitted when constructing the SAT tableaux.

§.§ RL Representation
We need to construct a unified reinforcement learning model for different types of SAT problems, such as MaxSAT, Weighted MaxSAT, PMS and WPMS. On the one hand, this ensures the generality of our method; on the other hand, it is the premise of using reinforcement learning methods. We follow the SAT tableaux proposed in the previous section for the state representation. The action space is the set of actions that can be performed in the next step. To ensure that the algorithm can find the optimal solution, we set the action space to the set of all possible actions to perform.

A_init={y_1=1,y_1=0,y_2=1,y_2=0,...,y_n=1,y_n=0}.

Each time an action y_i=1 or y_i=0 is performed, we update the current action space A. The updated action space is obtained by deleting from the current action space the actions corresponding to variables that have already been assigned. The action space becomes:

A=A-{y_i=0, y_i=1}.

Based on the BLP model we constructed for the four classes of SAT problems, finding the optimal solution to the SAT problem is equivalent to finding the optimal value of this BLP problem. Therefore, for the construction of the reward function, we adopt the most direct way: the reward is set to the objective value of the obtained solution. In this way, finding the optimal solution to these four classes of SAT problems is equivalent to maximizing the reward of the current reinforcement learning model. Our reward function takes the following form:

R=c^T y^*.

For other feasible reward function settings, we conduct experimental verification in Section 6.4 and find that the current reward is the best.

§ PROPOSED RL ALGORITHM
The MaxSAT problem is a sequential decision problem. Considering that the search space has a natural binary tree structure, we employ tree-search-based reinforcement learning methods. The Monte Carlo tree search algorithm evaluates the future reward of each assignment through prior exploration. Ultimately, we choose the action with the greatest expected reward, which guarantees the highest objective function value.
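To make the encoding and the RL interface concrete, here is a minimal Python sketch. The literal mapping (x_i ↦ y_i, ¬x_i ↦ 1−y_i, one indicator z_j per clause) follows the construction above; the class and function names are our own assumptions, not the paper's implementation:

```python
def clause_row(clause, n_vars):
    """Encode one clause as coefficients for sum(literals) >= z_j.
    `clause` is a list of signed ints (DIMACS style): +i for x_i, -i for
    ¬x_i. Returns (coefficients over y, constant term), since ¬x_i maps
    to 1 - y_i and contributes +1 to the constant."""
    coeffs, const = [0] * n_vars, 0
    for lit in clause:
        i = abs(lit) - 1
        if lit > 0:
            coeffs[i] += 1
        else:
            coeffs[i] -= 1
            const += 1
    return coeffs, const

class SATEnv:
    """State = partial assignment over y; reward = weighted sum of
    satisfied clauses c^T z, evaluated once all variables are assigned."""
    def __init__(self, clauses, weights, n_vars):
        self.rows = [clause_row(c, n_vars) for c in clauses]
        self.weights, self.n_vars = weights, n_vars
        self.assignment = {}

    def actions(self):
        # All feasible assignments of still-unassigned variables.
        return [(i, v) for i in range(self.n_vars)
                if i not in self.assignment for v in (0, 1)]

    def step(self, action):
        i, v = action
        self.assignment[i] = v

    def reward(self):
        assert len(self.assignment) == self.n_vars
        total = 0
        for w, (coeffs, const) in zip(self.weights, self.rows):
            value = const + sum(c * self.assignment[i]
                                for i, c in enumerate(coeffs) if c)
            total += w * (value >= 1)   # z_j = 1 iff the clause holds
        return total
```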
The entire process is shown in Figure <ref>, and the overall algorithm is shown in Algorithm <ref>.

§.§ MCTS-based Boolean Assignment Method
Construct. The premise of applying the MCTS method is to transform the problem into a reinforcement learning model. We follow the representation given in the previous section for extraction and model transformation. SAT tableaux are used as the state representation, based on the BLP representation of the problem. We use all possible assignments in the current state as our action space. In addition, the reward is set to the objective value of the current solution. Algorithm <ref> presents the pseudocode to transform cnf or wcnf files into state and action spaces.

Expand. In the expansion stage, we first need to identify the set U of all unassigned variables in the current state. To expand all nodes, we need to select every possible assignment A_cur of the unassigned variables once.

A_cur={ y_i=0, y_i=1 | i ∈ U }.

We use I to represent the set of all selected variables, starting with I=∅. After choosing an action a_i for an unassigned variable, we update I=I∪{i} and update the state to retrieve the unassigned variables. Moreover, we use a random strategy to generate episodes:

randomly select from { a_i∈ A_cur| i ∉ I }.

We present in Algorithm <ref> the process of updating the state s after taking an action a. We maintain the search process with a tree structure, as shown in Algorithm <ref>. Each branch corresponds to performing an action. After the action is performed, we generate a child node. The child node holds the current state and the action taken at the parent node; in other words, the current state is the new state obtained after the action is taken. Q and N represent the current state-action value function and the number of visits, respectively, which we will discuss in detail later. After performing an action, we need to randomly construct episodes to compute the reward. We compute the reward r_T of an episode using the definitions in the reinforcement learning model.

G_t=r_T

Starting from the current state-action pair, the way we calculate the reward is shown in Algorithm <ref>. It is worth noting that only leaf nodes correspond to feasible solutions, and that each variable assignment contributes equally to the optimal solution. In addition, the policy evaluation approach employs the empirical mean of rewards as the expectation.

Q_π(s,a)=𝔼_π [G_t | s_t=s,a_t=a], t∈{1,2,...,T},

where {1,2,...,T} denotes the subscripts of the current episode. The expanded node and its parent nodes are then retroactively updated with the reward of the entire episode. We add 1 to the count function N and increase the current state-action value function Q by the cumulative reward whenever we encounter state s in an episode and perform action a. The backtracking procedure based on the tree structure is implemented in Algorithm <ref>.

N(s,a) ← N(s,a)+1. Q(s,a) ← Q(s,a)+G_t.

Explore. The exploration stage follows the expansion stage. We need to estimate the future payoff of the current assignment by performing N_explore explorations of the previously expanded nodes. Inspired by the MCTS rule <cit.>, we use a soft max to relax the condition of the Upper Confidence bounds applied to Trees (UCT) algorithm <cit.>.
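Before turning to the exploration rule, the expansion, random rollout and backup steps just described can be sketched in Python (a minimal illustration building on the SATEnv sketch above; all names are our own assumptions):

```python
import random, copy

class Node:
    def __init__(self, state, action=None, parent=None):
        self.state, self.action, self.parent = state, action, parent
        self.children, self.Q, self.N = [], 0.0, 0

def rollout(env):
    """Finish the episode with the uniform random policy: assign every
    remaining variable at random, then return the terminal reward r_T."""
    while env.actions():
        env.step(random.choice(env.actions()))
    return env.reward()

def expand_and_backup(node):
    """Create one child per feasible assignment, run one random episode
    from each child, and propagate G_t = r_T back toward the root:
    N(s,a) += 1, Q(s,a) += G_t."""
    for a in node.state.actions():
        child_env = copy.deepcopy(node.state)
        child_env.step(a)
        child = Node(child_env, action=a, parent=node)
        node.children.append(child)
        g = rollout(copy.deepcopy(child_env))
        v = child
        while v is not None:        # backtrack along the tree
            v.N += 1
            v.Q += g
            v = v.parent
```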
First, we use the UCT algorithm to estimate the current state-action value function:

Q(v^')=v^'.Q/v^'.N+C√(2 ln v.N/v^'.N), v^'∈ children of v,

where v represents the current node, Q represents the value of the state-action function at the current node, and N represents the number of visits to the current node. Then we relax the maximization by controlling α, where α∈[0,1]. Actions are randomly selected for execution from the set of children whose value is at least the soft max threshold:

E_soft=(1-α)min_v^'∈ children of v Q(v^')+αmax_v^'∈ children of v Q(v^')

v_next=randomly select from {v^'|Q(v^')≥ E_soft, v^'∈ children of v }

This part is the execution process of Algorithm <ref> when tag=0. Subsequently, we continue to generate episodes in the random manner given by Formula (<ref>), and the rewards are calculated in the same way as in Algorithm <ref>, according to the previously defined reinforcement learning model. The existing tree structure information is updated according to Formulas (<ref>) and (<ref>), as Algorithm <ref> shows. When the number of explorations reaches N_explore, the exploration ends. The evaluation of each assignment under N_explore explorations is given by the state-action value function.

Exploit. In the exploitation phase, we select the optimal assignment based on the evaluation results of the exploration phase. The N_explore explorations have been completed, and the state-action value function effectively reflects the future reward after exploration. Assignments with large state-action value functions correspond to large rewards. Therefore, we choose the assignment as follows:

v^*=max_v^”∈ children of v Q(v^”)/N(v^”), a^*=max_a∈A Q(s,a).

The above process is the case of Algorithm <ref> when tag=1. We update the state after selecting the optimal assignment. This cycle repeats, returning to the expansion stage, until all variables are assigned.

§.§ Extracting Multiple Boolean Assignments
An optimal solution to the MaxSAT problem is a Boolean assignment that maximizes the objective value. The assignment of variables that maximizes the objective value is not necessarily unique. Considering the randomness of action selection in the exploration phase, the exploration actions remain random while we maximize the objective value. Intuitively, this MCTS-based approach can therefore find different optimal solutions if they exist. Subsequently, we analyze the convergence properties of our MCTS-based approach.

§ THEORETICAL ANALYSIS
In this section, we present a detailed theoretical analysis of our framework in terms of optimality and completeness. First, we prove that DCSAT can perform an optimal Boolean variable assignment at each step to find the optimal solution. We further prove that our method can find all the optimal assignments when the number of algorithm executions is sufficient.

§.§ Optimality of the DCSAT
We show that the DCSAT will find the optimal solution as the number of explorations approaches infinity. Considering that the expectation of the reward is affected by other episodes, we define the significance operator Sig based on convolutional pooling. We then provide the full proof details.

Definition (Rank Function). Given a probability space Ω and a sequence of random variables RS := { R_1, R_2, …, R_n }⊂Ω, RS_O := { R_(1), R_(2), …, R_(n)} is the order statistics sequence of RS, where R_(i) represents the i-th smallest random variable. Define the function Rank_RS: Ω⟶ [n], where [n] := { 1,2,…,n }, and Rank_RS(R_i) is the index of R_i in RS_O.
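As a quick concrete illustration of the rank function (a toy example of our own, with ties broken by original position):

```python
def rank(values):
    """Rank_RS: map each R_i to its 1-based position in the order
    statistics sequence RS_O."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

print(rank([0.7, 0.2, 0.9]))   # [2, 1, 3]
```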
Definition (Significance Operator). Given K groups of random variable sequences { R^k_1,R^k_2,...,R^k_n_k}, k ∈{ 1,2,...,K }, we define R̅^k and R_(n_k)^k as the mean statistic and the maximum statistic of the k-th sequence { R^k_i}_i=1^n_k, respectively:

R̅^k = 1/n_k∑_i=1^n_k R_i^k, R_(n_k)^k = max{ R^k_1,R^k_2,...,R^k_n_k}, k ∈{ 1,2,...,K }.

For the mean statistics sequence ES := {R̅^1,R̅^2,…, R̅^K } and the maximum statistics sequence MS := { R_(n_1)^1,R_(n_2)^2,…, R_(n_K)^K }, define the significance operator Sig: Ω⟶Ω as:

Sig(R̅^k)= [R̅^k, Rank_ES(R̅^k) = Rank_MS(R_(n_k)^k); R_(n_k)^k, Rank_ES(R̅^k) ≠Rank_MS(R_(n_k)^k). ]

Theorem. If we set α in Formula (<ref>) to zero, the DCSAT with significance operator Sig will converge to the optimal Boolean assignment as N_explore goes to infinity.

Proof. In our reinforcement learning model, the reward is defined as the objective value of the transformed BLP problem. Therefore, maximizing the reward is equivalent to finding the optimal Boolean assignment for the four types of SAT problems.

To avoid confusion, we assume that the depth of the root node is zero. Given a SAT problem, our model constructs a tree of depth n, where n is the number of decision variables. Without loss of generality, we consider a state node s whose depth is k. Note that k variables are assigned in state s, and denote the feasible action space in s as 𝒜_s:={ a_1, a_2,…, a_2(n-k)}, where each action corresponds to a feasible variable assignment. The child nodes of s are { s_1, s_2, …, s_2(n-k)}, whose depth is k+1, and the transition functions are ℙ(s_i | s, a_j) = 𝕀_{ i=j }, ∀ i,j∈{1,2,…,2(n-k)}, where 𝕀 is the indicator function. We denote N_explore as the number of explorations from s. The random variable N_i is the number of explorations from s to s_i, and ∑_i=1^2(n-k) N_i = N_explore. If we set α in Formula (<ref>) to zero, the set of selected actions is

{ i|Q(s,a_i) ≥ E_soft} = { i|Q(s,a_i) ≥min_a_j∈𝒜_s Q(s,a_j)} = 𝒜_s,

which indicates that all possible variable assignments can be selected, i.e., 𝔼_π[N_i] > 0, ∀ i∈{1,2,…,2(n-k)}. In fact, our algorithm takes the exploration action by a uniform policy from the set { i|Q(s,a_i) ≥ E_soft}, and then we have

𝔼_π[N_i]=∑_j=1^|𝒜_s|ℙ(s_i | s, a_j) π(a_j|s) N_explore = π(a_i|s) N_explore = N_explore/|{ i|Q(s,a_i) ≥ E_soft}| = (1/(2(n-k))) N_explore > 0, ∀ i∈{1,2,…,2(n-k)}.

Algorithm <ref> has two phases of exploration. In the first phase, every child node of s is visited once, and in the second phase, child nodes are explored according to the proposed DCSAT. We can conclude that as long as N_explore ≥ (1/ln(1 + 1/(2(n-k)-1))) ln(1/δ_1) + 2(n-k), every child node can be accessed in the second phase with probability at least 1-δ_1, i.e.,

ℙ( state s_i visited) = ℙ( N_i > 0 ) = 1 - ℙ( N_i = 0 ) = 1 - (1- 1/(2(n-k)))^(N_explore-2(n-k)) ≥ 1-δ_1.

We denote the random variable R_j^a_i as the reward of taking action a_i for the j-th time. Given 2(n-k) groups of independent, identically distributed random variable sequences { R_j^a_i}_j=1^N_i, i∈{1,2,…,2(n-k)}, we apply the significance operator Sig of Definition <ref> and obtain {Sig(R̅^a_i)}_i=1^2(n-k). Since the reward random variables { R_j^a_i}_j=1^N_i are independent and identically distributed, we can apply the Wiener-Khinchin Law of Large Numbers:

lim_N_explore→∞R̅^a_i = lim_N_i→∞R̅^a_i = lim_N_i→∞ (1/N_i)∑_j=1^N_i R^a_i_j = 𝔼[ R^a_i_j ], a.s.

Note that Q(s,a_i) is the same as R̅^a_i with respect to the definition of Q.
Note that our tree model for SAT problems has three special structures: 1) the depth of each leaf node is the same; 2) the cardinality of the action space of nodes of the same depth is the same; 3) the cardinality of the action space of a node is an arithmetic progression with respect to depth. For any feasible assignment AST = { y_1, y_2, …, y_n }, we can derive that as long as N_explore ≥ (1/log(1+1/(2^n-1))) log(1/ϵ), the assignment AST can be found starting from the root node with probability at least 1-ϵ, i.e.,

ℙ( find assignment AST | start from root node s_0)
= 1- ℙ( not find assignment AST | s_0)
= 1- ℙ(⋂_j=1^N_explore{ not find AST at the j-th exploration } | s_0)
= 1- ∏_j=1^N_explore ℙ( not find AST at the j-th exploration | s_0)
= 1- ∏_j=1^N_explore ( 1 - ℙ( find AST at the j-th exploration | s_0))
= 1- ∏_j=1^N_explore ( 1 - n!/∏_k=0^n-1 2(n-k))
= 1- ( 1 - 1/2^n)^N_explore ≥ 1-ϵ,

where the third equality follows from the independence of each exploration and the fifth equality comes from the special structures of our tree model and our uniform exploration strategy.

We have shown that our algorithm can explore each assignment AST when N_explore is sufficiently large. As a result, we have

lim_N_explore→∞ R^a_i_(N_i) = lim_N_i→∞ R^a_i_(N_i) = lim_N_i→∞ max{R^a_i_1, R^a_i_2,…, R^a_i_N_i} = R^a_i_*, a.s.,

where R^a_i_* is the maximum reward which can be attained by taking action a_i. Denote ES := {R̅^a_1,R̅^a_2,…, R̅^a_|𝒜|} and the maximum statistics sequence MS := { R_(N_1)^a_1,R_(N_2)^a_2,…, R_(N_|𝒜|)^a_|𝒜|}. Then, we have

lim_N_explore→∞ Sig(R̅^a_i)= [ 𝔼[ R^a_i_j ], Rank_ES(R̅^a_i) = Rank_MS(R^a_i_(N_i)); R^a_i_*, Rank_ES(R̅^a_i) ≠ Rank_MS(R^a_i_(N_i)), a.s. ]

We define the mapping Proj: ES∪ MS ⟶ MS, where Proj(R̅^a_i) = R^a_i_(N_i), ∀R̅^a_i∈ ES and Proj(R^a_i_(N_i)) = R^a_i_(N_i), ∀R^a_i_(N_i)∈ MS. Combined with Formula (<ref>), we have

lim_N_explore→∞ Proj∘Sig(R̅^a_i) = R^a_i_*, a.s., ∀ i∈{1,2,…,|𝒜_s|}.

We take the action â ∈ arg max_a_i∈𝒜_s Proj∘Sig(R̅^a_i). According to the definitions of Proj and Sig, we can access the child node that attains the optimal reward to execute the next iteration. We have thus proven that the DCSAT can find the optimal action for the four types of SAT problems.

According to (<ref>) and (<ref>), as long as N_explore ≥ (1/log(1+1/(2^n-1))) log(1/ϵ), the optimal assignment can be found starting from the root node with probability at least 1-ϵ, which indicates that the DCSAT will converge to the optimal solution with the highest reward when N_explore is sufficiently large.

The above theorem concludes that the DCSAT converges to optimality under the condition α = 0. For other α values, we have conducted an ablation study in the experimental section for verification.

§.§ Completeness of Multiple Solutions
The DCSAT is a randomized algorithm that may find different optimal solutions when executed multiple times. Theorem <ref> indicates that the DCSAT has the potential to find all optimal Boolean assignments.

Theorem. If we set α in Formula (<ref>) to zero, DCSAT with significance operator Sig can find all optimal Boolean assignments, provided that the number of algorithm executions N_exe is sufficiently large.

Proof. We use the same notation as in the proof of Theorem <ref>. Denote the set of all optimal Boolean assignments as S_AST^*={ AST^*_1,AST^*_2,...,AST^*_|S_AST^*|}. When |S_AST^*|=1, the theorem holds by Theorem <ref>. We consider |S_AST^*|≥ 2 in the following proof.
Similar to Formula (<ref>), we have

lim_N_explore→∞ N_AST^* = lim_N_explore→∞ (1/2^n) N_explore = ∞, ∀ AST^* ∈ S_AST^*,

where N_AST^* represents the number of explorations of the optimal assignment AST^*. According to Formula (<ref>), when starting from the root node, we have

lim_N_explore→∞ Proj∘Sig(R̅^AST^*) = R^AST^*_*, a.s., ∀ AST^* ∈ S_AST^*.

Therefore, we select an assignment AST^* from S_AST^* by a uniform policy, i.e., ℙ(AST^*_i)=1/|S_AST^*|. Then, we have that when N_exe ≥ (1/log(|S_AST^*|/(|S_AST^*|-1))) log(|S_AST^*|/ϵ), the MCTS rule can find all AST^*∈ S_AST^*, i.e.,

ℙ( find all AST^*∈ S_AST^*) = 1-ℙ( ⋃_i=1^|S_AST^*|{ not find AST^*_i }) ≥ 1-∑_i=1^|S_AST^*|ℙ( not find AST^*_i ) = 1- |S_AST^*| ( (|S_AST^*|-1)/|S_AST^*| )^N_exe ≥ 1-ϵ.

This indicates that when the number of algorithm executions N_exe approaches infinity, each AST^*∈ S_AST^* can be found. By repeating the above process, DCSAT can find all the optimal Boolean assignments.

§ EXPERIMENT
§.§ Datasets and Experiment Setting
We perform experiments on SATLIB and the 2016 Eleventh Max-SAT Evaluation. We extract some data from SATLIB and the 2016 Eleventh Max-SAT Evaluation to test the generalization of our method. Considering that the Boolean assignment that maximizes the reward is not necessarily unique, we also randomly generate some data to verify this. We sequentially choose whether each variable is negated or not with probability 0.5, and generate random numbers uniformly between 0 and 1000 as weights for the clauses. Then we use our method to generate the optimal Boolean assignments and verify them with sat4j. It is finally verified that the Boolean assignments generated by our method are not unique. For the hyperparameter setting, we set the number of explorations to 7 times the number of clauses. Furthermore, for the choice of the hyperparameter α, we select 11 different values using grid search, ultimately choosing α=0.9 for the final experiments.

§.§ Comparison with Other Existing Solvers
We conduct experiments on SATLIB and the 2016 Eleventh Max-SAT Evaluation, as shown in Table <ref>. Our method is compared with the solver sat4j and the advanced solver CCEHC2akms from the 2016 Eleventh Max-SAT Evaluation competition. We find that sat4j is only suitable for solving the data in SATLIB. In contrast, CCEHC2akms is only applicable to the 2016 Eleventh Max-SAT Evaluation. Our method is more general and shows excellent results on both datasets.

For ease of visualization, the instances solved by each solver are compared using a pie chart, as shown in Figure <ref>. The instances solved by the two solvers together account for 100%; the ratio of instances solved by each solver to this total is the coverage ratio of that solver in the pie chart. In this way, the proportion shown for each solver in Figure <ref> is the proportion of instances it solved. The left side of Figure <ref> shows the results on SATLIB, and the right side shows the results on the 2016 Eleventh Max-SAT Evaluation. More specifically, we present the specific instance categories solved on SATLIB and the 2016 Eleventh Max-SAT Evaluation in Table <ref>. The Data column gives the category of solved instances, and the Instances column gives the number of instances in that category. We indicate the best-performing solver in red and the second-best in blue.

§.§ Findings of Multiple Boolean Assignments
In our reinforcement learning model, the performed actions are guided by rewards.
Consider that the Boolean assignment that maximizes the reward is not unique. Therefore, the obtained results are not unique across multiple executions of the algorithm. We have proven theoretically that all optimal Boolean assignments can be found as the number of executions of the algorithm tends to infinity. We verify that our method can find different Boolean assignments on four types of randomly generated instances. We run each of the randomly generated instances 10 times and find that different Boolean assignments can be obtained, as shown in Table <ref>. Furthermore, we visualize the distinct assignments discovered versus the number of algorithm executions, as shown in Figure <ref>. It can be seen that as the number of algorithm executions increases, our method finds different Boolean assignments.

§.§ Ablation Study
Inspired by the MCTS rule, we consider three approximation-based rewards to compare whether they can improve the experimental results. Our initial model is denoted as Model Initial, and the models obtained by embedding the three kinds of rewards are denoted as Model 1, Model 2 and Model 3, respectively. The four comparison models are shown in Table <ref>. The first reward is a weighted sum of objective value increments, as shown in Formula (<ref>):

R_1=∑_i=1^n-1 w_i^1 (c^T y_i+1-c^T y_i), w_i^1=(n-i)/n,

where n represents the number of binary variables, i represents the i-th assignment, y_i is the locally feasible solution obtained after the i-th assignment, and w_i^1 ∈ (0,1] is the weight. Note that the proposed linear weight factor provides linearly decaying weights according to the order of assignments. The second reward is a linearly weighted sum of the objective values, as shown in Formula (<ref>). w_i^2 ∈ (0,1] is also a linearly decaying weight, similar to the definition of w_i^1. Whereas R_1 focuses on increments of the objective value, R_2 is more inclined toward the objective values contributed by earlier assignments:

R_2=∑_i=1^n w_i^2 (c^T y_i), w_i^2=((n+1)-i)/n.

Formula (<ref>) defines the third reward, which is the equally weighted sum of R_1 and R_2. While emphasizing the importance of the initial assignments, diversity of exploration paths is also encouraged:

R_3=0.5 R_1+0.5 R_2.

We conduct comparative experiments on 6 typical instances, as shown in Figure <ref>. Each subfigure of Figure <ref> shows the experimental results on one typical instance. We use lines of different colors to represent the different models. The left side of each subfigure shows the relationship between the average objective value and the number of explorations. It can be seen from the left subfigures that the average objective value of Model Initial is the best and reaches the optimal value. The right side shows the relationship between computation time and the number of explorations. The right subfigures show that Model 3 has the shortest average computation time, with Model Initial not far behind. Considering both computation time and solution quality, Model Initial has the best overall performance.

We also conduct a comparative experiment to determine the best value of α, as shown in Figure <ref>. Using grid search, we sample α from 0 to 1 at intervals of 0.1, yielding 11 values. We follow the six representative instances selected previously, solve each problem 20 times, and record the objective values. Given the wide range of objective values, we use interquartile range normalization to normalize them to between zero and one.
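For reference, a sketch of the interquartile-range normalization used here (a standard recipe; the clipping choice is our own assumption):

```python
import numpy as np

def iqr_normalize(values):
    """Robustly rescale objective values to [0, 1]: center on the first
    quartile, divide by the interquartile range, then clip."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1 if q3 > q1 else 1.0   # guard against zero spread
    return np.clip((v - q1) / iqr, 0.0, 1.0)

alphas = [round(0.1 * k, 1) for k in range(11)]   # the 11 grid values
```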
From the results on the six typical instances, the penultimate bar (α=0.9) performs best. We therefore conclude that α=0.9 is an appropriate value, in line with our intuition for the softmax.

§ CONCLUSION

We propose a unified solving paradigm for four types of SAT problems (MaxSAT, Weighted MaxSAT, PMS and WPMS). Notably, our method transforms the four types of SAT problems into a general 0-1 integer programming structure by subtly adjusting weights. A consolidated reinforcement learning model is then proposed to make this formulation amenable to the reinforcement learning paradigm. In this way, the Monte Carlo tree search algorithm can effectively evaluate each candidate Boolean assignment of a variable, sharply prune unnecessary parts of the search space, and find solutions of the problem. Owing to the randomness of the algorithm, different Boolean assignments can be obtained when the algorithm is executed repeatedly. Theoretical derivation and experimental verification demonstrate the superiority of the proposed framework. To some extent, it also enhances the diversity of labels available to supervised learning methods. The idea applies equally to other combinatorial optimization problems that can be transformed into tree search structures. In the future, we plan to approximate this framework with deep neural networks combined with reinforcement learning to achieve better time efficiency.

§ ACKNOWLEDGEMENTS

This paper is supported by the National Key R&D Program of China [grant number 2021YFA1000403]; the National Natural Science Foundation of China [grant number 11991022]; the Strategic Priority Research Program of Chinese Academy of Sciences [grant number XDA27000000]; and the Fundamental Research Funds for the Central Universities.
http://arxiv.org/abs/2312.16423v1
{ "authors": [ "Anqi Li", "Congying Han", "Tiande Guo", "Haoran Li", "Bonan Li" ], "categories": [ "cs.AI", "math.OC" ], "primary_category": "cs.AI", "published": "20231227060948", "title": "General Method for Solving Four Types of SAT Problems" }
Department of Physics, King's College London, Strand, London WC2R 2LS, United Kingdom These authors contributed equally to this work Department of Physics, King's College London, Strand, London WC2R 2LS, United Kingdom Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands These authors contributed equally to this work Physics and Astronomy, Division of Natural Sciences, University of Kent, Ingram Building, Canterbury CT2 7NZ, UK TCM Group, Cavendish Laboratory, University of Cambridge, Cambridge CB3 0HE, United Kingdom Department of Physics, King's College London, Strand, London WC2R 2LS, United Kingdom

We investigate the phases and phase transitions of the Haldane model in the presence of on-site disorder. We use the real-space Chern marker and transfer matrices to extract critical exponents over a broad range of parameters. The disorder-driven transitions are consistent with the plateau transitions in the Integer Quantum Hall Effect (IQHE), in conformity with recent simulations of disordered Dirac fermions. Our numerical findings are compatible with an additional line of mass-driven transitions with a continuously varying correlation length exponent. The values interpolate between free Dirac fermions and the IQHE with increasing disorder strength. We also show that the fluctuations of the Chern marker exhibit a power-law divergence in the vicinity of both sets of transitions, yielding another varying exponent. We discuss the interpretation of these results.

Topological Phase Transitions in the Disordered Haldane Model
M. J. Bhaseen
January 14, 2024 - Preprint
=============================================================

§ INTRODUCTION

A defining characteristic of topological phases of matter is their resilience to local perturbations and sample defects. A prominent example is the Integer Quantum Hall Effect (IQHE) <cit.>, which is robust to variations in the sample geometry and is manifest in the presence of disorder. This robustness to local perturbations makes topological systems ideal candidates for applications, including metrology and quantum information processing <cit.>. Experimental realizations of topological systems have proliferated in recent years, and now include solid state devices in two and three dimensions, cold atomic gases and optical systems. For reviews, see for example Refs. <cit.>. From a theoretical perspective, one of the most challenging and long-standing problems is the characterisation of the plateau transitions in the IQHE. This has attracted a great deal of attention over the years, including scaling theory approaches <cit.>, network models <cit.>, and recent conjectures for the low-energy field theory <cit.>. Amongst these approaches is the idea that topological phase transitions can be described in terms of disordered Dirac fermions <cit.>. This has been the focus of renewed interest due to recent simulations of continuum Dirac fermions which confirm their relevance to the IQHE <cit.>. In this work, we explore the critical properties of disordered topological phase transitions using the real-space topological marker <cit.> and transfer matrices. The absence of translational invariance makes the real-space approach especially suitable for numerical studies of critical exponents <cit.>. We focus on the Haldane model <cit.> in the presence of on-site disorder, whose low-energy description corresponds to disordered Dirac fermions.
We confirm that the disorder-driven topological transition is in the universality class of the plateau transition for disordered IQH systems, in conformity with recent work on continuum Dirac fermions <cit.>. Our numerical results are compatible with an additional line of mass-driven transitions with a continuously varying correlation length exponent ν. The results interpolate between those of free Dirac fermions with ν=1 and those of the IQHE with ν∼ 5/2, with increasing disorder strength. We also observe a power-law divergence of the fluctuations of the Chern marker, yielding another continuously varying exponent κ. We discuss the interpretations of these findings, including the possible need for larger system sizes at weak disorder.

The layout of this paper is as follows. We introduce the model in Section 2 and the Chern marker in Section 3. We discuss the evolution of the phase diagram in Section 4 before turning our attention to the correlation length exponent in Section 5. In Section 6 we examine the sample-to-sample fluctuations of the Chern marker, exposing their power-law divergence in the vicinity of the transitions. In Section 7 we discuss the variation of the correlation length exponent and the fluctuation exponent as a function of the disorder strength. We conclude in Section 8 and provide Supplementary Material.

§ DISORDERED HALDANE MODEL

The Haldane model <cit.> describes spinless fermions hopping on a honeycomb lattice with nearest and next-nearest neighbor hopping amplitudes t_1 and t_2. In the presence of on-site disorder the Hamiltonian is given by Ĥ = -t_1 ∑_⟨i,j⟩ (â_i^† â_j + H.c.) - t_2 ∑_⟨⟨i,j⟩⟩ (e^iφ_ij â_i^† â_j + H.c.) + M ∑_i∈A n̂_i - M ∑_i∈B n̂_i + ∑_i v_i n̂_i, where v_i ∈ [-W,W] is a random variable drawn from a flat distribution of width 2W and A, B label the two sublattices. Throughout this work we set t_1=1 and t_2 = 1/3. Here, â_i^† and â_i are fermionic creation and annihilation operators obeying the anticommutation relations {â_i,â_j^†}=δ_ij, and n̂_i ≡ â_i^†â_i. The energy offset ±M breaks inversion symmetry between the A and B sublattices, allowing the possibility of a conventional band insulator when W=0. The phase factor φ_ij=±φ is positive (negative) for anticlockwise (clockwise) hopping and breaks time-reversal symmetry, allowing the possibility of topological phases. The Haldane model with W=0 was realized using cold atomic gases <cit.>, where the phase factor φ is imprinted via the periodic modulation of the optical lattice. The clean Haldane model with W=0 also played a crucial role in the discovery of topological insulators <cit.>; for a review see Ref. <cit.>. More recently, an extension of this model with spatially anisotropic first neighbor hopping parameters has been investigated for W≠0 <cit.>. This showed the presence of disorder-driven Lifshitz and Chern transitions. In this work, we focus on the isotropic Haldane model with W≠0. We extract the disorder-induced critical exponents using the real-space Chern marker and transfer matrix calculations.

§ REAL-SPACE CHERN MARKER

In the absence of disorder, the phases of the Haldane model (<ref>) are distinguished by the global Chern index <cit.> C = -1/π Im ∑_n=1^n_occ ∫_BZ d k ⟨∂_k_x u_n k|∂_k_y u_n k⟩, where u_n k(r) = e^-i k·r ψ_n k(r) is the periodic part of the ground state wavefunction and n_occ is the number of occupied bands. Here, n is the band index and the integration is over the first Brillouin zone. The Chern index is quantized and takes the values C=±1 (C=0) in the topological (non-topological) phases.
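As a cross-check on the clean limit, the sketch below (Python/NumPy) constructs the Bloch Hamiltonian of the W=0 model and evaluates the Chern index of Eq. (<ref>) on a discretized Brillouin zone using the Fukui-Hatsugai-Suzuki lattice prescription. This is an illustration under one common lattice-vector convention, not the procedure used in this work; the overall sign of C depends on such conventions.

    import numpy as np

    def h_bloch(k, t1=1.0, t2=1/3, phi=np.pi/2, M=0.0):
        # nearest-neighbour vectors d_i and next-nearest-neighbour vectors b_i
        # of the honeycomb lattice, with lattice spacing a = 1
        d = np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
        b = np.array([[np.sqrt(3), 0.0], [-np.sqrt(3)/2, 1.5], [-np.sqrt(3)/2, -1.5]])
        f = t1 * np.sum(np.exp(1j * d @ k))                    # NN hopping
        eps = 2 * t2 * np.cos(phi) * np.sum(np.cos(b @ k))     # even NNN part
        mz = M - 2 * t2 * np.sin(phi) * np.sum(np.sin(b @ k))  # mass term
        return np.array([[eps + mz, f], [np.conj(f), eps - mz]])

    def chern_index(n=60, **kw):
        # Fukui-Hatsugai-Suzuki lattice Chern number of the occupied band
        g1 = np.array([2*np.pi/np.sqrt(3), -2*np.pi/3])        # reciprocal vectors
        g2 = np.array([0.0, 4*np.pi/3])
        u = np.empty((n, n, 2), dtype=complex)
        for i in range(n):
            for j in range(n):
                _, v = np.linalg.eigh(h_bloch(i/n*g1 + j/n*g2, **kw))
                u[i, j] = v[:, 0]                              # lower band
        c = 0.0
        for i in range(n):
            for j in range(n):
                p = (np.vdot(u[i, j], u[(i+1) % n, j])
                     * np.vdot(u[(i+1) % n, j], u[(i+1) % n, (j+1) % n])
                     * np.vdot(u[(i+1) % n, (j+1) % n], u[i, (j+1) % n])
                     * np.vdot(u[i, (j+1) % n], u[i, j]))
                c += np.angle(p)                               # plaquette flux
        return round(c / (2*np.pi))

With these conventions the gap closes at M = ±3√3 t_2 sinφ = ±√3 for t_2=1/3 and φ=π/2, so chern_index(M=0.0) returns ±1 while chern_index(M=3.0) returns 0.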
In the presence of disorder (or in finite-size samples with open boundary conditions) the definition (<ref>) is not immediately convenient due to the explicit use of momentum space. One approach to this problem is to impose periodic boundary conditions on the disordered sample and to use a supercell formulation instead <cit.>. Alternatively, one can employ the real-space Chern marker c(r_α), defined on a unit cell α, introduced by Bianco and Resta <cit.>. This can be obtained from the definition (<ref>) and is given by c(r_α) = -4π/A_c Im ∑_s=A,B ⟨r_α_s|P̂x̂Q̂ŷP̂|r_α_s⟩. Here, P̂ = A_c/(2π)^2 ∑_n=1^n_occ ∫_BZ d k |ψ_n k⟩⟨ψ_n k| is the projector onto the ground state, Q̂ = Î - P̂ is the complementary projector onto the unoccupied bands, A_c is the area of a unit cell, and the sum is over the two sublattice sites within the unit cell α. For a clean Chern insulator with W=0, and away from topological transitions, the value of c(r_α) evaluated in the center of a finite-size sample reproduces the value that the global Chern index (<ref>) would have in the presence of periodic boundary conditions. For further details see Ref. <cit.> and the Supplementary Material provided here.

§ PHASE DIAGRAM

In order to distinguish between the topological and non-topological phases of the disordered Haldane model (<ref>) one may consider the disorder average of the real-space Chern marker c̅, evaluated in the center of a finite-size sample and averaged over a number of independent disorder realizations <cit.>; see Fig. <ref>. In Fig. <ref> we show the evolution of c̅ with increasing disorder strength W. As may be seen from Fig. <ref>(a), in the absence of disorder the real-space Chern marker c directly reproduces the equilibrium phase diagram of the clean Haldane model <cit.>. A vertical slice through Fig. <ref>(a) shows clear plateaus, with values of c close to integers <cit.>. In the vicinity of the transition between the topological and non-topological phases, the value of c is no longer quantized, but smoothly interpolates between the plateaus. As shown in our earlier work <cit.>, a finite-size scaling analysis of this transition region yields the correlation length exponent ν=1. This is in agreement with the low-energy description of the clean Haldane model in terms of free Dirac fermions <cit.>. As the disorder strength W increases, the topological regions of the phase diagram expand, in agreement with Ref. <cit.>; see Figs <ref>(b)-(d). Strong disorder ultimately destroys the topological phases, as illustrated in Fig. <ref>. This occurs around W∼3.6 for t_1=1, t_2=1/3 and for a range of φ and M. A notable feature in Fig. <ref> is the presence of re-entrant disorder-driven transitions, first from c̅=0 to c̅=1, and then from c̅=1 to c̅=0 at stronger disorder <cit.>; this may be seen along the line M=2, for example. In the next section we use finite-size scaling to extract the critical properties of both the disorder-driven transitions and the mass-driven transitions at fixed disorder strength; see Fig. <ref>.
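As an illustration of how the marker can be evaluated numerically, the following sketch computes the site-resolved quantity -4π/A_c Im⟨r|P̂x̂Q̂ŷP̂|r⟩ from the eigenstates of a real-space Hamiltonian (Python/NumPy; the function signature and the half-filling choice e_fermi=0 are our own assumptions).

    import numpy as np

    def chern_marker_sites(energies, evecs, x, y, area_cell, e_fermi=0.0):
        # energies, evecs: eigen-decomposition of the real-space Hamiltonian,
        # with evecs[:, j] the j-th eigenvector; x, y: site coordinates
        occ = evecs[:, energies < e_fermi]
        P = occ @ occ.conj().T                      # ground-state projector
        Q = np.eye(len(x)) - P                      # projector onto empty states
        core = P @ np.diag(x) @ Q @ np.diag(y) @ P  # P x Q y P
        return -4.0 * np.pi / area_cell * np.imag(np.diag(core))

The marker c(r_α) of Eq. (<ref>) is then obtained by summing the returned values over the two sublattice sites of the unit cell α.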
§ FINITE-SIZE SCALING

The universal features of topological phase transitions can be obtained from a finite-size scaling analysis of the Chern marker, both in the absence <cit.> and the presence of disorder <cit.>. In Fig. <ref>(a) we plot the variation of the disorder-averaged Chern marker c̅ as a function of W, corresponding to a horizontal slice through Fig. <ref> with M=0. It is readily seen that c̅ interpolates between plateaus at c̅∼1 and c̅∼0, over a broad transition region in the vicinity of a critical disorder strength W_c∼3.6. The width of this transition region ΔW narrows with increasing system size L. Assuming that the correlation length ξ∼(W-W_c)^-ν is of order the system size when the departures from quantization occur, one expects that the width scales as ΔW∼L^-1/ν. More generally, we assume a scaling form c̅∼ f(ξ/L) ∼ f̃((W-W_c)L^1/ν). In Fig. <ref>(a) we maximize the overlap between the c̅ curves obtained for different system sizes when plotted as a function of (W-W_c)L^1/ν. This yields W_c = 3.58±0.02 and ν=2.42±0.11. Replotting the data in Fig. <ref>(a) as a function of (W-W_c)L^1/ν with ν = 2.42, the data collapse onto a single curve; see Fig. <ref>(b) and the inset. This value of ν is close to, but a little lower than, the most recent numerical results for the correlation length exponent pertaining to the plateau transitions in the IQHE, where ν∼2.6 <cit.>. It is, however, compatible with the spread of results shown in Ref. <cit.>. We corroborate these findings with transfer matrix calculations on large strips with width up to 59 unit cells and length up to 10^7. We find that ν = 2.47±0.09, which is numerically close to 5/2, and in agreement with the scaling of the Chern marker; see Supplementary Material. We further confirm our results for the disorder-driven transition with M=1. This yields ν=2.47±0.08, in agreement with the results for M=0; see Supplementary Material. This suggests that the disorder-driven transitions in Fig. <ref> are in the same universality class, independent of the value of M.

Having examined the disorder-driven transitions in Fig. <ref> we turn our attention to the mass-driven transitions at fixed disorder strength. In Fig. <ref>(a) we plot the evolution of the Chern marker on transiting from the topological to the non-topological phase with W=1 held fixed. The data show a clear crossing point at M_c∼1.85, in conformity with Fig. <ref>. Re-plotting the data as a function of (M-M_c)L^1/ν with ν=1.06±0.03, the data collapse onto a single curve; see Fig. <ref>(b). This result is in agreement with transfer matrix calculations which yield ν=1.05±0.03; see Supplementary Material. This value is close to, but distinct from, that of free Dirac fermions with ν=1. We will consider further instances of such departures in Sections 6 and 7.

§ FLUCTUATIONS

Having established the scaling of the disorder-averaged Chern marker c̅, we now turn our attention to its fluctuations. We examine the sample-to-sample fluctuations of the Chern marker (δc)^2 = (c - c̅)^2 in the middle of the system, where the overbar indicates disorder averaging. In Fig. <ref>(a) we plot the evolution of (δc)^2 on transiting from the topological to the non-topological phase with W=1 held fixed. The fluctuations show a clear peak on approaching the critical point at M_c∼1.84. The value of (δc)^2 at the peak grows with increasing system size and is consistent with a power-law divergence (δc)^2_max∼L^κ with κ = 0.36±0.02; see inset of Fig. <ref>(a). We therefore consider a scaling form (δc)^2 ∼ L^κ g(ξ/L) ∼ L^κ g̃((M-M_c)L^1/ν) in the vicinity of the transition. In Fig. <ref>(b) we plot (δc)^2 L^-κ as a function of (M-M_c)L^1/ν, where the values of M_c = 1.84 and ν = 1.05 are obtained from the scaling of the Chern marker. The data collapse in the vicinity of the transition, highlighting the consistency with our Section 5 results for ν.
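The overlap maximization used here (and in the Supplementary Material) amounts to minimizing the mean-squared distance between the rescaled curves over (W_c, ν); a minimal sketch is given below (Python/SciPy; the use of Nelder-Mead and the interpolation grid are our own illustrative choices, and the rescaled windows are assumed to overlap).

    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.optimize import minimize

    def collapse_cost(params, data):
        # data: {L: (W, cbar)}, the disorder-averaged marker versus W for each
        # linear size L, with the W values sorted in ascending order
        Wc, nu = params
        scaled = {L: ((W - Wc) * L**(1.0/nu), c) for L, (W, c) in data.items()}
        lo = max(s.min() for s, _ in scaled.values())   # common overlap window
        hi = min(s.max() for s, _ in scaled.values())
        grid = np.linspace(lo, hi, 200)
        curves = [interp1d(s, c)(grid) for s, c in scaled.values()]
        # mean-squared distance between all pairs of rescaled curves
        return sum(np.mean((ci - cj)**2)
                   for i, ci in enumerate(curves) for cj in curves[i+1:])

    # best = minimize(collapse_cost, x0=[3.6, 2.4], args=(data,),
    #                 method='Nelder-Mead')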
In Section 7 we explore the variation of the exponents ν and κ with increasing disorder strength.

§ VARIATION OF THE EXPONENTS

Having discussed the scaling of the Chern marker and its fluctuations, we now consider the variation of ν and κ with W. In Fig. <ref>(a) we plot the variation of ν as we transit along the mass-driven phase boundary in Fig. <ref>. It can be seen that ν interpolates between that of free Dirac fermions with ν=1 and the value ν∼5/2 corresponding to the disorder-driven transitions. The results obtained via the Chern marker are in agreement with those obtained via the transfer matrix approach, although the error bars increase with W. The deviation between the results at strong disorder is attributed to finite-size effects in the Chern marker calculations, which are also performed in a different geometry; see Supplementary Material. The fluctuation exponent κ shows a similar evolution to ν, interpolating between κ∼0.35 and κ∼0.65; see Fig. <ref>(b). It is notable that Ref. <cit.> also finds evidence for varying ν as a function of energy in a model of Dirac fermions. However, this is over a much smaller range of values, between 2.33 and 2.53. Here, we provide evidence for a very strong variation of the exponents over a wide range of parameters.

The results contained in Fig. <ref> raise a number of questions and scenarios. One possibility is that the disordered Haldane model exhibits a line of continuously varying exponents ν and κ, due to the presence of marginal perturbations <cit.>. Another possibility is that the weak disorder regime shows very slow convergence to the thermodynamic limit and will ultimately flow to ν∼5/2. Another scenario is the possibility of distinct fixed points at both weak and strong disorder to which the system will eventually flow. All of these scenarios may lead to the extraction of effective exponents, ν_eff and κ_eff, for the system sizes considered. It would be interesting to explore these possibilities in more detail. In closing, we note that a drift of the critical exponent ν was observed in early work on the site-diluted Ising model <cit.>. This has recently been attributed to the effect of logarithmic corrections <cit.>.

§ CONCLUSIONS

In this work we have explored the critical behavior of the disordered Haldane model using both the Chern marker and transfer matrix calculations. We provide evidence for disorder-driven transitions with ν∼5/2. Our findings are also consistent with a line of fixed points with a continuously varying exponent which interpolates between ν=1 and ν∼5/2. It would be interesting to explore the latter in more detail, both numerically and analytically. We have also introduced an exponent κ associated with the power-law divergence of the Chern marker fluctuations in the vicinity of topological phase transitions. We provide numerical evidence for its variation along the mass-driven phase boundary. These results may provide a useful starting point to explore the drift in exponents found in other works on the IQHE <cit.>.

§ ACKNOWLEDGEMENTS

We acknowledge helpful conversations with M. Foster, C. von Keyserlingk, R. Kühn, J. Pixley and L. Privitera. The early stages of this work were supported by EPSRC grants EP/J017639/1 and EP/K030094/1, by the Netherlands Organization for Scientific Research (NWO/OCW), and by an ERC Synergy Grant. GM was supported by the Royal Society under grant URF\R\180004. NRC was supported by EPSRC Grants EP/P034616/1 and EP/V062654/1 and by the Simons Investigator Grant No. 511029.
MJB was supported by the National Renewable Energy Laboratory and thanks the EPSRC Centre for Doctoral Training in Cross-Disciplinary Approaches to Non-Equilibrium Systems (CANES) funded under grant EP/L015854/1. MJB, MDC and JM acknowledge the Thomas Young Centre and the use of Create for numerical simulations. The data presented in this work is available upon request. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence.

§ REFERENCES

[1] T. Ando, Y. Matsumoto, and Y. Uemura, J. Phys. Soc. Japan 39, 279 (1975).
[2] K. v. Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. 45, 494 (1980).
[3] C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
[4] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[5] J. Dalibard, F. Gerbier, G. Juzeliūnas, and P. Öhberg, Rev. Mod. Phys. 83, 1523 (2011).
[6] X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
[7] I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013).
[8] L. Lu, J. D. Joannopoulos, and M. Soljačić, Nat. Photonics 8, 821 (2014).
[9] N. Goldman, G. Juzeliūnas, P. Öhberg, and I. B. Spielman, Rep. Prog. Phys. 77, 126401 (2014).
[10] M. Kim, Z. Jacob, and J. Rho, Light Sci. Appl. 9, 130 (2020).
[11] B. Q. Lv, T. Qian, and H. Ding, Rev. Mod. Phys. 93, 025002 (2021).
[12] X. Ni, S. Yves, A. Krasnok, and A. Alù, Chem. Rev. 123, 7585 (2023).
[13] D. E. Khmelnitskii, JETP Lett. 38, 454 (1983).
[14] A. M. M. Pruisken, Phys. Rev. Lett. 61, 1297 (1988).
[15] B. Huckestein and B. Kramer, Phys. Rev. Lett. 64, 1437 (1990).
[16] C. Lütken and G. Ross, Phys. Lett. B 653, 363 (2007).
[17] H. Obuse, I. A. Gruzberg, and F. Evers, Phys. Rev. Lett. 109, 206804 (2012).
[18] E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Phys. Rev. Lett. 129, 026801 (2022).
[19] K. Slevin and T. Ohtsuki, Phys. Status Solidi Rapid Res. Lett., 2300080 (2023).
[20] J. T. Chalker and P. D. Coddington, J. Phys. C: Solid State Phys. 21, 2665 (1988).
[21] I. C. Fulga, F. Hassler, A. R. Akhmerov, and C. W. J. Beenakker, Phys. Rev. B 84 (2011).
[22] I. A. Gruzberg, A. Klümper, W. Nuding, and A. Sedrakyan, Phys. Rev. B 95, 125414 (2017).
[23] M. R. Zirnbauer, Nucl. Phys. B 941, 458 (2019).
[24] M. R. Zirnbauer, Ann. Phys. 431, 168559 (2021).
[25] A. W. W. Ludwig, M. P. A. Fisher, R. Shankar, and G. Grinstein, Phys. Rev. B 50, 7526 (1994).
[26] C.-M. Ho and J. T. Chalker, Phys. Rev. B 54, 8708 (1996).
[27] S. Guruswamy, A. LeClair, and A. Ludwig, Nucl. Phys. B 583, 475 (2000).
[28] D. Bernard and A. LeClair, J. Phys. A: Math. Gen. 35, 2555 (2002).
[29] B. Sbierski, E. J. Dresselhaus, J. E. Moore, and I. A. Gruzberg, Phys. Rev. Lett. 126, 076801 (2021).
[30] R. Bianco and R. Resta, Phys. Rev. B 84, 241106 (2011).
[31] M. D. Caio, G. Möller, N. R. Cooper, and M. J. Bhaseen, Nat. Phys. 15, 257 (2019).
[32] L. Ulcakar, J. Mravlje, and T. Rejec, Phys. Rev. Lett. 125, 216601 (2020).
[33] A. Beck and M. Goldstein, Phys. Rev. B 103, L241401 (2021).
[34] P. d'Ornellas, R. Barnett, and D. K. K. Lee, Phys. Rev. B 106, 155124 (2022).
[35] F. D. M. Haldane, Phys. Rev. Lett. 61, 2015 (1988).
[36] G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature 515, 237 (2014).
[37] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[38] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005).
[39] P. V. Sriluckshmy, K. Saha, and R. Moessner, Phys. Rev. B 97, 024204 (2018).
[40] S.-S. Chern, Ann. Math. 47, 85 (1946).
[41] M. V. Berry, Proc. R. Soc. Lond. A 392, 45 (1984).
[42] D. Ceresoli and R. Resta, Phys. Rev. B 76, 012405 (2007).
[43] D. Varjas, M. Fruchart, A. R. Akhmerov, and P. M. Perez-Piskunow, Phys. Rev. Res. 2, 013229 (2020).
[44] E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Phys. Rev. Lett. 129, 026801 (2022).
[45] E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Ann. Phys. 435, 168676 (2021).
[46] R. Kühn, Phys. Rev. Lett. 73, 2268 (1994).
[47] N. G. Fytas and A. Malakis, Phys. Rev. E 81, 041109 (2010).
[48] M. Schrauth, J. A. J. Richter, and J. S. E. Portela, Phys. Rev. E 97, 022144 (2018).

Supplementary Material
Topological Phase Transitions in the Disordered Haldane Model

Here we provide further details of the Chern marker and transfer matrix calculations performed in the main text. We also provide additional results to support our findings.

§ REAL-SPACE CHERN MARKER

Topological order in non-interacting systems is often described by the presence of a topologically non-trivial texture in the ground state wavefunction. In the case of Chern insulators, this topological texture is usually characterised by the global Chern index <cit.>: C = -1/π Im ∑_n^occ ∫_BZ d k ⟨∂_k_x u_n k| ∂_k_y u_n k⟩, where u_n k(r) = e^-i k·r ψ_n k(r) is the periodic part of the occupied Bloch states for the n-th band. Although C is usually expressed in momentum space, topological order can have consequences even when this is not permitted, e.g. in finite-size systems with open boundaries or in the presence of disorder. In view of this, a local topological characteristic known as the Chern marker has been introduced by Bianco and Resta <cit.>. To see this, it is convenient to insert a complete set of states into Eq. (<ref>): C = -1/π Im ∑_n^occ ∑_m^unocc ∫_BZ d k ⟨∂_k_x u_n k|u_m k⟩⟨u_m k| ∂_k_y u_n k⟩, where the missing terms are real. The derivatives in k-space can be recast in real space using ⟨ψ_m k| r̂ |ψ_n k⟩ = i ⟨u_m k|∂_k u_n k⟩ for m≠n; although the position operator is generically ill-defined in the case of periodic boundary conditions, its off-diagonal matrix elements are well defined. As such C = -A_c/4π^3 Im ∑_n^occ ∑_m^unocc ∫_BZ d k ∫_BZ d k' ⟨ψ_n k|x̂|ψ_m k'⟩⟨ψ_m k'|ŷ|ψ_n k⟩ = -4π/A_c Im Tr(P̂x̂Q̂ŷ), where the matrix elements vanish for k≠k', and in the second line P̂ and Q̂ = Î - P̂ are the projectors onto the occupied and empty states, respectively. The pivotal point in defining the Chern marker is to recognize that the trace is independent of the representation and can thus be taken in real space. Eq. (<ref>) is finally obtained using the cyclic property of the trace and P̂^2=P̂ <cit.>. For free-electron systems, the ground state is uniquely determined by the ground-state projector P(r,r') = ⟨r|P̂|r'⟩ which, for insulators, decreases exponentially with |r-r'|. The Chern marker has proved to be efficient in characterizing the topological phases of finite-size systems in equilibrium <cit.> and out-of-equilibrium <cit.>. The Chern marker has also proven successful in the presence of disorder <cit.>.

§.§ Topological Phase Transition at Fixed Disorder

For the phase transitions at fixed disorder strength W, obtained by changing M in Fig. 3 of the main text, we find that the correlation length exponent ν increases from unity with increasing disorder strength. Setting φ=π/2 and taking W=0.2, 0.6, 1, 1.4, 1.6 and 1.8, we find ν=1.02(5), 1.02(5), 1.05(6), 1.23(0), 1.49(8) and 1.70(1) respectively, using the Chern marker. For W≲1 these values are numerically close to the clean result with ν=1 <cit.>. In Fig.
<ref>(a) we show the disorder-averaged topological marker c̅ for the topological phase transition with W=1.4 held fixed, averaged over 10^4 disorder realizations. We consider diamond-shaped samples with L unit cells along each edge; see Fig. 1 of the main text. The data show a crossing point in the vicinity of the critical point at M_c∼1.96. The data exhibit scaling collapse when plotted as a function of (M-M_c)L^1/ν with ν = 1.23 and M_c = 1.96; see Fig. <ref>(b). The parameters ν and M_c can be obtained by minimising the mean-squared distance between the rescaled curves in Fig. <ref>(b). Explicitly, we minimize the square deviation D = 1/(#L)^2 ∑_L,L' ∫_Ω dm̃ (c̅_L(m̃) - c̅_L'(m̃))^2, where c̅_L is the disorder-averaged Chern marker and m̃ = (M-M_c)L^1/ν for a system of size L. Here, #L is the number of system sizes used. The inset of Fig. <ref>(b) highlights the quality of the scaling collapse.

In Fig. <ref> we show the disorder-averaged Chern marker on transiting from the topological to the non-topological phase with W=1.8. The data show a crossing point in the vicinity of M_c∼2.1. The data exhibit scaling collapse when plotted as a function of (M-M_c)L^1/ν with ν = 1.70 and M_c = 2.09; see the inset. These values are obtained by minimizing the mean-squared displacement in Eq. (<ref>). The data collapse onto a single curve, verifying the non-trivial results.

§ TRANSFER MATRIX METHOD

In the main text we extract the correlation length exponent ν for the Haldane model via the real-space Chern marker and via the transfer matrix approach. Here, we provide further details on the latter. The derivation of the transfer matrix starts from a real-space discretization of the Schrödinger equation, Ĥ|ψ⟩ = E|ψ⟩ <cit.>. In the case of a two-dimensional lattice, the system is split into one-dimensional slices, indexed by the position n along the side of length L_x. In the case of the Haldane model L_x = √(3) a N_x, where N_x is the number of slices in the x-direction and a is the nearest neighbor lattice spacing; see Fig. <ref>. For Hamiltonians with short-range hopping the Schrödinger equation reduces to 𝕁ψ_n+1 + 𝕄ψ_n + 𝕁^†ψ_n-1 = Eψ_n, where ψ_n = ⟨n|ψ⟩ is the real-space wavefunction for slice n. Here, 𝕁 = ⟨n|Ĥ|n+1⟩ is the hopping matrix for nearest neighbour slices and 𝕄 = ⟨n|Ĥ|n⟩ is a local contribution within a slice <cit.>. The matrices 𝕁 and 𝕄 are evaluated for a fixed n, and have dimension N_y n_c × N_y n_c, where n_c is the number of sites in the unit cell and N_y is the number of unit cells per slice. For the Haldane model n_c=2, as shown in Fig. <ref>. The Schrödinger equation (<ref>) can be recast as [ψ_n+1; ψ_n] = 𝕋_n [ψ_n; ψ_n-1], where the transfer matrix 𝕋_n is given by 𝕋_n = [ 𝕁^-1(E - 𝕄)  -𝕁^-1𝕁^†; 𝟙  0 ]. This is a square matrix of dimension 2N_y n_c × 2N_y n_c. For a fixed value of E, the diagonalisation of 𝕋_n is faster than that of Ĥ due to the reduction in size. The transfer matrix 𝕋 across the whole system is obtained by multiplying the transfer matrices for each slice: 𝕋 = ∏_n=1^N_x 𝕋_n, where we set ψ_N_x+1=0 and ψ_0 = 0. In the absence of disorder the transfer matrices 𝕋_n coincide and 𝕋 = (𝕋_1)^N_x. In general, 𝕋 is non-Hermitian, as follows from Eq. (<ref>). It is therefore convenient to define the Hermitian matrix <cit.> Ω = ln(𝕋^†𝕋). The eigenvalues of Ω come in pairs with opposite signs, ±λ_j, where j=1,...,N_y n_c. This reflects the conservation of probability flux.
The inverse correlation length is obtained from the smallest positive eigenvalue: ξ^-1 = lim_N_x→∞ min_j |λ_j|/2L_x. In practice, this is obtained for a large but finite N_x.

§ BOUNDARY CONDITIONS

To compute the correlation length from Eq. (<ref>) one typically chooses a strip geometry where L_x ≫ L_y. It is often convenient to impose periodic boundary conditions (PBCs) in the y-direction, as shown in Fig. <ref>. As we will discuss in more detail below, for the Haldane model it turns out to be more convenient to use twisted boundary conditions (TBCs). In addition, the matrix 𝕁 has two vanishing eigenvalues and is not invertible. One approach is to separate the singular and non-singular contributions following the general treatment of Ref. <cit.>. For the model with PBCs in the y-direction, this leads to distinct physical behavior when N_y is a multiple of 3. In this case, the allowed momenta k_y = 2πj/L_y, with j=0,...,N_y-1, include the y-momentum of the Dirac point at (0, 4π/(3√(3)a)); see Fig. <ref>. This is further illustrated in Fig. <ref>, which shows the eigenvalues of Ω plotted as a function of the rescaled momentum in the y-direction. For the particular choice of N_y=99 it can be seen that one of the eigenvalues corresponds to the Dirac momentum k_y = 4π/(3√(3)a). The impact of this can be seen in Fig. <ref>, which shows the variation of the inverse correlation length on passing from the topological to the non-topological phase. The linear gap closing for N_y=99 is consistent with the inclusion of the Dirac point. We further consider the finite-size scaling of systems with N_y mod 3 ≠ 0 and show that the Dirac point is approached with increasing N_y. This is demonstrated in Fig. <ref>, which shows the evolution of the inverse correlation length ξ^-1 with increasing system size. It can be seen that the results converge towards the linear gap-closing expected on the basis of the low-energy Dirac Hamiltonian: Ĥ(p,γ) = ∑_α Ĥ_α, with Ĥ_α = [ m_α c^2  -cpe^iαγ; -cpe^-iαγ  -m_α c^2 ], where α=±1 labels the Dirac point. Here, c = 3t_1 a/(2ħ) is the effective speed of light, pe^iαγ = p_x + iα p_y is the 2D momentum (p_x,p_y) mapped onto the complex plane, and m_α = (M - 3√(3)α t_2 sinφ)/c^2 is the effective mass. The energy bands in the vicinity of the critical point at M_c = 3√(3)α t_2 sinφ are given by E_±(M) = ±ϵ|M-M_c|^ν (where ± refers to the upper and lower bands respectively) with ν=1 and ϵ=1. This yields the inverse correlation length ξ^-1 = |M-M_c|, in agreement with the transfer matrix approach; see Fig. <ref>.

In order to treat systems with different L_y on an equal footing, it is convenient to impose Twisted Boundary Conditions (TBCs) in the y-direction such that ψ(x,y) = e^iθ ψ(x,y+L_y). The twist can be generated by threading a magnetic flux Φ = (ħ/e)θ through a system with PBCs <cit.>, as shown in Fig. <ref>. This shifts the wavevector such that k_y → k_y - θ/L_y = (2πn - θ)/L_y, for n=0,...,N_y-1, where the twist angle θ is defined modulo 2π. This renders 𝕁 invertible and the transfer matrix (<ref>) can be used directly. It is convenient to choose θ so that the eigenvalues of Ω are symmetrically distributed around the Dirac point at k_y = 4π/(3√(3)a) for φ = π/2. For N_y mod 3 = 0, 1, 2 we set θ = π, 2π/3 and π/3, respectively.
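In practice the matrix product in Eq. (<ref>) overflows for large N_x, and the Lyapunov exponents are accumulated with periodic QR re-orthogonalisation; the sketch below is our own minimal formulation, assuming a deterministic map n → 𝕋_n for a fixed disorder realisation.

    import numpy as np

    def inverse_xi(t_slice, n_x, a_x):
        # t_slice(n): transfer matrix T_n of slice n (dimension 2 N_y n_c);
        # a_x: slice spacing, so that L_x = N_x a_x (here a_x = sqrt(3) a)
        Q = np.eye(t_slice(0).shape[0])
        log_r = np.zeros(Q.shape[0])
        for n in range(n_x):
            # QR re-orthogonalisation keeps the running product well conditioned
            Q, R = np.linalg.qr(t_slice(n) @ Q)
            log_r += np.log(np.abs(np.diag(R)))
        gamma = log_r / n_x   # Lyapunov exponents per slice
        # the eigenvalues of Omega = ln(T^dag T) grow as lambda_j ~ 2 N_x gamma_j,
        # so xi^{-1} = min_j |lambda_j|/(2 L_x) reduces to min_j |gamma_j|/a_x
        return np.min(np.abs(gamma)) / a_x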
§ FINITE-SIZE SCALING

In order to extract the correlation length exponent ν for the topological phase transition illustrated in Fig. <ref>, we perform a finite-size scaling analysis for the correlation length. In the first instance we consider the simple scaling relation ξ^-1 ∼ L_y^-1 f(m N_y^1/ν), where m = (M-M_c)/M_c is the dimensionless distance from the critical point. As we will discuss below, it is also necessary to include corrections to Eq. (<ref>) due to irrelevant operators. On the basis of the naïve ansatz in Eq. (<ref>) it is natural to investigate the evolution of the dimensionless ratio Λ = (ξ/L_y)^-1, as shown in Fig. <ref>. The data show a clear minimum in the vicinity of the critical point at M_c = √(3), which follows from the Dirac theory for t_1 = 3t_2 = 1 and φ = π/2; see Fig. <ref>. The inset shows a zoomed-in portion, which shows the finite-size approach to the thermodynamic result, M_c = √(3). This is further illustrated in Fig. <ref>, which shows the evolution of the finite-size critical point M_c(N_y) (corresponding to the minima in Fig. <ref>) with increasing system size. The results asymptote towards the field theory prediction M_c = √(3). The evolution is well described by a power law M_c(N_y) = M_c - 𝒜 N_y^-λ, where λ is the shift exponent <cit.> and M_c = √(3). As can be seen in Fig. <ref>, the results are compatible with λ=3 and 𝒜 = 20.1±0.1; the latter is empirically close to 𝒜 ≃ e^3, as inferred from the logarithmic plot. In Fig. <ref> we plot the evolution of M_c as a function of φ. It can be seen that the results are in excellent agreement with the field-theory prediction M_c = √(3)sinφ for t_1 = 1 and t_2 = 1/3. In a similar way we can track the evolution of λ and 𝒜 for different values of φ. The prefactor 𝒜 also varies sinusoidally, as shown in Fig. <ref>(a). In contrast, the exponent λ stays constant, as illustrated in Fig. <ref>(b). Combining these results, Eq. (<ref>) can be recast as M_c(N_y) = M_c(1 - 𝒜̃ N_y^-3), where M_c = √(3)sinφ and 𝒜̃ = e^3/√(3) for fixed t_1 = 1 and t_2 = 1/3.

§.§ Irrelevant contribution

On the basis of the naïve ansatz given by Eq. (<ref>) one would expect the relation λ = 1/ν to hold <cit.>. This can be seen by minimizing the naïve scaling relation Λ = f(m N_y^1/ν) and applying f^-1, which renders M_c(N_y) = M_c(1 + f^-1(Λ_min) N_y^-1/ν). In the case of the low-energy Dirac theory with ν=1 this would yield λ=1; this differs from the extracted value of λ = 3, as shown in Fig. <ref>. As we will see below, this discrepancy is due to the absence of irrelevant scaling variables in Eq. (<ref>). The need for irrelevant corrections has been found in other disordered systems <cit.>. These corrections are also required to accommodate the vertical drift of Λ at the critical point at M_c = √(3) which is present in Fig. <ref>. In view of this, we generalise the ansatz in Eq. (<ref>) by taking into account a single irrelevant contribution: Λ = F(m N_y^1/ν, ψ N_y^-y), where y>0 is an exponent associated with the irrelevant field ψ(m). In the case of a finite-size system, the scaling function F is analytic and can be Taylor expanded around the vanishing irrelevant contribution ψ N_y^-y = 0: Λ = F_0(m N_y^1/ν) + ψ N_y^-y F_1(m N_y^1/ν) + 𝒪((ψ N_y^-y)^2). This allows the drift shown in Figs. <ref>(a) and <ref>(b) to converge to the critical point M_c with an exponent λ≠1. We note that m ∼ N_y^-λ at the minimum of Λ. In order to keep the argument m N_y^1/ν from Eq. (<ref>) finite at the minimum, we demand N_y^-λ+1/ν < ∞, yielding λ ≥ 1/ν.

§.§ Extraction of the correlation length exponent ν

Due to the quadratic variation of Λ in the vicinity of the critical point, as illustrated in Fig.
<ref>, it is numerically preferable to focus on the scaling of the second derivative ∂^2_mΛ with system size. This can be obtained by explicit differentiation of Eq. (<ref>): ∂^2_mΛ|_M_c = N_y^2/ν ∂^2_m F_0(0) + N_y^-y ∂_m^2 (ψ(m) F_1(m N_y^1/ν))|_m=0. The first term in Eq. (<ref>) scales as ∂^2_mΛ|_M_c ∼ N_y^2/ν, which is quadratic for ν=1. The second term scales as N_y^2/ν - y with y>0, which is subleading in the thermodynamic limit. For the large system sizes explored in Fig. <ref>, the subleading corrections are indeed found to be insignificant. The first term yields ν = 0.999±0.001 ≃ 1, as illustrated in Fig. <ref>. This is in agreement with the correlation length exponent ν=1 of the Dirac theory.

§.§ Extraction of the irrelevant scaling exponent y

In order to extract the scaling exponent y of the irrelevant field ψ we focus on the vertical shift of the dimensionless gap at the critical point, Λ|_M_c. This describes the vertical shift in Fig. <ref>. The scaling with system size can be inferred from Eq. (<ref>) by setting m = 0: Λ|_M_c = F_0(0) - N_y^-y ψ(0) F_1(0). This is of a similar form to Eq. (<ref>). In the thermodynamic limit N_y → ∞, Eq. (<ref>) converges to Λ|_M_c = F_0(0) ≡ Λ_0. In the case of the clean Haldane model, the limiting value of Λ_0 can be inferred from the Dirac theory. To see this, we note that the dispersion relation in the vicinity of the Dirac point is given by E_+(δk_y) = |cħ δk_y| with c = 3t_1 a/(2ħ), where δk_y is the distance from the Dirac momentum. In the presence of TBCs, the minimum of the dispersion relation occurs at a value of δk_y that is half of the momentum spacing. The minimum value of E_+ is given by min E_+ = 3π/(2√(3) N_y) for t_1=1, yielding Λ_0 = 3π/(2√(3)) ≃ 2.72. On the basis of Eq. (<ref>) it can be seen that the finite-size deviation from Λ_0 scales as |Λ|_M_c - Λ_0| ∼ N_y^-y. The data shown in Fig. <ref> are consistent with Λ_0 = 3π/(2√(3)) and the exponent y=1.

§ SCALING RELATION

A notable feature of the results obtained above is that the values λ=3, y=1 and ν=1 satisfy the relation λ = 2/ν + y. The latter can be seen from the condition for the finite-size critical point, ∂_m Λ|_m_c(N_y) = 0. Taylor expansion of ∂_mΛ in powers of m around the critical point m=0 yields 0 = ∂_mΛ|_m_c(N_y) = ∂_m Λ|_m=0 + m_c(N_y) ∂^2_m Λ|_m=0, with m_c(N_y) ∼ N_y^-λ. The scaling of ∂_m Λ and ∂^2_m Λ at the critical point m=0 can be obtained from Eq. (<ref>) by explicit differentiation. We find that the first derivative at m=0 vanishes in the thermodynamic limit as ∂_m Λ|_m=0 ∼ N_y^-y, as illustrated in Fig. <ref>. Substituting Eq. (<ref>) into Eq. (<ref>) yields the scaling relation λ = 2/ν + y, in agreement with the calculated values of the exponents λ=3 and ν = y = 1.

§.§ Scaling collapse

To validate our results, we collapse the data shown in Fig. <ref> onto a single curve; see Fig. <ref>. Explicitly, we consider the shifted variable Λ̃: Λ̃ = Λ - ψ N_y^-y F_1(m N_y^1/ν) to remove the vertical shift in Fig. <ref>. It follows that Λ̃ is a function of m N_y^1/ν, as shown in Fig. <ref>. In practice, the irrelevant contribution F_1 described by Eq. (<ref>) can be Taylor expanded up to second order in m and removed appropriately.
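The extraction of ν described above reduces to two power-law fits: a quadratic fit of Λ(m) near m=0 for each width N_y, followed by a log-log fit of the curvature against N_y. A minimal sketch (Python/NumPy; illustrative names, with subleading corrections neglected as in the leading term of Eq. (<ref>)):

    import numpy as np

    def extract_nu(m, Lambda_of_Ny):
        # Lambda_of_Ny: {N_y: Lambda values on a common grid m near m = 0},
        # with m = (M - M_c)/M_c measured from the infinite-size critical point
        Ny = np.array(sorted(Lambda_of_Ny))
        curv = np.array([2.0 * np.polyfit(m, Lambda_of_Ny[n], 2)[0] for n in Ny])
        # leading behaviour: d^2 Lambda / dm^2 ~ N_y^{2/nu}
        slope = np.polyfit(np.log(Ny), np.log(curv), 1)[0]
        return 2.0 / slope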
§ DISORDERED HALDANE MODEL

Having investigated the critical properties of the clean Haldane model, we now consider the model in the presence of on-site disorder. In the case of an infinite strip N_x → ∞, the eigenvalues of the transfer matrix for a fixed realisation of disorder converge to their disorder averages; this is due to the consecutive multiplication of single-slice transfer matrices <cit.>. In practice, we use a long but finite strip with N_x = 10^6 or N_x = 10^7.

§.§ Weak Disorder

In Fig. <ref> we plot the evolution of the dimensionless inverse gap Λ for disorder strength W=1, on transiting from the topological to the non-topological phase. As found in the clean case, the data show a clear minimum which drifts with the system size N_y. On the basis of Eq. (<ref>) we find M_c(∞) = 1.833±0.002, and the shift exponent λ=2.7±0.3, as illustrated in Fig. <ref>. The location of the critical point differs from the critical point of the clean system at M_c = √(3) ≃ 1.73. In contrast to the clean case, the vertical drift of Λ at the critical point M_c = 1.833 is non-monotonic, as shown in Fig. <ref>. This complicates the extraction of the irrelevant exponent y via Eq. (<ref>). Nonetheless, we can extract the correlation length exponent ν via the scaling of the second derivative ∂_m^2 Λ at the critical point M_c = 1.833 using Eq. (<ref>). As in the clean case, we find that ν can be obtained from the leading term. This yields ν = 1.05±0.01, as illustrated in Fig. <ref>. The value of the exponent differs from that of the clean Haldane model, where ν=1. In order to verify the extracted value of ν≠1, we consider scaling collapse of the data. Due to the non-monotonic vertical shift shown in Fig. <ref>, we focus on the scaling in the horizontal direction. This can be done by shifting the minimum of each of the curves in Fig. <ref> to a common origin. Explicitly, we define Λ̄ = Λ(M - M_c(N_y)) - Λ_min, where Λ_min is the value of the inverse gap at the minimum. Replotting Λ̄ as a function of m N_y^1/ν with ν=1.05, it can be seen that the data collapse onto a single curve, as shown in Fig. <ref>. This is consistent with a correlation length exponent that differs from the clean Haldane model where ν=1.

§.§ Strong Disorder

Having found a non-trivial value of the correlation length exponent ν≠1 in the disordered Haldane model with W=1, we now consider a stronger disorder strength, W=2.6. The inverse correlation length ξ^-1 is extracted from N_x = 10^6 transfer matrix multiplications in Eq. (<ref>). In Fig. <ref>, we plot the dimensionless gap Λ = (ξ/L_y)^-1 on transiting from the topological to the non-topological phase. The minimum of Λ drifts with increasing system size N_y towards the critical point at M_c = 2.352±0.002, as illustrated in Fig. <ref>. The deviation of the finite-size critical point from the infinite-size critical point is described by the power law (<ref>) with the shift exponent λ=2.1±0.4; see inset of Fig. <ref>. The vertical drift of Λ at the critical point M_c = 2.352 with system size is monotonic and downwards for W=2.6. The drift is well described by Eq. (<ref>) with y=2, as illustrated in Fig. <ref>. We estimate the limiting value of the infinite-size dimensionless gap ratio as Λ_0 = 0.81±0.02. This is close to values previously reported for the plateau transitions in the Quantum Hall Effect <cit.>. In order to extract the exponent ν, we examine the finite-size scaling of the second derivative ∂_m^2 Λ at the critical point corresponding to m=0. In Fig. <ref> we show that the derivative scales as ∂_m^2 Λ ∼ N_y^2/ν with ν = 2.37±0.03. This is in accordance with the leading order contribution in Eq.
(<ref>) without subleading corrections. The value of the exponent for this specific disorder strength, W = 2.6, is numerically close to that of the conjectured exponent ν = 7/3 ≃ 2.33 for the Integer Quantum Hall Effect. It also agrees with recent numerical results for disordered Dirac fermions at E = 0 <cit.>. It is also consistent with results for a geometrically distorted network model <cit.> and experiment <cit.>. As discussed below and in the main text, our results for the Haldane model appear to approach something closer to ν∼5/2 in the strong disorder regime. This is compatible with the spread in exponents presented in Ref. <cit.>.

§.§ Intermediate Disorder

Having examined the cases of weak and strong disorder, we turn our attention to the intermediate disorder regime. In Fig. <ref> we plot the dimensionless gap Λ = (ξ/L_y)^-1 on transiting from the topological to the non-topological phase with W=1.4. Clear minima can be seen in the vicinity of M_c = 1.932±0.003. In Fig. <ref> we show the drift of the critical point with increasing system size N_y. The results yield the shift exponent λ = 1.30±0.05; see inset. In contrast, the vertical displacement in Fig. <ref> shows a strong downward trend with increasing system size N_y. The value at the critical point Λ|_M_c decreases linearly without saturation for the system sizes considered; see Fig. <ref>. In order to extract the correlation length exponent ν, we examine the finite-size scaling of the second derivative ∂_M^2 Λ at the infinite-size critical point with M_c = 1.932. As can be seen in Fig. <ref>, the second derivative scales as ∂_M^2 Λ|_M_c ∼ N_y^2/ν with ν=1.21±0.03 for the largest system sizes N_y > 40. For smaller system sizes it is necessary to include the subleading corrections in Eq. (<ref>). The value of the exponent is in agreement with that derived via the Chern marker, ν=1.23±0.05. Further verification of this result is obtained by rescaling the horizontal axis in order to see data collapse. In Fig. <ref> we plot the centred variable Λ̄ as defined in Eq. (<ref>) as a function of m N_y^1/ν with ν=1.21. It is readily seen that the data collapse onto a single curve.

§ VARIATION OF EXPONENTS

Having provided evidence that the critical exponents λ, y and ν vary with the disorder strength, we plot their evolution in Figs. <ref>(a)-(c). As discussed in the main text, the exponent ν interpolates between that of free Dirac fermions with ν=1 and a value ν∼5/2 associated with the plateau transition in the IQHE. As can be seen in Fig. 7(a) of the main text, the transfer matrix results are in good agreement with those obtained via the Chern marker. The departures at strong disorder are attributed to finite-size effects in the Chern marker calculations. As shown in Fig. <ref>(a), the exponent ν has little dependence on the maximum system size L_max in the weak disorder regime. However, it exhibits a slower growth for stronger disorder; see Fig. <ref>(b). On the basis of a naïve extrapolation we obtain ν = 1.99±0.12 for L_max → ∞, in agreement with the transfer matrix results. The evolution towards the limiting value is well described by a power-law; see Fig. <ref>(c). This reduces the discrepancy in Fig. 7(a) of the main text. In tandem with the variation of ν, we also observe a variation in λ and y; see Figs <ref>(b) and (c). In view of the variation of the exponents with the disorder strength, it is instructive to look for scaling collapse as a function of W. In Fig.
<ref> we show the variation of Λ on transiting from the topological to the non-topological phase for different values of W and a fixed system size with N_y=59. It can be seen that the curvature in the vicinity of the critical point decreases with increasing disorder strength. For large system sizes N_y, the curvature in the vicinity of the critical point is given by Eq. (<ref>): ∂^2_m Λ ≃ A(W) N_y^2/ν(W), where the coefficient A(W) is independent of N_y. Within the parabolic approximation for Λ one obtains Λ ∼ Λ_min + (m √(A(W)) N_y^1/ν(W))^2/2, where Λ_min is the value of Λ at the minimum. Using Eq. (<ref>) we can replot the data in Fig. <ref> as a function of the rescaled variable m √(A(W)) N_y^1/ν(W), where ν and A are functions of disorder strength. As shown in Fig. <ref>(a), the data collapse in the vicinity of the critical point. Similar behavior is also observed for N_y=139, as shown in Fig. <ref>(b). The scaling collapse for two distinct system sizes acts as a useful cross-check on the evolution of ν with disorder strength.

§ DISORDER-DRIVEN TRANSITION

Having examined the M-driven transitions at fixed disorder strength, we now turn our attention to the disorder-driven transitions at fixed M. In Fig. <ref>, we plot the dimensionless gap Λ = (ξ/L_y)^-1 as a function of the disorder strength on transiting from the topological to the non-topological phase with M=0. The data show a clear minimum in the vicinity of a critical disorder strength at W_c = 3.56±0.01. In contrast to the M-driven transitions, we do not see a drift of the minimum with increasing system size; see the inset of Fig. <ref>. In order to extract the critical exponent ν, we consider the finite-size scaling of the second derivative ∂_W^2 Λ evaluated at the critical point W_c = 3.56; see Fig. <ref>. It can be seen that the derivative scales as ∂_W^2 Λ|_W_c ∼ N_y^2/ν with ν = 2.47±0.09. This is in accordance with the leading contribution in Eq. (<ref>) without subleading corrections. The value of the exponent is in agreement with that obtained via the Chern marker, ν=2.42±0.11. We may further verify the extracted value ν=2.47 by rescaling the data in Fig. <ref>. In Fig. <ref> we plot the centred variable Λ̄ as defined by Eq. (<ref>) as a function of w N_y^1/ν, where w = (W-W_c)/W_c is the reduced disorder strength. The data collapse onto a single curve for ν = 2.47. Having established results for the disorder-driven transition with M=0, we turn our attention to the case with M=1. In Fig. <ref>(a) we plot the evolution of Λ across this quantum phase transition. As found for M=0, the minimum exhibits negligible drift with increasing system size; see Fig. <ref>(b). The location of the critical point at W_c = 3.57±0.01 coincides with that for M=0. This is consistent with the vertical phase boundary shown in Fig. 3 from the main text. The scaling of the second derivative ∂_W^2 Λ|_W_c ∼ N_y^2/ν at the critical point yields ν = 2.47±0.08, in conformity with the result for M=0; see Fig. <ref>(c). In Fig. <ref>(d) we replot the data in terms of the centred variable Λ̄ and w N_y^1/ν. The data collapse onto a single curve with ν = 2.47, further confirming the result. This suggests that the disorder-driven transitions across the vertical boundary in Fig. 3 of the main text are in the same universality class.

§ FLUCTUATIONS

Having established the scaling properties of the disorder-averaged Chern marker c̅ and its relation to the transfer matrix approach, we now examine the fluctuations of the Chern marker (δc)^2 = (c - c̅)^2.
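Both the fluctuation estimate and the exponent fit used in the following subsections can be sketched as below (Python/NumPy; the names are illustrative, with samples holding the centre-cell markers over disorder realisations).

    import numpy as np

    def marker_variance(samples):
        # sample-to-sample fluctuations (delta c)^2 of the centre-cell marker
        return np.mean((samples - samples.mean())**2)

    def extract_kappa(peaks_by_L):
        # peaks_by_L: {L: peak value of (delta c)^2 near the transition};
        # fit (delta c)^2_max ~ L^kappa on a log-log scale
        L = np.array(sorted(peaks_by_L))
        p = np.array([peaks_by_L[l] for l in L])
        return np.polyfit(np.log(L), np.log(p), 1)[0]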
§ FLUCTUATIONS

Having established the scaling properties of the disorder-averaged Chern marker c̄ and its relation to the transfer matrix approach, we now examine the fluctuations of the Chern marker, (δc)^2 = (c - c̄)^2.

§.§ Mass-driven Transitions

In Fig. <ref>(a) we plot the evolution of (δc)^2 on transiting from the topological to the non-topological phase, with W = 1.8 held fixed. The data show a peak on approaching the critical point at M_c ∼ 2.09. The maximum of (δc)^2 exhibits a power-law scaling with increasing system size, (δc)^2_max ∼ L^κ with κ = 0.62 ± 0.02; see the inset of Fig. <ref>(a). Assuming the scaling form (δc)^2 ∼ L^κ g(ξ/L) ∼ L^κ g̃((M - M_c)L^{1/ν}) with M_c = 2.09 and ν = 1.7, the data collapse in the vicinity of the transition; see Fig. <ref>(b). The result for κ differs from that at W = 1, where κ = 0.36 ± 0.02; see Fig. 6 in the main text. Repeating the same analysis for different disorder strengths we find κ = 0.35(4), 0.35(3), 0.35(9), 0.37(6), 0.61(7) and 0.67(6) for W = 0.2, 0.6, 1, 1.4, 1.8 and 2.0, respectively. The exponent interpolates between κ ∼ 0.35 in the weak disorder regime and κ ∼ 0.65 in the strong disorder regime, as shown in Fig. 7(b) in the main text. The variation of κ mirrors that of ν.

§.§ Disorder-driven Transition

Having established the scaling of the fluctuations of the Chern marker for the mass-driven transitions, we now examine the disorder-driven transition at M = 0. In Fig. <ref>(a) we plot the evolution of (δc)^2 across the transition at W_c ∼ 3.6. The data show a maximum on approaching the transition. The value of (δc)^2 at the peak grows with increasing system size and is well described by the power-law scaling (δc)^2_max ∼ L^κ with κ = 0.67 ± 0.02; see the inset of Fig. <ref>(a). This value is close to that of the mass-driven transitions in the strong disorder regime; see Fig. 7(b) in the main text. Assuming the scaling form (δc)^2 ∼ L^κ f(ξ/L) ∼ L^κ f((W - W_c)L^{1/ν}), in Fig. <ref>(b) we plot (δc)^2 L^{-κ} versus (W - W_c)L^{1/ν}, where W_c = 3.58 and ν = 2.42 are obtained via the scaling of the Chern marker. The data collapse onto a single curve in the vicinity of the critical point.
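The peak-height fit for κ follows the same pattern as the fits for ν. The sketch below, with placeholder numbers rather than the data of this work, fits (δc)^2_max ∼ L^κ and maps a raw curve onto the assumed scaling form.

import numpy as np

# Hypothetical peak heights of the Chern marker variance for several
# linear system sizes L (placeholders, not the data of this work).
L = np.array([20, 30, 40, 60])
var_peak = np.array([0.91, 1.19, 1.45, 1.90])

kappa = np.polyfit(np.log(L), np.log(var_peak), 1)[0]
print(f"estimated kappa = {kappa:.2f}")

def rescale(W, var_c, L, W_c=3.58, nu=2.42):
    # Map a raw curve onto (delta c)^2 L^{-kappa} vs (W - W_c) L^{1/nu}.
    return (W - W_c) * L ** (1.0 / nu), var_c * L ** (-kappa)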
References

[SM_Chern_1946] S.-S. Chern, Ann. Math. Second Ser. 47, 85 (1946).
[SM_Resta2011] R. Bianco and R. Resta, Phys. Rev. B 84, 241106 (2011).
[SM_Privitera_2016] L. Privitera and G. E. Santoro, Phys. Rev. B 93, 241406 (2016).
[SM_Caio_2019] M. D. Caio, G. Möller, N. R. Cooper, and M. J. Bhaseen, Nature Physics 15, 257 (2019).
[SM_MacKinnon_1981] A. MacKinnon and B. Kramer, Phys. Rev. Lett. 47, 1546 (1981).
[SM_Pendry_1992] J. B. Pendry, A. MacKinnon, and P. J. Roberts, Proc. R. Soc. Lond. 437, 67 (1992).
[SM_Chua_2016] V. Dwivedi and V. Chua, Phys. Rev. B 93, 134304 (2016).
[SM_Mackinnon_1983_ST] A. MacKinnon and B. Kramer, Z. Phys. B 53, 1 (1983).
[SM_Zawadzki_2017] K. Zawadzki, I. D'Amico, and L. N. Oliveira, Braz. J. Phys. 47, 488 (2017).
[SM_Fisher_1972] M. E. Fisher and M. N. Barber, Phys. Rev. Lett. 28, 1516 (1972).
[SM_Cardy_1996] J. Cardy, Scaling and Renormalization in Statistical Physics (Cambridge University Press, 1996).
[SM_Slevin_1999] K. Slevin and T. Ohtsuki, Phys. Rev. Lett. 82, 382 (1999).
[SM_Slevin_2009] K. Slevin and T. Ohtsuki, Phys. Rev. B 80, 041304 (2009).
[SM_Beck_2021] A. Beck and M. Goldstein, Phys. Rev. B 103, L241401 (2021).
[SM_Sbierski_2021] B. Sbierski, E. J. Dresselhaus, J. E. Moore, and I. A. Gruzberg, Phys. Rev. Lett. 126, 076801 (2021).
[SM_Kramer_2010] B. Kramer, A. MacKinnon, T. Ohtsuki, and K. Slevin, Int. J. Mod. Phys. B 24, 1841 (2010).
[SM_Gruzberg_2017] I. A. Gruzberg, A. Klümper, W. Nuding, and A. Sedrakyan, Phys. Rev. B 95, 125414 (2017).
[SM_Li_2009] W. Li, C. L. Vicente, J. S. Xia, W. Pan, D. C. Tsui, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 102, 216801 (2009).
[SM_Dresselhaus_2021] E. J. Dresselhaus, B. Sbierski, and I. A. Gruzberg, Ann. Phys. 435, 168676 (2021).
Source: arXiv:2312.16689v1 (http://arxiv.org/abs/2312.16689v1). J. Mildner, M. D. Caio, G. Möller, N. R. Cooper, and M. J. Bhaseen, "Topological Phase Transitions in the Disordered Haldane Model". Primary category: cond-mat.str-el; also cond-mat.dis-nn, cond-mat.mes-hall, quant-ph. Submitted 27 December 2023.
Performance Comparison of Session-based Recommendation Algorithms based on GNNs

Faisal Shehzad (0000-0001-6239-8198) and Dietmar Jannach (0000-0002-4698-8507)
University of Klagenfurt, Klagenfurt, Austria
[email protected], [email protected]

January 14, 2024

In session-based recommendation settings, a recommender system has to base its suggestions on the user interactions that are observed in an ongoing session. Since such sessions can consist of only a small set of interactions, various approaches based on Graph Neural Networks (GNN) were recently proposed, as they allow us to integrate various types of side information about the items in a natural way. Unfortunately, a variety of evaluation settings are used in the literature, e.g., in terms of protocols, metrics and baselines, making it difficult to assess what represents the state of the art. In this work, we present the results of an evaluation of eight recent GNN-based approaches that were published in high-quality outlets. For a fair comparison, all models are systematically tuned and tested under identical conditions using three common datasets. We furthermore include k-nearest-neighbor and sequential rules-based models as baselines, as such models have previously exhibited competitive performance results for similar settings. To our surprise, the evaluation showed that the simple models outperform all recent GNN models in terms of the Mean Reciprocal Rank, which we used as an optimization criterion, and were only outperformed in three cases in terms of the Hit Rate. Additional analyses furthermore reveal that several other factors that are often not deeply discussed in papers, e.g., random seeds, can markedly impact the performance of GNN-based models. Our results therefore (a) point to continuing issues in the community in terms of research methodology and (b) indicate that there is ample room for improvement in session-based recommendation.

§ INTRODUCTION

Recommender systems play a critical role in modern platforms by recommending personalized content to users according to their past preferences. Conventional recommender systems, in particular ones based on collaborative filtering, rely heavily on rich user profiles and long-term historical interactions. Such systems may however perform poorly in real-world applications, where users interact with the service without logging in (<cit.>). Consequently, session-based recommenders (SBRS), which recommend a next item solely based on an active session, received immense attention from industry and academia in recent years <cit.>.

From a technical point of view, most recent session-based algorithms are based on neural methods. A landmark work in this area is GRU4Rec <cit.>, which is often used as a baseline to benchmark newly developed models. One main characteristic of GRU4Rec and many subsequent models is that they are designed to operate solely on user–item interaction data. In the more recent literature, Graph Neural Networks (GNN) however received increased attention in the context of SBRS, because they allow us to consider various types of heterogeneous information in the learning process in a natural way. Examples of such recent works published in top-tier conferences in the last three years include <cit.>.
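To illustrate the general idea, the sketch below shows the typical first step of such models: turning a single session into a small directed graph of item transitions with row-normalized edge weights. This is the generic construction popularized by SR-GNN-style approaches, not the exact code of any specific model compared later.

from collections import defaultdict

def session_to_graph(session):
    # Return the unique items of a session and a normalized adjacency dict.
    nodes = sorted(set(session))
    index = {item: i for i, item in enumerate(nodes)}
    edges = defaultdict(float)
    for a, b in zip(session, session[1:]):      # consecutive clicks -> edge
        edges[(index[a], index[b])] += 1.0
    # Normalize outgoing weights per node (row-normalized adjacency).
    out_sum = defaultdict(float)
    for (i, _), w in edges.items():
        out_sum[i] += w
    adj = {e: w / out_sum[e[0]] for e, w in edges.items()}
    return nodes, adj

nodes, adj = session_to_graph([42, 7, 42, 13, 7])
print(nodes, adj)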
Quite surprisingly, for the category of neural SBRS models that operate solely on user–item interactions like GRU4Rec, various studies have shown that simpler methods, e.g., based on nearest-neighbor techniques, can lead to competitive accuracy results and in many cases even outperform the more complex neural models (<cit.>). Similar observations were made for traditional top-n recommendation tasks (<cit.>), where the latest neural models were found to be outperformed by longer-existing approaches, e.g., based on matrix factorization.[Such phenomena were also found outside the area of recommender systems, e.g., in information retrieval or time series forecasting (<cit.>).] Various factors that can contribute to this phenomenon of 'phantom progress' were discussed in the literature <cit.>. Besides the issue that the baseline algorithms in many cases may not have been properly tuned in the reported experiments <cit.>, a central issue lies in the selection of the baseline models themselves. In many published research works, only very recent neural models are considered, and a comparison with well-tuned longer-existing models is often missing.

In this present work, we examine to what extent such phenomena can be found in the most recent literature on SBRS that are built on Graph Neural Networks. For this purpose, we benchmarked eight GNN models, all recently published at top-tier venues, against simple approaches. All algorithms are reproduced under identical settings by incorporating the original code published by the authors[This is important, as third-party implementations may be unreliable <cit.>.] into the session-rec SBRS evaluation framework <cit.>. The results of our study are again surprising. They show that under our independent evaluation, all of the analyzed GNN methods are outperformed by simple techniques, which do not even use side information, in terms of the Mean Reciprocal Rank, which was also our hyperparameter optimization criterion. Only in some situations were GNN-based models favorable in terms of the Hit Rate. Overall, the results indicate that the problem of the inclusion of too weak baselines still exists in the recent SBRS literature.

While examining the research literature, we encountered a number of additional bad practices that may contribute to the apparently somewhat limited progress in this area. First, we find that researchers often use the same embedding size for all compared models for a 'fair comparison'. Since embedding sizes are however a hyperparameter to tune, such a comparison may instead be rather unfair (<cit.>). Furthermore, the analyses in <cit.> showed that in a substantial fraction of today's research, the proposed models are tuned on test data instead of using a held-out validation dataset, potentially leading to data leakage and overfitting issues that produce overly optimistic results <cit.>. Finally, as reported in <cit.>, even the choice of random seeds can have a non-negligible impact on the observed results.

In order to understand to what extent such bad practices may impact the performance results obtained by GNN models, we conducted a series of additional analyses using a subset of the models and datasets that were used in our main experiments.
Our results show that the described factors can have a marked impact on the performance of the GNN-based models, thus potentially further contributing to a limited reliability of the reported progress in the literature.

§ RESEARCH METHODOLOGY

As discussed in earlier works <cit.>, researchers often rely on a variety of experiment setups, using different protocols, metrics, datasets, pre-processing steps, and baselines. The primary objective of the study in this present paper is to provide an independent evaluation of recent GNN-based algorithms under identical conditions, i.e., the same protocol, datasets, and metrics.

§.§ Algorithms Compared

GNN-based Models: To ensure that our analysis is not focused on a few hand-picked examples, we followed a semi-systematic approach for the selection of algorithms to compare, using the following criteria:

* Publication outlets: We manually browsed important outlets for recommender system research (e.g., conferences such as RecSys or SIGIR and journals with a high impact factor) to identify recent works that propose GNN-based models for session-based recommendation.
* Reproducibility: We then only considered works for which the code was shared. We analyzed the shared repositories and contacted the authors if any part of the code was missing.[We considered articles to be non-reproducible if the authors did not reply after sending reminders.]

Ultimately, we were able to identify eight GNN models that could be trained and evaluated using the provided code and which we integrated into the session-rec framework.[<https://github.com/rn5l/session-rec>] The eight models are briefly described in Table <ref>.

Baselines: We included four non-neural baseline models that were also used in previous performance comparisons in <cit.> and <cit.>, where it turned out that such simpler models can sometimes be quite difficult to beat. The baselines are briefly described in Table <ref>.

§.§ Datasets and Preprocessing

Datasets: We inspected the relevant literature from high-quality outlets to identify the most frequently used datasets. While a variety of datasets is used, the most prominent ones include RSC15 (RecSys Challenge 2015), DIGI (Diginetica), and RETAIL (Retailrocket).[<https://www.kaggle.com/datasets/chadgostopp/recsys-challenge-2015>, <https://competitions.codalab.org/competitions/11161>, <https://www.kaggle.com/datasets/retailrocket/ecommerce-dataset>] Another important reason for choosing these datasets is that they contain category information, which is used by some GNN-based models. Table <ref> provides summary statistics for the selected datasets.

Data filtering and splitting: We adopt common data preprocessing practices from the literature (<cit.>). Sessions of length one, items that appear fewer than five times in the dataset, and items that occur in the test data but not in the training data are filtered out. From a training perspective, RSC15 consists of a large number of sessions, making it difficult to systematically tune all GNN models on such a large dataset without access to massive GPU resources. Since previous works (<cit.>) show that using a recent fraction of the data leads to competitive or even superior results, we use the commonly used 1/12 and 1/64 fractions of the RSC15 dataset. For data splitting, we follow the common practice described in (<cit.>). For DIGI, the sessions of the last seven days are put aside as test data and the remaining sessions are used for hyperparameter tuning and training. For RSC15 and RETAIL, the last-day sessions are used as test data.
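The following sketch illustrates these filtering and splitting rules. It assumes a pandas DataFrame with columns SessionId, ItemId and Time (in seconds); these column names are assumptions for illustration and are not dictated by the datasets themselves.

import pandas as pd

def preprocess(df, min_item_support=5, min_session_length=2):
    # Drop rare items, then sessions that became too short.
    item_counts = df["ItemId"].value_counts()
    df = df[df["ItemId"].isin(item_counts[item_counts >= min_item_support].index)]
    session_lengths = df.groupby("SessionId").size()
    keep = session_lengths[session_lengths >= min_session_length].index
    return df[df["SessionId"].isin(keep)]

def split_last_days(df, n_days=7):
    # Use the sessions of the last n_days as test data (by session end time).
    session_end = df.groupby("SessionId")["Time"].max()
    cutoff = df["Time"].max() - n_days * 86400    # Time assumed in seconds
    test_ids = session_end[session_end > cutoff].index
    train = df[~df["SessionId"].isin(test_ids)]
    test = df[df["SessionId"].isin(test_ids)]
    # Remove test items unseen in training (real pipelines then re-filter
    # sessions that became too short).
    test = test[test["ItemId"].isin(train["ItemId"].unique())]
    return train, test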
Discussion. While our choices for data preprocessing align with what is common in the literature, our analysis of the relevant papers revealed that there is no agreed standard. Some works, for example, only consider items that appear in at least 30 sessions; others filter out sessions that contain more than 20 items. In yet another set of papers, data preprocessing is not mentioned at all. All of this further contributes to the difficulty of determining the state of the art. In terms of the used datasets, we focused on three widely used ones. We note that each of the eight considered GNN-based models listed in Table <ref> used at least one of these three datasets. Further experiments with other datasets are left for future work.

§.§ Evaluation Metrics and Tuning

Metrics and Evaluation Protocol: We rely on the Mean Reciprocal Rank and Hit Rate (MRR@K, HR@K) as measures that are widely used[Accuracy metrics such as Precision, Recall, or NDCG are used as well in session-based recommendation, but these metrics are often highly correlated (<cit.>).] in the relevant literature <cit.>. Furthermore, we report two beyond-accuracy metrics, coverage and popularity. We found that different evaluation protocols are used in the original papers that proposed the eight GNN-based models, e.g., leaving out the last item of each test session. In our evaluation, we relied on the commonly used procedure of incrementally "revealing" the items in the session. This entails that all items but the first one in a session are part of the ground truth in the evaluation at some stage; see also (<cit.>).

Hyperparameter optimization: We tune the hyperparameters of all models using the training dataset and validate them on a subset of the training dataset. As is commonly known, the tuning process of GNN models can be time-consuming, in particular when a larger hyperparameter space is searched. Therefore, we relied on a random search process with 25 to 60 training rounds, depending on the time complexity of the selected models <cit.>. All selected models are optimized for the three datasets using MRR@20 as the target metric. The ranges and tuned values of the hyperparameters can be found in an online repository, where we also share all data and the code used for pre-processing, tuning, and evaluation for reproducibility.[<https://faisalse.github.io/SessionRecGraphFusion/>]
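A minimal sketch of this protocol is shown below; recommend(prefix, k) stands for any of the compared models and is a placeholder, not part of the session-rec API.

def evaluate(sessions, recommend, k=20):
    # HR@k and MRR@k under the iterative "revealing" protocol: every item
    # after the first in a test session serves as ground truth once.
    hits, rr = [], []
    for session in sessions:
        for t in range(1, len(session)):
            prefix, target = session[:t], session[t]
            ranked = list(recommend(prefix, k))   # top-k items, best first
            if target in ranked:
                hits.append(1.0)
                rr.append(1.0 / (ranked.index(target) + 1))
            else:
                hits.append(0.0)
                rr.append(0.0)
    return sum(hits) / len(hits), sum(rr) / len(rr)   # HR@k, MRR@k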
§ RESULTS

In this section, we first report the main results of our performance comparison in Section <ref> and then provide additional analyses in Section <ref>.

§.§ Performance measures

In this section, we first report the results for the accuracy and beyond-accuracy quality measures and then briefly discuss aspects of computational complexity.

Accuracy. Table <ref> shows the accuracy results for the different datasets at the common list lengths of 10 and 20. We sort the results by MRR@20, as this was our optimization target. In terms of the MRR, we find that for all datasets one of the simple methods leads to the highest accuracy values. In particular, STAN and VSTAN show consistently good results across several datasets. There is however no single 'winner' across datasets. For example, SR works particularly well for the larger RSC15 (1/12) dataset.[The effectiveness of SR on some datasets in terms of MRR was observed also in <cit.>.] Generally, we observe that the differences between the winning simple method and the best GNN-based model are sometimes quite small. Nonetheless, it is surprising that several years after the first publication on the effectiveness of nearest-neighbor techniques in 2017 <cit.>, the most recently published models do not consistently outperform these baselines.

Looking only at the GNN models, we find that there is also no model that is consistently better than the others. The ranking of the GNN-based models varies across datasets and depending on the chosen metric. While this is not surprising, this observation stands in contrast to almost all published works on SBRS, where any newly proposed model is usually reported to outperform all other baselines on all datasets and metrics. Moreover, we find that some of the recent GNN-based models, even though reported to outperform the state of the art in the original papers, actually perform very poorly in our comparison, leading to MRR values that are sometimes more than 50% lower than those of the best models.

In terms of the Hit Rate, we observe that VSTAN is the best model for the RETAIL dataset, and it has highly competitive performance for the RSC15 (1/12) dataset. For the DIGI and RSC15 (1/64) datasets, in contrast, several GNN-based models are better than the simpler approaches. In particular for the DIGI dataset, the margin between the best GNN model and the best simple model is quite large. This suggests that GNN models in fact can be effective and help us to achieve progress over existing models. The observed improvements, at least in our experiment, are however limited to certain configurations and to the Hit Rate. We recall that the Hit Rate was not the optimization goal during hyperparameter tuning.

Considering only the rankings of the GNN-based models, again no clear winner can be found. In some cases, models that use category information (in particular MGS) work quite well. In other cases, the reliance on category information does not seem to be too helpful, and other category-agnostic GNN models lead to higher accuracy.

Table: Accuracy results, sorted by MRR@20. Black circles (∙) indicate simple baselines and empty circles (∘) indicate GNN-based models. In the original typeset table, the highest scores for each metric are printed in bold font and the second-best scores are underlined.
Model        MRR@10  MRR@20  HR@10  HR@20

RETAIL
∙ VSTAN      0.631   0.631   0.971  0.980
∘ SR-GNN     0.629   0.631   0.886  0.931
∙ SFSKNN     0.599   0.603   0.939  0.980
∘ CM-HGNN    0.562   0.568   0.812  0.890
∙ SR         0.553   0.560   0.865  0.959
∘ GNRRW      0.553   0.558   0.804  0.869
∘ MGS        0.553   0.558   0.820  0.878
∘ COTREC     0.551   0.556   0.792  0.861
∙ STAN       0.544   0.548   0.873  0.931
∘ TAGNN      0.540   0.545   0.804  0.882
∘ FLCSP      0.451   0.455   0.800  0.865
∘ GCE-GNN    0.423   0.429   0.596  0.678

RSC15 (1/64)
∙ STAN       0.290   0.296   0.538  0.613
∙ VSTAN      0.286   0.289   0.546  0.595
∘ GCE-GNN    0.278   0.285   0.538  0.633
∘ CM-HGNN    0.278   0.284   0.575  0.650
∙ SFSKNN     0.264   0.264   0.422  0.428
∘ COTREC     0.274   0.279   0.543  0.616
∘ GNRRW      0.269   0.276   0.526  0.618
∘ MGS        0.264   0.270   0.543  0.630
∙ SR         0.263   0.266   0.462  0.506
∘ SR-GNN     0.245   0.251   0.497  0.581
∘ TAGNN      0.195   0.201   0.425  0.509
∘ FLCSP      0.176   0.183   0.393  0.497

RSC15 (1/12)
∙ SR         0.338   0.344   0.557  0.627
∘ GNRRW      0.335   0.342   0.581  0.682
∘ SR-GNN     0.328   0.332   0.563  0.621
∘ MGS        0.326   0.331   0.566  0.642
∙ STAN       0.325   0.330   0.590  0.661
∙ VSTAN      0.325   0.330   0.599  0.673
∙ SFSKNN     0.321   0.325   0.550  0.609
∘ COTREC     0.307   0.312   0.560  0.639
∘ CM-HGNN    0.293   0.299   0.532  0.618
∘ FLCSP      0.284   0.290   0.480  0.560
∘ TAGNN      0.281   0.288   0.520  0.621
∘ GCE-GNN    0.220   0.225   0.391  0.453

DIGI
∙ SFSKNN     0.348   0.351   0.559  0.604
∙ STAN       0.347   0.351   0.529  0.600
∙ VSTAN      0.342   0.346   0.520  0.581
∙ SR         0.333   0.337   0.568  0.617
∘ COTREC     0.330   0.335   0.555  0.637
∘ MGS        0.322   0.329   0.559  0.656
∘ SR-GNN     0.321   0.327   0.591  0.688
∘ GNRRW      0.318   0.324   0.578  0.667
∘ CM-HGNN    0.310   0.316   0.561  0.649
∘ TAGNN      0.279   0.287   0.544  0.645
∘ GCE-GNN    0.227   0.236   0.419  0.553
∘ FLCSP      0.175   0.184   0.398  0.525

Additional quality measures. In Table <ref> we report additional beyond-accuracy metrics that are relevant in practice. Cov@20 refers to the percentage of items that appear at least once in the top-20 recommendation lists for all test sessions. Pop@20 is based on item popularity, measured in terms of the number of times an item appears in the training sessions; the metric reported here is the average (normalized) popularity of the recommended items in the top-20 recommendation lists for all test sessions. The results show that the simple models lead to high coverage values, indicating that these models consider a broader range of items in their recommendations at an aggregate level. In terms of popularity, we find that the simple methods are at about the same level as some of the GNN-based approaches. However, two GNN-based models, namely TAGNN and GCE-GNN, seem to have a stronger and usually undesired tendency to recommend more popular items.
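The two measures can be sketched as follows, assuming that rec_lists holds the top-20 lists generated for all test-session prefixes and train_counts maps items to their training frequencies (both names are assumptions for illustration).

def coverage_at_k(rec_lists, catalog_size):
    # Fraction of catalog items that appear in at least one top-k list.
    recommended = set()
    for lst in rec_lists:
        recommended.update(lst)
    return len(recommended) / catalog_size

def popularity_at_k(rec_lists, train_counts):
    # Average training popularity of recommended items, normalized by the
    # count of the most popular item.
    max_count = max(train_counts.values())
    pops = [train_counts.get(i, 0) / max_count for lst in rec_lists for i in lst]
    return sum(pops) / len(pops)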
Time complexity. To illustrate the complexity of the problem, we report an example of the training time (T-Time) and prediction time (P-Time) for the different models on the DIGI dataset using our hardware[A machine with an AMD EPYC 7H12 64-Core Processor at 2600 MHz (16 cores) and an NVIDIA RTX A4000 WDDM graphics card.] in Table <ref>. On this dataset, GCE-GNN is the slowest model, taking approximately 19 hours to be trained once, and we recall that many training rounds are needed during hyperparameter tuning for each dataset and for each model. We also note that the training time may vary strongly between the GNN models across datasets, depending, e.g., on the number of items, the number of sessions, and the length of the sessions. The kNN-based models, in contrast, do not have a training phase. The time needed for predictions for these models lies roughly in the range of the faster GNN-based models. However, we notice that the COTREC model is particularly slow and needs approximately 2 seconds to generate a single recommendation list, which is prohibitive in practice.

§.§ Additional Robustness Analyses for GNN-based Models

As mentioned earlier, a number of factors can endanger the robustness and reliability of accuracy results reported in research papers, including the choice of embedding sizes and random seeds, or the practice of tuning on test data. When we were screening the literature for recent GNN-based models, we identified 34 relevant articles in top-level outlets.[The list of considered papers can be found in the online material.] An analysis of these papers leads to a number of interesting observations.

Embedding size. We found that the authors frequently fix the embedding size and only tune the other hyperparameters such as the learning rate or dropout rate. In several cases, we also observed that authors choose the embedding size of the baseline models from the original papers, which may have been optimal for different datasets, while tuning the embedding size of their own proposed models. Such a comparison, in which only one of the models is extensively tuned, certainly cannot lead to any reliable insights, as the embedding size in fact is a crucial hyperparameter (<cit.>).

To analyze the potential extent of the problem, we conducted experiments in which we varied the embedding size of the models while keeping the other hyperparameters constant (at their optimal values, as determined in the previous experiment). Since some GNN-based models are computationally expensive, we focused on five training-efficient models and on two datasets. As a representative of a smaller dataset, we chose the RSC15 (1/64) dataset and conducted the experiments using 15 different embedding sizes ranging from 16 to 500. We used the DIGI dataset as an example of a larger dataset; here the experiments were conducted using 10 different embedding sizes ranging from 16 to 250, as we experienced memory problems with larger values.

The outcomes of these experiments are provided in Table <ref>.[Note: the MRR@20 values of some GNN models are not the same as in Table 4 because we decreased the batch sizes to avoid memory issues, which in turn affects the accuracy of the models.] We can observe that some models, depending on the embedding size, can lead to largely different results in terms of MRR@20. This confirms the importance of carefully tuning this central hyperparameter. As the results show, the differences between the best and worst MRR values can exceed 40% in some cases, simply by choosing a bad value for the embedding size. For some of the models, the difference between the minimum and maximum MRR values is less pronounced, but still in a range where researchers would report a substantial improvement over the state of the art when comparing models. We note that no clear pattern emerged across the models; larger embedding sizes are not consistently better than smaller ones.
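The sensitivity check can be sketched as follows; train_and_eval is a placeholder for one full training run of a GNN model with the given embedding size and otherwise fixed hyperparameters, and the stub below only returns synthetic values for illustration.

def train_and_eval(embedding_dim):
    # Placeholder: in the real experiment this trains one model with the
    # given embedding size (other hyperparameters fixed at their tuned
    # values) and returns its MRR@20; a synthetic value stands in here.
    return 0.30 - 0.0002 * abs(embedding_dim - 100)

sizes = [16, 32, 50, 64, 100, 128, 150, 200, 250]   # grid as used for DIGI
results = {d: train_and_eval(d) for d in sizes}
best, worst = max(results.values()), min(results.values())
print(f"best {best:.3f}, worst {worst:.3f}, "
      f"relative drop {(best - worst) / best:.1%}")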
Random seeds. The random seed used in an experiment is certainly not a hyperparameter to tune. Still, its choice can have a notable impact on accuracy results (<cit.>). We therefore conducted experiments to analyze to what extent this is true also for some of the examined GNN-based models. In these experiments, we selected random seed values between 100 and 10,000,000 using a random function. In the shared repositories of the GNN models, random seed values between 2,000 and 3,000 can often be found, but there is no general pattern. In our experiments, we deliberately picked a larger range to examine whether some more uncommon values (outliers) could have an unexpected effect. Again, we considered the RSC15 (1/64) and the DIGI datasets. We generated 100 random seed values for the RSC15 dataset and 35 values for the larger DIGI dataset. The hyperparameters of the examined models were again left at their optimal values. The obtained results for MRR@20 when using different random seeds are shown in Table <ref>, again sorted by the difference between the minimum and the maximum observed value. Once more, we can see that for some models and datasets, the obtained values using the same set of optimized hyperparameters vary strongly. For the RSC15 dataset and the FLCSP method, for example, the obtained accuracy results can be 30% lower than the best value simply because of an unlucky choice of the random seed value.[In this case, the random seed value leading to the worst MRR value was around 7,000,000. All detailed results can be found online.] Clearly, the differences are not as extreme for all models and datasets and may be partly due to the large range of values that we explored. The distribution of the accuracy values is shown in Figure <ref> for the RSC15 (1/64) dataset. The detailed data of the entire experiment can be found in the online material.

Comparing the results of Table <ref> with those obtained when using the random seed values from the original code shared by the authors (Table <ref>), we observe that higher accuracy values can often be achieved when exploring a large number of random seed values. For example, the optimal MRR@20 value for the FLCSP method on the RSC15 (1/64) dataset is 0.183. From the first row in Table <ref> we can see that the worst MRR value was at 0.159 and the best one at 0.250, a substantial improvement obtained merely by changing the random seed. A similar observation can be made in other situations as well, e.g., considering the last row of Table <ref>, where the best MRR@20 value of GNRRW is at 0.352 for the DIGI dataset, whereas the best value in Table <ref> was at 0.324. Note, however, that in some cases the best MRR values obtained by exploring many (somewhat unusual) seed values are lower than when using the more common choices found in the shared code repositories, which means that these common ranges should be explored as well.
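A sketch of this seed-sensitivity experiment is given below; train_and_eval_with_seed is again a placeholder for one full training run, and the synthetic stand-in merely illustrates how the spread between the best and worst seed is summarized.

import random

def train_and_eval_with_seed(seed):
    # Placeholder: seed all libraries (NumPy, PyTorch, ...), train once with
    # the tuned hyperparameters, and return MRR@20; synthetic stand-in here.
    return 0.25 + random.Random(seed).gauss(0, 0.01)

rng = random.Random(0)
seeds = [rng.randint(100, 10_000_000) for _ in range(100)]
scores = [train_and_eval_with_seed(s) for s in seeds]
print(f"min {min(scores):.3f}, max {max(scores):.3f}, "
      f"relative spread {(max(scores) - min(scores)) / max(scores):.1%}")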
Tuning on test data. In a final experiment, we repeated the main experiment from Section <ref>, but this time tuned the hyperparameters of four training-efficient GNN models on the test set, which is unfortunately not an uncommon practice according to <cit.>. The results show that, depending on the dataset, an accuracy improvement of 1.5 to about 2.0% in terms of MRR@20 across models can be achieved. The strongest individual "improvement" was at 5.6%, which confirms the observation from <cit.> that such evaluation practice leads to overly optimistic assessments of the accuracy of a model. The detailed results of the experiment can be found in the online material.

§ CONCLUSION

Our systematic analysis of eight recent GNN-based models for session-based recommendation reveals that methodological problems identified several years ago still exist in this research area. Specifically, researchers tend to rely on experimental evaluations which mostly consider other recent neural models as baselines, but do not include simpler, yet often well-performing baselines in their comparisons. Considering again the 34 recent papers from top-tier outlets that we analyzed in our study, we find that none of them considered models like SFSKNN or VSTAN as baselines. Instead, trivial methods like popularity-based ones or the sometimes poorly performing item-KNN baseline from <cit.> are considered. Only one paper <cit.> considered several non-neural baselines, but this was a particular study which compared the offline and online performance of various models based on the session-rec framework.

In sum, this leads to some worries regarding the true progress that is achieved in the field, in particular when we combine this finding with the other potentially problematic factors that can influence the outcomes reported in research papers, as discussed in the additional analyses in this paper. On the more positive side, our analysis suggests that there is ample room for future improvements of session-based recommendation models. In particular, not too many works exist yet that consider the various types of side information, e.g., item category or price, that are available in some datasets. We believe that approaches based on Graph Neural Networks are indeed promising for such settings with rich data, even though the methods examined in our present study do not seem to exploit their full potential yet.

In terms of future directions, we observe that various recent approaches for session-based recommendation are based on self-attention and the transformer architecture, e.g., (<cit.>). A performance comparison of such models is part of our future work. In a recent study, it was found that sequential transformer-based models like BERT4Rec <cit.> indeed seem to outperform simple models by a substantial gap, at least for larger datasets. An up-to-date performance comparison of session-based models based on attention and transformers is however still missing.

References

[anelli2022top] Anelli, V.W., Bellogín, A., Di Noia, T., Jannach, D., Pomo, C.: Top-n recommendation algorithms: A quest for the state-of-the-art. In: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, pp. 121–131 (2022)
[chamberlain1705neural] Chamberlain, B., Clough, J., Deisenroth, M.: Neural embeddings of graphs in hyperbolic space. In: 13th International Workshop on Mining and Learning with Graphs, held in conjunction with KDD 2017 (2017)
[cremonesi2021progress] Cremonesi, P., Jannach, D.: Progress in recommender systems research: Crisis? What crisis? AI Magazine 42(3), 43–54 (2021)
[Fan2021Lighter] Fan, X., Liu, Z., Lian, J., Zhao, W.X., Xie, X., Wen, J.R.: Lighter and better: Low-rank decomposed self-attention networks for next-item recommendation. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1733–1737 (2021)
[ferraridacrema2020tois] Ferrari Dacrema, M., Boglio, S., Cremonesi, P., Jannach, D.: A troubling analysis of reproducibility and progress in recommender systems research. ACM Transactions on Information Systems 39(2) (2021)
[ferraridacremaetal2019] Ferrari Dacrema, M., Cremonesi, P., Jannach, D.: Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In: Proceedings of the 2019 ACM Conference on Recommender Systems, Copenhagen (2019)
[garg2019sequence] Garg, D., Gupta, P., Malhotra, P., Vig, L., Shroff, G.: Sequence and time aware neighborhood for session-based recommendations: STAN. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1069–1072 (2019)
[hamilton2017representation] Hamilton, W.L., Ying, R., Leskovec, J.: Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin 40(3), 52–74 (2017)
[hidasi2015session] Hidasi, B., Karatzoglou, A., Baltrunas, L., Tikk, D.: Session-based recommendations with recurrent neural networks. In: Proceedings ICLR 2016 (2016)
[Hidasi2023TheEffect] Hidasi, B., Czapp, Á.T.: The effect of third party implementations on reproducibility. In: Proceedings of the 17th ACM Conference on Recommender Systems (2023)
[Hou2022Core] Hou, Y., Hu, B., Zhang, Z., Zhao, W.X.: CORE: Simple and effective session-based recommendation within consistent representation space. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1796–1801 (2022)
[JannachLudewig2017c] Jannach, D., Ludewig, M.: When recurrent neural networks meet the neighborhood for session-based recommendation. In: Proceedings of the 11th ACM Conference on Recommender Systems, Como, Italy (2017)
[Jannach2021SessionRSHB] Jannach, D., Quadrana, M., Cremonesi, P.: Session-based recommendation. In: Ricci, F., Shapira, B., Rokach, L. (eds.) Recommender Systems Handbook (2021)
[kouki2020lab] Kouki, P., Fountalis, I., Vasiloglou, N., Cui, X., Liberty, E., Al Jadda, K.: From the lab to production: A case study of session-based recommendations in the home-improvement domain. In: Proceedings of the 14th ACM Conference on Recommender Systems, pp. 140–149 (2020)
[lai2022attribute] Lai, S., Meng, E., Zhang, F., Li, C., Wang, B., Sun, A.: An attribute-driven mirror graph network for session-based recommendation. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1674–1683 (2022)
[lin2019neural] Lin, J.: The neural hype and comparisons against weak baselines. In: ACM SIGIR Forum, vol. 52, pp. 40–51 (2019)
[liu2018stamp] Liu, Q., Zeng, Y., Mokhosi, R., Zhang, H.: STAMP: Short-term attention/memory priority model for session-based recommendation. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1831–1839 (2018)
[ludewig2018evaluation] Ludewig, M., Jannach, D.: Evaluation of session-based recommendation algorithms. User Modeling and User-Adapted Interaction 28, 331–390 (2018)
[ludewig2019performance] Ludewig, M., Mauro, N., Latifi, S., Jannach, D.: Performance comparison of neural and non-neural approaches to session-based recommendation. In: Proceedings of the 13th ACM Conference on Recommender Systems, pp. 462–466 (2019)
[ludewig2021empirical] Ludewig, M., Mauro, N., Latifi, S., Jannach, D.: Empirical analysis of session-based recommendation algorithms: A comparison of neural and non-neural approaches. User Modeling and User-Adapted Interaction 31, 149–181 (2021)
[makridakis2018statistical] Makridakis, S., Spiliotis, E., Assimakopoulos, V.: Statistical and machine learning forecasting methods: Concerns and ways forward. PLOS ONE 13(3), e0194889 (2018)
[pang2022heterogeneous] Pang, Y., Wu, L., Shen, Q., Zhang, Y., Wei, Z., Xu, F., Chang, E., Long, B., Pei, J.: Heterogeneous global graph neural networks for personalized session-based recommendation. In: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pp. 775–783 (2022)
[picard2021torch] Picard, D.: Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision. arXiv preprint arXiv:2109.08203 (2021)
[Rendle2022Krichene] Rendle, S., Krichene, W., Zhang, L., Koren, Y.: Revisiting the performance of iALS on item recommendation benchmarks. In: Proceedings of the 16th ACM Conference on Recommender Systems, pp. 427–435 (2022)
[rendle2019difficulty] Rendle, S., Zhang, L., Koren, Y.: On the difficulty of evaluating baselines: A study on recommender systems. arXiv preprint arXiv:1905.01395 (2019)
[shehzad2023everyone] Shehzad, F., Jannach, D.: Everyone's a winner! On hyperparameter tuning of recommendation models. In: 17th ACM Conference on Recommender Systems (2023)
[song2019session] Song, W., Xiao, Z., Wang, Y., Charlin, L., Zhang, M., Tang, J.: Session-based social recommendation via dynamic graph attention networks. In: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 555–563 (2019)
[Sun:19] Sun, F., Liu, J., Wu, J., Pei, C., Lin, X., Ou, W., Jiang, P.: BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 1441–1450 (2019)
[Zhu2020AreWeEvaluating] Sun, Z., Yu, D., Fang, H., Yang, J., Qu, X., Zhang, J., Geng, C.: Are we evaluating rigorously? Benchmarking recommendation for reproducible evaluation and fair comparison. In: Proceedings of the 14th ACM Conference on Recommender Systems, pp. 23–32 (2020)
[DBLP:conf/recsys/ValcarceBPC18] Valcarce, D., Bellogín, A., Parapar, J., Castells, P.: On the robustness and discriminative power of information retrieval metrics for top-n recommendation. In: Proceedings of the 12th ACM Conference on Recommender Systems, pp. 260–268 (2018)
[wang2022spatiotemporal] Wang, H., Zeng, Y., Chen, J., Zhao, Z., Chen, H.: A spatiotemporal graph neural network for session-based recommendation. Expert Systems with Applications 202, 117114 (2022)
[wang2019collaborative] Wang, M., Ren, P., Mei, L., Chen, Z., Ma, J., De Rijke, M.: A collaborative session-based recommendation approach with parallel memory modules. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 345–354 (2019)
[wang2020global] Wang, Z., Wei, W., Cong, G., Li, X.L., Mao, X.L., Qiu, M.: Global context enhanced graph neural networks for session-based recommendation. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 169–178 (2020)
[wegmeth2023effect] Wegmeth, L., Vente, T., Purucker, L., Beel, J.: The effect of random seeds for data splitting on recommendation accuracy. In: Perspectives on the Evaluation of Recommender Systems Workshop (PERSPECTIVES 2023), co-located with the 17th ACM Conference on Recommender Systems (2023)
[wu2019session] Wu, S., Tang, Y., Zhu, Y., Wang, L., Xie, X., Tan, T.: Session-based recommendation with graph neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 346–353 (2019)
[wu2020comprehensive] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32(1), 4–24 (2020)
[xia2021self] Xia, X., Yin, H., Yu, J., Shao, Y., Cui, L.: Self-supervised graph co-training for session-based recommendation. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 2180–2190 (2021)
[xu2022category] Xu, H., Yang, B., Liu, X., Fan, W., Li, Q.: Category-aware multi-relation heterogeneous graph neural networks for session-based recommendation. Knowledge-Based Systems 251, 109246 (2022)
[yu2020tagnn] Yu, F., Zhu, Y., Liu, Q., Wu, S., Wang, L., Tan, T.: TAGNN: Target attentive graph neural networks for session-based recommendation. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1921–1924 (2020)
[yuan2020future] Yuan, F., He, X., Jiang, H., Guo, G., Xiong, J., Xu, Z., Xiong, Y.: Future data helps training: Modeling future contexts for session-based recommendation. In: Proceedings of The Web Conference 2020, pp. 303–313 (2020)
[zhang2020learning] Zhang, Z., Wang, B.: Learning sequential and general interests via a joint neural model for session-based recommendation. Neurocomputing 415, 165–173 (2020)
[zhang2021fusion] Zhang, Z., Wang, B.: Fusion of latent categorical prediction and sequential prediction for session-based recommendation. Information Sciences 569, 125–137 (2021)
[zhang2021graph] Zhang, Z., Wang, B.: Graph neighborhood routing and random walk for session-based recommendation. In: 2021 IEEE International Conference on Data Mining (ICDM), pp. 1517–1522 (2021)
Source: arXiv:2312.16695v1 (http://arxiv.org/abs/2312.16695v1). F. Shehzad and D. Jannach, "Performance Comparison of Session-based Recommendation Algorithms based on GNNs". Primary category: cs.IR; also cs.AI. Submitted 27 December 2023.